German-engineered bot gets an AI upgrade for potential use in space colonization
When humans are finally ready to relocate civilization to Mars, they won’t be able to do it alone. They’ll need trusted specialists with encyclopedic knowledge, composure under pressure and extreme endurance — droids like Justin.
Built by the German space agency DLR, such humanoid bots are being groomed to build the first Martian habitat for humans. Engineers have been refining Justin's physical abilities for a decade; the mech can handle tools, take and upload photos, catch flying objects and navigate obstacles.
Now, thanks to new AI upgrades, Justin can think for itself. Unlike most robots, which have to be programmed in advance and given explicit instructions for nearly every movement, this bot can autonomously perform complex tasks — even those it hasn’t been programmed to do — on a planet’s surface while being supervised by astronauts in orbit.
Object recognition software and computer vision let Justin survey its environment and undertake jobs such as cleaning and maintaining machinery, inspecting equipment, and carrying objects. In a recent test, Justin fixed a faulty solar panel in a Munich lab in minutes, directed via tablet by an astronaut aboard the International Space Station. One small chore for Justin, one giant leap for future humankind.
AI finds an unexpected way to defeat classic video game
From The Verge:
AI research and video games are a match made in heaven. Researchers get a ready-made virtual environment with predefined goals they can control completely, and the AI agent gets to romp around without doing any damage. Sometimes, though, these agents do break things.
Case in point is a paper published this week by a trio of machine learning researchers from the University of Freiburg in Germany. They were exploring a particular method of teaching AI agents to navigate video games (in this case, desktop ports of old Atari titles from the 1980s) when they noticed something odd. The software they were testing had found a bug in the arcade classic Q*bert that allowed it to rack up near-infinite points. …
[The Verge's article includes a video of the bug, in which the cubes start flashing.]
It’s important to note, though, that the agent is not approaching this problem in the same way that a human would. It’s not actively looking for exploits in the game with some Matrix-like computer vision. The paper is actually a test of a broad category of AI research known as “evolutionary algorithms.” …
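The excerpt doesn't spell out the researchers' specific method, but the general idea behind evolutionary algorithms is easy to sketch: maintain a population of candidate solutions, mutate them, and keep whichever scores best on a fitness function (in the paper's setting, the game score itself). The toy below is a hypothetical illustration, not the Freiburg team's algorithm; the `score` function is a stand-in for an actual game.

```python
import random

def evolve(fitness, dim=5, pop_size=20, generations=50, sigma=0.1, seed=0):
    """Minimal evolutionary search: mutate the current best individual
    to form each new population, and keep the top scorer (elitism)."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(dim)]
    for _ in range(generations):
        # Produce offspring by adding Gaussian noise to the best genome.
        pop = [[g + rng.gauss(0, sigma) for g in best] for _ in range(pop_size)]
        pop.append(best)  # elitism: never discard the best solution so far
        best = max(pop, key=fitness)
    return best

# Toy stand-in for "game score": higher when the genome is near a target.
TARGET = [0.5, -0.3, 0.8, 0.0, -0.6]
def score(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

solution = evolve(score)
```

Because the search optimizes only the score, it will happily exploit any quirk of the fitness function, which is exactly how an agent can stumble onto a point-farming bug no human anticipated.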
AI destroys lawyers in legal competition
From Popular Mechanics:
Lawyers are pretty good at law-related activities. It’s their job, after all. But for at least one of those activities, lawyers are only second-best. A recent document-analyzing competition between lawyers and artificial intelligence ended with the AI as the clear victor.
The competition was managed by legal AI platform LawGeex, which trained its AI to read and interpret complex legal documents. LawGeex pitted that AI against law professors from Stanford, Duke, and the University of Southern California in a competition to read and interpret a collection of five non-disclosure agreements.
Both the humans and the AI were given four hours to read the contracts and identify over 30 legal terms and issues, including arbitration and confidentiality agreements. Participants were then scored on the accuracy of their assessments.
While the human lawyers managed a respectable 85 percent success rate, they were outperformed by the AI, which scored 95 percent, a full ten percentage points better. Even more impressively, the human lawyers averaged 92 minutes to analyze the contracts, while the AI did it in only 26 seconds.
See more IT & Tech innovation stories and let us know about the interesting technology stories you come across.