At its GTC conference in the fall of 2022, Nvidia took the wraps off new robotics-related hardware and services for companies developing and testing machines in sectors such as manufacturing. Isaac Sim, Nvidia’s robot simulation platform, will soon be available in the cloud, the company said. And Nvidia’s Jetson family of system-on-modules is expanding with Jetson Orin Nano, a system designed for low-power robots.
Launched in open beta last June, Isaac Sim allows designers to simulate robots interacting with real-world mockups (think digital re-creations of warehouses and factory floors). Users can generate data sets from simulated sensors to train models destined for real robots, using synthetic data from batches of parallel, uniquely configured simulations to improve model performance.
It’s not necessarily just marketing bluster. Some research suggests that synthetic data can address many of the development challenges faced by companies trying to operationalize AI. MIT researchers recently found a way to classify images using synthetic data, and almost every major autonomous vehicle company uses simulation data to complement the real-world data collected from cars on the road.
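To make the idea concrete, here is a toy sketch (plain Python, not the Isaac Sim API) of the workflow described above: many uniquely seeded "simulations" with randomized conditions produce labeled synthetic data, a simple model is trained on the pooled output, and it is then evaluated on a held-out batch standing in for real-world data. All names and the nearest-centroid "model" are illustrative assumptions.

```python
import random

# Hypothetical stand-in for a simulator (NOT the Isaac Sim API): each
# "simulation" draws sensor-like readings for two object classes, with a
# randomized noise level mimicking domain randomization across parallel runs.
def simulate_batch(seed, n=50):
    rng = random.Random(seed)
    noise = rng.uniform(0.3, 0.8)  # per-simulation randomized conditions
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)
        # Class 0 readings cluster near 0.0, class 1 near 2.0.
        x = rng.gauss(2.0 * label, noise)
        data.append((x, label))
    return data

def train_centroids(batches):
    # Trivial "model": the average feature value per class.
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for batch in batches:
        for x, label in batch:
            sums[label] += x
            counts[label] += 1
    return {c: sums[c] / counts[c] for c in sums}

def predict(centroids, x):
    return min(centroids, key=lambda c: abs(x - centroids[c]))

# Train on many parallel, uniquely seeded simulations...
model = train_centroids(simulate_batch(seed) for seed in range(20))
# ...then evaluate on a separate batch standing in for real-world data.
test = simulate_batch(seed=999)
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
```

The point is the shape of the pipeline, not the model: varying conditions across many cheap simulated runs yields training data that generalizes to conditions no single run covered, which is the argument for synthetic data at scale.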
Nvidia says the upcoming release of Isaac Sim — which is available on AWS RoboMaker and Nvidia NGC, from which it can be deployed to any public cloud, and soon on Nvidia’s Omniverse Cloud platform — will include Nvidia cuOpt, a real-time fleet task-assignment and route-planning engine, for optimizing robot path planning.
“With Isaac Sim in the cloud… teams can be located around the world while sharing a virtual world where they can simulate and train robots,” Nvidia senior product marketing manager Gerard Andrews wrote in a blog post. “By running Isaac Sim in the cloud, developers are no longer tied to a powerful workstation to run simulations. Any device can set up, manage and view the results of simulations.”
Jetson Orin Nano
In March, Nvidia introduced Jetson Orin, the next generation of the company’s Arm-based single-board computers for edge computing. First in line was the Jetson AGX Orin, and the Orin Nano now expands the portfolio with more affordable configurations.
The Orin Nano delivers up to 40 trillion operations per second (TOPS) — the number of computing operations the chip can handle at full utilization — in the smallest Jetson form factor yet. It sits at the entry level of the Jetson family, which now includes six Orin-based production modules intended for a range of robotics and local, offline computing applications.
The Orin Nano comes in modules compatible with Nvidia’s previously announced Orin NX and supports AI application pipelines with a GPU based on Ampere, the architecture Nvidia launched in 2020. Two versions will be available in January starting at $199: the Orin Nano 8GB, which delivers up to 40 TOPS with configurable power from 7W to 15W, and the Orin Nano 4GB, which reaches up to 20 TOPS with power options from 5W to 10W.
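As a back-of-the-envelope comparison (not an Nvidia benchmark), dividing each configuration's quoted peak TOPS by its maximum configurable power gives a rough efficiency figure; real sustained throughput will vary with workload and power mode:

```python
# Quoted figures only: peak TOPS at each variant's upper power limit.
variants = {
    "Orin Nano 8GB": {"tops": 40, "watts_max": 15},
    "Orin Nano 4GB": {"tops": 20, "watts_max": 10},
}

# Rough peak efficiency per variant, in TOPS per watt.
perf_per_watt = {
    name: spec["tops"] / spec["watts_max"] for name, spec in variants.items()
}

for name, value in perf_per_watt.items():
    print(f"{name}: {value:.2f} TOPS/W at maximum configured power")
```

By this crude measure the 8GB variant (about 2.67 TOPS/W) edges out the 4GB variant (2.0 TOPS/W) at their respective power ceilings, though the lower floor of the 4GB model (5W) matters more for the battery-constrained robots this module targets.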
“More than 1,000 customers and 150 partners have embraced Jetson AGX Orin since Nvidia announced availability just six months ago, and Orin Nano will significantly expand this adoption,” Nvidia VP of embedded and edge computing Deepu Talla said in a statement. (Compared to the Orin Nano, the Jetson AGX Orin costs over a thousand dollars — needless to say, a substantial delta.) “With an order of magnitude increase in performance for millions of edge AI and [robotics] developers, Jetson Orin Nano is setting a new standard for edge AI and entry-level robotics.”