Toronto, Ontario — In this electric and autonomous vehicle report, Aspen Aerogels, maker of insulating EV battery products, receives a $607.6 million loan from the United States Department of Energy to support the North American electric vehicle market; while researchers from North Carolina State University develop a technique to help self-driving vehicles better map 3D spaces using 2D images.
Aerogel aspirations
Aspen Aerogels, which makes insulating materials that can be layered inside EV batteries to prevent electrical fires and thermal runaway, has received a $607.6 million loan from the United States Department of Energy to build a new factory in Georgia to support the North American electric vehicle market.
According to a report from MIT Technology Review, once the new factory is fully functional, it could supply material for over two million EVs annually.
Aerogels, such as those Aspen Aerogels produces, are made with microscopic pockets of air that help insulate and maintain temperatures. The company has also won research grants from NASA to explore the use of its materials in spacesuits.
While other materials used to prevent thermal runaway can limit the energy efficiency of electric vehicle batteries, the benefit of aerogels, says the MIT Technology Review, is that they are light and so don’t limit energy density.
The company currently makes materials for EV batteries at its factory in Rhode Island.
Supplemental seeing
Researchers at North Carolina State University have developed a technique that allows artificial intelligence programs, with the help of a supplemental processing system, to more accurately map three-dimensional spaces from two-dimensional images.
Tianfu Wu, an associate professor of electrical and computer engineering at North Carolina State University and corresponding author of a paper on the technique, told Tech Xplore that “most autonomous vehicles use powerful A.I. programs called vision transformers to take 2D images from multiple cameras and create a representation of 3D space around the vehicle. Our technique, called Multi-view Attentive Contextualization (MvACon) is a plug-and-play supplement that can be used in conjunction with these existing vision transformer A.I.s to improve their ability to map 3D space.”
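The report doesn't detail MvACon's internals, but the "plug-and-play supplement" idea — refining each camera view's features by attending to a compact, shared context pooled across all views, without changing the shape of the data the base vision transformer expects — can be loosely sketched as below. This is an illustrative toy in NumPy, not the paper's method: the function name, the simple one-step clustering, and all dimensions are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_contextualization(view_feats, num_clusters=4, seed=0):
    """Toy sketch of a plug-and-play refinement: pool tokens from all
    camera views into a few context vectors (one crude k-means-style
    assignment step), let every token attend to that compact context,
    and add the result back as a residual. Input and output shapes
    match, so a downstream model could consume either unchanged."""
    views, tokens, dim = view_feats.shape
    flat = view_feats.reshape(-1, dim)          # all tokens, all views
    rng = np.random.default_rng(seed)
    centers = flat[rng.choice(len(flat), num_clusters, replace=False)]
    # assign each token to its nearest center, then average per cluster
    dists = ((flat[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = dists.argmin(1)
    context = np.stack([flat[labels == k].mean(0) if (labels == k).any()
                        else centers[k] for k in range(num_clusters)])
    # cross-attention: every token queries the compact shared context
    attn = softmax(flat @ context.T / np.sqrt(dim))   # (tokens, clusters)
    refined = flat + attn @ context                   # residual update
    return refined.reshape(views, tokens, dim)

# six cameras' worth of feature tokens, as in the systems tested
feats = np.random.default_rng(1).normal(size=(6, 16, 32))
out = attentive_contextualization(feats)
print(out.shape)  # → (6, 16, 32): same shape in, same shape out
```

The shape-preserving residual update is what makes such a module "plug-and-play": it can be dropped between an existing model's stages without retraining anything around it to accept new tensor shapes.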
The research team tested the technique with three leading vision transformers currently on the market, all of which rely on a set of six cameras to collect the 2D images they transform.
Tech Xplore reported that MvACon significantly improved the performance of all three vision transformers.
“Performance was particularly improved when it came to locating objects, as well as the speed and orientation of those objects,” Wu concluded.