Microsoft and Team Gleason, the nonprofit organization founded by NFL player Steve Gleason, today launched Project Insight to create an open dataset of facial imagery of people with amyotrophic lateral sclerosis (ALS). The organizations hope to foster innovation in computer vision and broaden the potential for connectivity and communication for people with accessibility challenges.
Microsoft and Team Gleason assert that existing machine learning datasets don’t represent the diversity of people with ALS, a condition that affects as many as 30,000 people in the U.S. As a result, computer vision models often fail to accurately identify people with ALS, whose appearance can be altered by breathing masks, droopy eyelids, watery eyes, and the dry eyes caused by medications that control excessive saliva.
Project Insight will investigate how to use data and AI with the front-facing camera already present in many assistive devices to predict where a person is looking on a screen. Team Gleason will work with Microsoft’s Health Next Enable team to gather images of people with ALS looking at their computer so it can train AI models more inclusively. (Microsoft’s Health Next team, which is within its Health AI division, focuses on AI and cloud-based services to improve health outcomes.) Participants will be given a brief medical history questionnaire and be prompted through an app to submit images of themselves using their computer.
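One common way to frame the kind of gaze prediction described above is as calibration-based regression: extract features from the camera image of the eyes, then learn a mapping to on-screen coordinates from moments when the user looked at known targets. The sketch below illustrates that framing only; the feature layout, synthetic data, and choice of a ridge regressor are assumptions for illustration, not Microsoft's or Team Gleason's actual approach.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for eye-region features extracted from camera frames
# (e.g. pupil and eyelid landmark coordinates), 8 values per frame.
features = rng.normal(size=(500, 8))

# Hypothetical calibration targets: where the user was actually looking,
# generated here as a noisy linear function of the features.
true_map = rng.normal(size=(8, 2))
gaze_xy = features @ true_map + rng.normal(scale=0.05, size=(500, 2))

# Fit one ridge regressor per screen axis. In a real system, calibration
# pairs would come from prompting the user to look at known on-screen dots.
model = Ridge(alpha=1.0).fit(features, gaze_xy)
predicted = model.predict(features[:5])
print(predicted.shape)  # (5, 2): predicted (x, y) screen positions
```

The point of collecting a more diverse dataset is precisely that the feature-extraction step upstream of a model like this breaks down when faces in the training data don't resemble the users.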
“ALS progression can be as diverse as the individuals themselves,” Team Gleason chief impact officer Blair Casey said. “So accessing computers and communication devices should not be a one-size-fits-all. We will capture as much information as possible from 100 people living with ALS so we can develop tools for all to effectively use.”
Microsoft and Team Gleason estimate that the project will collect and share 5TB of anonymized data with researchers on data science platforms like Kaggle and GitHub.
“There is a significant lack of disability-specific data that is
A camera or a computer: How the architecture of new home security vision systems affects choice of memory technology
A long-forecast surge in the number of products based on artificial intelligence (AI) and machine learning (ML) technologies is beginning to reach mainstream consumer markets.
It is true that research and development teams have found that, in some applications such as autonomous driving, the innate skill and judgement of a human is difficult, or perhaps even impossible, for a machine to learn. But while in some areas the hype around AI has run ahead of the reality, with less fanfare a number of real products based on ML capabilities are beginning to gain widespread interest from consumers. For instance, intelligent vision-based security and home monitoring systems have great potential: analyst firm Strategy Analytics forecasts growth in the home security camera market of more than 50% in the years between 2019 and 2023, from a market value of US$8 billion to US$13 billion.
The development of intelligent cameras is possible because one of the functions best suited to ML technology is image and scene recognition. Intelligence in home vision systems can be used to:
– Detect when an elderly or vulnerable person has fallen to the ground and is potentially injured
– Monitor that the breathing of a sleeping baby is normal
– Recognise the face of the resident of a home (in the case of a smart doorbell) or a pet (for instance in a smart cat flap), and automatically allow them to enter
– Detect suspicious or unrecognised activity outside the home and trigger an intruder alarm
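At the simplest end of the last use case, change detection can be done by comparing successive frames and measuring how much of the image has changed. The sketch below is a minimal, illustrative version of that idea using plain NumPy arrays as frames; the frame sizes, pixel values, and alarm threshold are made up for the example and do not reflect any vendor's system.

```python
import numpy as np

def motion_score(prev_frame: np.ndarray, frame: np.ndarray,
                 threshold: float = 25.0) -> float:
    """Fraction of pixels whose brightness changed by more than `threshold`."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > threshold).mean())

# Two illustrative 8-bit grayscale frames: a static background, then a
# bright region appearing in one corner (a stand-in for an intruder).
prev_frame = np.full((120, 160), 40, dtype=np.uint8)
frame = prev_frame.copy()
frame[20:60, 30:90] = 200

score = motion_score(prev_frame, frame)
print(score > 0.05)  # True: a score above a tuned threshold could raise an alert
```

Real intelligent cameras go well beyond this, running trained recognition models on the ISP to distinguish a person from a swaying tree, but the frame-differencing step shows why these systems need both fast memory access and per-frame compute.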
These new intelligent vision systems for the home, based on advanced image signal processors (ISPs), are in effect function-specific computers. The latest products in this category have adopted computer-like architectures which depend for
Imagine being able to embark on a real-time computer vision project in a few hours, building a traffic control system, a warehouse monitoring system, or an in-store point-of-sale optimization system without writing any code. Like the apps that are built on top of smartphone operating systems, these smart computer vision projects can use a multitude of proprietary and vendor algorithms. Because they are built on top of BrainFrame, an operating system for computer vision that comes with a Smart Vision AI Developers Kit, they take a fraction of the time to build compared with other computer vision projects.
BrainFrame is one of the core products of Aotu.ai, started by two founders, Stephen Li and Alex Thiel. Stephen applied his experience building out the Android operating system to BrainFrame. In collaboration with leading chipmakers such as Intel and Nvidia, BrainFrame is positioning itself to take center stage as more developers rush into the space to experiment with computer vision applications in a variety of industries.
Recently, BrainFrame received the Nvidia Metropolis Certification, and Aotu in partnership with AAEON and Intel, just announced the release of its Smart Vision AI Developers Kit on the Intel AI Platform for IoT.
Stephen Li, CEO and founder of Aotu.ai, says, “Aotu.ai initially focused on developing robotic solutions. As we completed early robotic projects, we found computer vision was at the heart of what we were building and that you need great performance. We decided to figure out how to achieve this great performance without writing a lot of code, which led to the creation of BrainFrame. We then realized the need for a developer’s toolkit to help developers customize and deploy computer vision projects quickly.”
Exer Labs has raised $2 million in funding and unveiled its AI and computer vision Exer Studio app for the Mac. Exer Studio captures your movements for coaching advice and offers Peloton-style leaderboards for workouts.
The Denver-based fitness startup captures your movements with your laptop’s camera and evaluates your form. You can share your results with friends, fitness coaches, or others and see where you rank on the leaderboards, motivating you to work harder or faster.
CEO Zaw Thet said in an interview with VentureBeat that Exer relies on edge-based AI (meaning it runs on your device’s own computing power rather than in the cloud) and computer vision to power its motion coaching platform. It offers real-time audio and visual feedback on almost any type of human motion via a Mac (and its camera), without having a human in the loop. The mission is to help people move, train, and play better. Coaches can use the app for classes and see who needs help.
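One small piece of a motion-coaching pipeline like this is counting repetitions from a pose signal, such as a joint angle tracked over time. The sketch below shows one generic way to do that with a simple state machine over an elbow-angle trace; the signal, angle thresholds, and function are invented for illustration and are not Exer's implementation.

```python
import numpy as np

def count_reps(angle_series: np.ndarray, low: float = 90.0,
               high: float = 160.0) -> int:
    """Count repetitions as full low -> high transitions of a joint angle."""
    reps = 0
    state = "down" if angle_series[0] < low else "up"
    for angle in angle_series:
        if state == "up" and angle < low:
            state = "down"          # arm bent: bottom of the rep
        elif state == "down" and angle > high:
            state = "up"            # arm extended: rep completed
            reps += 1
    return reps

# Illustrative elbow-angle trace (degrees) for three simulated push-ups,
# oscillating between roughly 80 and 170 degrees.
t = np.linspace(0, 3 * 2 * np.pi, 300)
angles = 125 + 45 * np.cos(t)
print(count_reps(angles))  # 3
```

Hysteresis (the gap between the `low` and `high` thresholds) is what keeps noisy angle estimates from a webcam-based pose tracker from being double-counted as extra reps.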
“Gyms have closed and are having trouble opening back up,” Thet said. “There are more than 300,000 professionals who aren’t able to train people in person. They have switched to streaming workouts, but it’s hard to keep people engaged on Zoom.”
The company has now raised $4.5 million to date. Investors in the latest round include GGV, Jerry Yang’s AME Cloud Ventures, Morado Ventures, Range VC, Service Provider Capital, Shatter Fund, MyFitnessPal cofounders Mike Lee and Albert Lee, and existing investors Signia Venture Partners and former Zynga executive David Ko.
Fitness in the pandemic
Above: Exer Studio can track movements for people doing workouts.
Image Credit: Exer
Thet said the Mac app uses the camera to capture your movements, so it knows how many repetitions you’ve done and whether your form is correct. It compares your results to a Peloton-style leaderboard, and you