>> Mark Boyden: All right, my friends, welcome to Knowbility's Be a Digital Ally.

This month we are going to see the UT Austin EcoCAR team's vision for inclusive mobility, what they call AlertDrive. Thank you for joining us. I'm going to go through a couple of quick things about Knowbility and this program.

So first of all, Be a Digital Ally. Our goals are to cover the basic skills and principles behind accessible digital design and to make digital content accessible to people with disabilities. Our audience is you: content creators of any skill level, especially those a little newer to accessibility. Knowbility is an award-winning leader in digital accessibility. Our mission is to make an inclusive digital world for people with disabilities.

We were founded in 1999, so that makes us just over 27 years old. We are a 501(c)(3) nonprofit based right here in Austin, Texas, and we serve a global audience.

Oops, one too many. So one of the stalwarts of Knowbility is our various community programs. We have what's called AIR, the Accessibility Internet Rally. It's kind of like a hackathon: we bring together web developers who want to learn how to build accessible websites.

We pair them up with nonprofits who need sites, and in a fun-filled competition of about eight weeks, they build a website and learn about accessibility. Then we judge the sites and give out awards. That starts in June and rolls through the fall.

AccessU, that's our conference; I'll tell you more about that in a moment. It's coming up in May. AccessWorks is a program where we hire people with disabilities to do real-world user testing

for clients who need that type of testing. We have a program for education, K-12, actually kind of K-16 now too, because we're working with universities as well to help them build digital accessibility programs within their learning experiences. And then this program, Be a Digital Ally.

I'm getting a bounce there. All right. John Slatin AccessU. This is another one of our hallmarks. It's our annual training conference, with very deep hands-on training throughout the four days of instruction. It is a hybrid event, online and live in person in Austin, Texas, at St. Edward's University. For more information, you can visit knowbility.org/accessu. We also provide some services, which support us beyond the donations we receive and help ingrain accessibility within organizations. We provide accessibility testing and auditing for both websites and smartphone apps.

We work with teams to provide leadership and strategic consulting, helping them build training programs and in-house accessibility teams.

AccessWorks, which I told you about, is also a service to clients. And we have an accessibility help desk: people who want to call us periodically for assistance can buy blocks of hours to get that kind of help.

All right. Finally, for today's Q&A: we're going to hold our questions till the end. We'll do a presentation of a little over half an hour about this program, and you can put your questions in the Q&A, which is found at the bottom of your screen in the menu items.

Later on, when we get to the Q&A time, if you'd like to ask in person, you can raise your hand and I can invite you to turn on your microphone and ask live. You can also type questions in the chat; I'll be monitoring it, but it's preferred to use the Q&A if you can.

So now, without delay, I'm going to turn it over to our team who will introduce themselves to you and tell you a little bit about this program that seems very exciting to me.

Take it away, gents.

Farhaan Shroff: Can everyone see the screen? All right. Hello, everyone. Welcome to our presentation today on AlertDrive. We are part of the University of Texas at Austin's EcoCAR team. We are extremely grateful to Knowbility for giving us the privilege to present as part of their Be a Digital Ally seminar for March 2026.

So let's first start off with who we are, what EcoCAR is, and how Knowbility is involved. Then we'll segue into mapping the accessibility gap, what AlertDrive is, and how it works; the inclusive design process, including how we plan to use stakeholder feedback and how we plan to finalize our minimum viable product; and a look forward at what is to come for this product. Questions and answers will be saved for the end.

So first off, let's meet the presenters today. I'm Farhaan Shroff. I'm an undergraduate mechanical engineer at the University of Texas at Austin. Hook 'em Horns. I am the mobility challenge lead for UT EcoCAR. I am joined today by…

Aniketh Subramanian: And my name is Aniketh Subramanian. I am also a mechanical engineering major at the University of Texas at Austin, and I am currently the backend development lead for AlertDrive.

Vincent Wu: And my name is Vincent Wu. I'm an electrical and computer engineer at the University of Texas at Austin, and I am the front-end development lead for UT EcoCAR.

Farhaan Shroff: So what is the EcoCAR EV Challenge? The EcoCAR EV Challenge is North America's premier collegiate engineering competition; it tasks 15 universities with engineering connected and autonomous systems for the 2023 Cadillac Lyriq.

Essentially, we're making a self-driving car. My team focuses on accessibility technology; other teams focus on the motor, one-pedal drive, connected vehicle systems, and so on.

We focus on bridging real-world transportation equity and accessibility gaps through user-centered design.

We are graciously sponsored by the U.S. Department of Energy, General Motors, and MathWorks.

This is our team at UT EcoCAR that focuses just on the development of AlertDrive.

So at UT EcoCAR, we have named our Cadillac Lyriq LEVA, which stands for Longhorn Electric Vehicle for All. As part of the Mobility Challenge, we really, really focus on the "for all" aspect, especially the all-people aspect. As we'll see throughout this presentation, the automotive industry has a lot of accessibility gaps when it comes to people with hearing impairments, and AlertDrive aims to help drivers with those conditions.

So how is Knowbility involved? We're united by the mission to build inclusive, accessible technology. We utilize the AccessWorks network to drive user-centric design through feedback collection.

Lastly, we host outreach events like this one. We want to give a big thank you to Knowbility for their help over these two years in creating AlertDrive.

So whenever you're given a problem in engineering, you always want to start with the engineering design process. This simple diagram is how we structured our entire approach to AlertDrive. We first start off by identifying a problem, gathering information, and identifying possible solutions; then we create our prototype. Right now we're in the iteration section, steps five and six.

You will notice how five and six have a loop between them. That is the iteration cycle: we're essentially always collecting feedback to improve our design. Then, at competition, we communicate our design to industry judges, and from there, they identify more problems, and the cycle continues. We will use the engineering design process as a way to show how we developed this product throughout our presentation.

So first, let's map this accessibility gap. Modern driving relies heavily on complex auditory processing: sirens, honks, reversing sounds. Those are all indicators of potential danger that rely on the sense of hearing.

Current vehicle interfaces lack integrated visual or haptic equivalents for safety alerts. Think of your seatbelt chime, right? That's purely an audible alert telling you that you haven't fastened your seatbelt.

Same with lane departure warnings; they often default to an audio-only chime.

The scale of this gap: more than 50 million Americans have hearing loss, and it's the third most common chronic condition after arthritis and heart disease. There was a 28% increase from 1990 to 2019, and hearing loss now affects 72 million Americans; 2.5 billion people globally are projected to have hearing impairments by 2050. Our stakeholders are an enormous share of the driving public, and the vehicle industry has largely left them to adapt on their own.

So who is AlertDrive designed for? Texas drivers with moderate hearing loss and above. They are required to have hearing aids and outside rearview mirrors, but those don't do nearly enough, as many still struggle to detect honks and sirens.

Texas law has no defined decibel threshold for hearing impairment in driving. This means there are minimal requirements to alert insurers, to receive medical checkups on driving ability, or to develop systems to help those with hearing impairments.

So last year, we surveyed deaf and hearing-impaired drivers, and we found that 40% of them rated their interest in a system that detects honks a 5 out of 5.

Multiple people reported near misses that almost led to accidents.

Accuracy first: a lot of our survey respondents wanted us to prioritize accuracy before implementing new features.

This is all done through the AccessWorks database.

So now that we have identified a problem in the automotive industry, we sought to gather information. We will now segue into identifying possible solutions and creating a prototype.

So the prototype that we developed was AlertDrive.

AlertDrive is an acoustic-based driver assistance system that helps hearing-impaired drivers detect honks. We use machine learning algorithms to detect honks on the road. Essentially, there's a microphone mounted on the car that is constantly listening for any honk, and if the computer hears a honk, it will display that to the driver.
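To make the idea concrete, here is a minimal sketch of that listen, classify, and display loop. This is not the team's code: a band-energy check stands in for their machine learning classifier, and the window length and frequency band are illustrative assumptions.

```python
# Minimal sketch of the listen -> classify -> display loop (illustrative only).
# A band-energy heuristic stands in for the team's ML classifier.
import numpy as np
import sounddevice as sd  # pip install sounddevice

SAMPLE_RATE = 44_100      # the talk cites roughly 44,000 samples per second
WINDOW_SECONDS = 0.5
HONK_BAND = (300, 1200)   # assumed range for typical car-horn fundamentals, Hz

def looks_like_honk(window: np.ndarray) -> bool:
    """Stand-in classifier: is the horn band unusually loud?"""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1 / SAMPLE_RATE)
    in_band = (freqs >= HONK_BAND[0]) & (freqs <= HONK_BAND[1])
    return spectrum[in_band].mean() > 4 * spectrum.mean()

while True:
    audio = sd.rec(int(SAMPLE_RATE * WINDOW_SECONDS),
                   samplerate=SAMPLE_RATE, channels=1)
    sd.wait()                                      # block until the window fills
    if looks_like_honk(audio[:, 0]):
        print("Honk detected -> update the HMI")   # display hook goes here
```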

We've added a haptic pad as an additional alert, based on feedback we've collected.

So now I'll hand it off to Vincent to talk about the user experience.

Vincent Wu: Awesome. As mentioned, my name is Vincent, and I am the front-end lead for UT EcoCAR. Now I'll introduce to y'all our cabin features. This is essentially what drivers would notice as most different when driving a car that has AlertDrive. The biggest feature is the HMI display. It's a visual indication that gives users a bird's-eye view of the car and its surroundings, and if a honk is detected with our microphones, red marks appear on that bird's-eye view to give drivers a better idea of what's going on around them.

In addition, we have our haptics feature: drivers will have a haptic seat that vibrates to give them another alert that they should be aware of some traffic trying to signal them.

And finally, we added some more customizations so we can accommodate as many drivers as possible, because not everyone may want the haptic feature, or they may prefer different controls.

This is the setup for what the driver would see. As you notice, our visual alert display is essentially a small tablet mounted on the console. Now, you might be thinking that many modern cars already have a 13-inch or 7-inch touchscreen in the console where you can run Apple CarPlay and other features like that. However, due to General Motors' policies for the EcoCAR competition, we're not allowed to fiddle around with those technologies or the dashboard that you see. So essentially, we decided to mount an external display that will show our front end with the bird's-eye view.

And in addition, we have our haptics pad on the driver's seat, as mentioned before, to give drivers a better sense of whenever any alerts or notifications need to be sent to them.

Now, let's dig more into our front end. When a driver is busy looking at the road, we want them to have access to a quick, glanceable view, similar to when you're parallel parking, or just parking somewhere, and you don't want to be too close to the car next to you. So this is a radar-style bird's-eye view with 360-degree LED direction indicators, where the data comes in through the microphone array.

As you can see here, this demo shows a car at the back left of the vehicle honking at them. With haptics turned on, as indicated, the user will not only be given this visual indication; the haptics pad will also vibrate, letting them know that some traffic is coming. Ultimately, this would lead the driver to check their blind spots and be more aware of that specific area around the vehicle.
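As a rough illustration of that bird's-eye alert, here is a tiny self-contained sketch that draws a car top-down and lights up a red wedge at the honk's bearing. It uses plain tkinter so it runs anywhere; the geometry, colors, and bearing convention are our assumptions, not the team's actual front end.

```python
# Sketch of a radar-style honk indicator (illustrative, not the team's UI).
import tkinter as tk

def show_alert(bearing_deg: float) -> None:
    """Draw the car top-down with a red wedge at the given bearing.
    Bearing uses tkinter's arc convention: degrees counterclockwise
    from 3 o'clock, with 'up' on screen treated as the car's front."""
    root = tk.Tk()
    root.title("AlertDrive demo (sketch)")
    canvas = tk.Canvas(root, width=300, height=300, bg="black")
    canvas.pack()
    canvas.create_oval(50, 50, 250, 250, outline="gray")        # radar ring
    canvas.create_rectangle(140, 120, 160, 180, fill="white")   # the car
    canvas.create_arc(50, 50, 250, 250, start=bearing_deg - 20,
                      extent=40, fill="red", outline="red")     # honk zone
    root.mainloop()

show_alert(225.0)   # e.g., a honk from the rear-left of the vehicle
```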

Now, let's dig more into how our whole front end works.

So, we used a Raspberry Pi, which is essentially a single-board computer. You can think of it as a tiny, green, deck-of-cards-sized pocket computer that's the brain of our entire front end. It might look really small, but it has many of the same capabilities as the laptop you're probably using right now to tune into this webinar.

This computer has a number of ports, as you see on the top, like the GPIO pins. Those are essentially the connectors, like plugging a USB-C cable into your computer to charge your phone. Those are the plugs where the data transfers from our backend into the Raspberry Pi. And the Raspberry Pi is basically the brain, so it listens to these requests and then decides what happens next. The data goes in through the pins and then outputs through the HDMI cable onto the display.

Next, we have the control panel, which is done through a breadboard and a switch. You can think of a breadboard as a testing mat that lets us plug components like wires and LEDs together without glue or soldering. This is important for testing because it's more feasible, cheaper, and so much faster than buying the switches and soldering them all.

As you can see here, we used a switch to simulate the microphones getting a signal, as if someone were honking at you. The switch sends a message to the Raspberry Pi, essentially saying: hello, someone is signaling you. And that signal is customizable because it's a whole breadboard, so we're not limited to just one switch; we can add multiple switches to truly simulate what's going on in the entire system design.

And lastly, the output is what the driver will see. As you can see here, we have a monitor connected through that HDMI cable, as mentioned, which shows the front-end display: the radar demo you saw previously, where the user can press a button and see the safety view change on the bird's-eye display.

In addition, we have an LED light that helps indicate whenever a honk is detected. For this testing setup, whenever the switch is pressed, the LED turns on to simulate someone behind you trying to communicate with the driver. That information is also transferred to the front-end display, where the driver will see the message indicating that someone is trying to signal them.

And now let's put all of these together. So far, we were able to test the logic of the alert indicator, which is essentially ensuring that the algorithms and all the logic work: the switch pushed down indicates that the LED turns on, and if the switch is released, the LED is off; basic logic like that.
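That switch-to-LED test logic is simple enough to show in a few lines. Here is a minimal sketch using the gpiozero library that ships with Raspberry Pi OS; the pin numbers are assumptions for illustration, not the team's actual wiring.

```python
# Sketch of the alert-indicator test logic: switch down -> LED on,
# switch released -> LED off. Pin numbers are illustrative.
from gpiozero import LED, Button
from signal import pause

switch = Button(2)   # breadboard switch standing in for a honk detection
led = LED(17)        # indicator LED simulating the alert

switch.when_pressed = led.on      # switch pushed down: LED turns on
switch.when_released = led.off    # switch released: LED turns off

pause()   # keep the script alive, waiting for GPIO events
```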

In addition, we ensure that the data transfer gets through from the breadboard to the Raspberry Pi to the front end.

And lastly, we ensured that the LED is connected to the correct toggle, so it stays accurate whenever the switch is pressed. In terms of our next steps, we'll be connecting the haptic seat with the power converter, so that not only will the LED turn on, but the haptic seat will also vibrate.

And we'll finally replace the switch input with the backend toggle. Moving on from the testing setup, we'll actually implement the backend's microphone array input in place of the switch, to simulate what's really going on.

Aniketh Subramanian: Sweet! Thank you so much, Vincent, for explaining the full front-end architecture of AlertDrive. As a review, there are two parts to AlertDrive, right? You have the front end, which is what the user interacts with, and you have the back end, where all of the detection and essentially all of the processing happens.

I lead the backend development, which means I do a lot of software development, and this is the current system architecture. Now, there is a lot going on, but there are essentially four main components that everyone should keep in mind as I explain the next couple of slides. First off, you have your microphone array, you have your Jetson Orin, you have your Raspberry Pi, and then you have your visuals and your haptics. As long as you know these four components, essentially the entire system will fall into place.

So I like to break the system down in a more digestible way, and I like to think about it like the human body. I like to think of the microphones as the ears of the system, the Jetson Orin as the brain, the wires and Robot Operating System as the nerves, and finally, the visuals and haptics as the mouth: the way to communicate back to the driver.

So, to start off the analogy, the microphones map to the ears. Currently, we use something called the UMA-8 DSP microphone, which is a unit that actually contains eight smaller microphones arranged in a circle. This is called a microphone array. It continuously samples sounds around the car almost 44,000 times per second, and this microphone is specifically designed for spatial audio capture, which is exactly what we're trying to do for AlertDrive.

So not only do we need these microphones to capture sound around the car, we need to see which sound hits which microphone first, so we can calculate which direction it's coming from.
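The underlying math here is time-difference-of-arrival: if you know how much later a sound reached one microphone than another, you can solve for the angle. Below is a simplified two-microphone sketch of that calculation; the spacing is an assumed value, and the real eight-microphone array would combine several such pairs.

```python
# Direction-of-arrival from two mics via cross-correlation (simplified sketch).
import numpy as np

SPEED_OF_SOUND = 343.0    # m/s in air at room temperature
MIC_SPACING = 0.05        # assumed 5 cm between the two mics
SAMPLE_RATE = 44_100      # samples per second, per the talk

def bearing_radians(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the angle of a sound from two synchronized mic signals."""
    corr = np.correlate(left, right, mode="full")
    lag_samples = corr.argmax() - (len(right) - 1)   # delay between channels
    delay = lag_samples / SAMPLE_RATE                # delay in seconds
    # Far-field geometry: delay = spacing * cos(angle) / speed_of_sound
    cos_angle = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.arccos(cos_angle))
```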

And the analogy I like to make is: just like your two ears work out a sound's direction and what it is, the microphones are used to analyze the frequencies of the sounds and their direction.

Next, I'd like to talk about the Orin. This is a super, super cool computer. The Orin is what's called an embedded AI computer, from the same family of NVIDIA hardware that large language models like ChatGPT and Claude are trained on; those chatbots use thousands of the data-center-class versions of these chips to train their models. So this thing is super, super powerful, and it's only about the size of a Rubik's Cube, but it runs our entire software stack, including classification and localization.

Now, AlertDrive has two main backend modes: classification and localization. Classification is how the system detects whether there is a honk within our audio sample, and localization is: okay, if there is a honk, where is it coming from? Essentially, our Orin is able to do all of that in almost human reaction time, and to give you a sense of how powerful this computer is, it can compute almost…

Next, I like to think about the wires and ROS as the nerves. To start off with the wires: wired connections are essentially how data is transferred between all of the components I just talked about. There's no way your microphones, the Orin, and the HMI can talk to each other without some sort of data being transferred through connections, right? That's the wires. But I'd like to talk a little more about ROS. ROS stands for Robot Operating System, and you can think of it like the wires inside your system: not physical wires, but virtual wires, in a way. Essentially, ROS is a platform created by robotics researchers years ago, and it helps connect a lot of software components in projects like these, where there are a lot of different things happening.

So, the wires are essentially the hardware, and ROS is essentially the software. The analogy I like to make is that your nervous system carries signals from your ears to your brain and from your brain to your mouth, and in the same way, ROS connects the microphones to the Orin and the Orin to the HMI.
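To show what those "virtual wires" look like in practice, here is a minimal ROS 2 publisher in Python (rclpy). The topic name and message type are our assumptions for illustration, not the team's actual interfaces.

```python
# Sketch: publishing a honk bearing over ROS 2 so other nodes (HMI, haptics)
# can subscribe to it. Topic and message type are illustrative.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32   # the honk's bearing, in degrees

class HonkPublisher(Node):
    def __init__(self):
        super().__init__("honk_detector")
        self.pub = self.create_publisher(Float32, "honk_bearing", 10)

    def announce(self, bearing_deg: float) -> None:
        msg = Float32()
        msg.data = bearing_deg
        self.pub.publish(msg)    # HMI and haptic nodes receive this

def main():
    rclpy.init()
    node = HonkPublisher()
    node.announce(225.0)         # e.g., honk detected at the rear-left
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```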

And lastly, the visuals and haptics I like to think of as the mouth. Like Vincent explained before, the HMI is a small display mounted in the driver's vision that shows an arrow toward the honk's direction, and the haptic pad is another way to alert the driver in case of any honk. The analogy is: your brain processes a sound, and your mouth, or even body language, communicates it, saying, hey, look out, there's something happening, or there is a danger occurring.

And so lastly, I wanted to put it all together in a bit of a scenario. Imagine that a car honks 20 feet to your left. With AlertDrive, within 300 milliseconds, a microphone hears it and sends that signal to the Jetson, which classifies it as a honk and calculates where it's coming from. From there, the wires carry that decision to the HMI, where the HMI displays the arrow along with the haptic vibration.
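That scenario maps naturally onto a small pipeline. Here is a self-contained sketch of the flow; every function is a labeled stand-in for the real component (the classifier, the localizer, the ROS hop to the HMI and haptics).

```python
# End-to-end flow of the scenario above, with stand-ins for each real stage.
import numpy as np

def classify_honk(window: np.ndarray) -> bool:
    return window.std() > 0.1        # stand-in for the Orin's ML classifier

def localize_honk(window: np.ndarray) -> float:
    return 270.0                     # stand-in bearing: "20 feet to your left"

def alert_driver(bearing_deg: float) -> None:
    # In the real system this travels over ROS to the Pi, which updates the
    # HMI arrow and pulses the haptic pad.
    print(f"HMI arrow at {bearing_deg} degrees; haptic pad vibrates")

rng = np.random.default_rng(0)
window = rng.normal(0.0, 0.2, 22_050)   # fake half second of audio at 44.1 kHz
if classify_honk(window):
    alert_driver(localize_honk(window))
```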

Farhaan Shroff: Sorry about that, I didn't know I was muted.

So now that we've identified a possible solution to the problem and created a prototype, we want to segue into evaluating and testing this product and refining it.

That brings in the inclusive design process. Originally, we planned to only do visual alerts.

However, through surveys through the AccessWorks database, we concluded that 80% of respondents wanted haptics.

So then we decided to add a new feature: designing a haptic pad to fit over the seat. One thing we always have to consider is the fact that not everyone is the same. Not everyone may like where the haptic pad is placed along their back, so we have to make sure it's easy to remove and adjust. And some people may not want it at all, right? 20% of people didn't want it, and it should be easy for them to take it off as well.

Users will be able to control the intensity of the vibrations, and the haptics let the driver look at the road instead of the tablet. That was one of the pieces of feedback we got from our surveys, where people said: do you expect me to look at the tablet while I'm driving? I'm supposed to be focused on the road. A haptic pad would help me focus on the road and still get a notification of a danger. That was our big aha moment.
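One plausible way to implement that intensity control is pulse-width modulation: the duty cycle sets how strong the vibration feels. Here is a sketch using gpiozero; the pin number and values are assumptions, not the team's hardware design.

```python
# Sketch: driver-adjustable haptic intensity via PWM (illustrative pin/values).
from gpiozero import PWMOutputDevice
from time import sleep

haptic = PWMOutputDevice(18)   # assumed PWM-capable GPIO pin for the motor

def pulse(intensity: float, seconds: float = 0.5) -> None:
    """intensity runs 0.0..1.0, chosen by the driver in settings."""
    haptic.value = max(0.0, min(1.0, intensity))   # duty cycle = strength
    sleep(seconds)
    haptic.value = 0.0                             # stop vibrating

pulse(0.3)   # gentle buzz
pulse(1.0)   # full-strength alert
```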

So, our user interface. Our original design was very cluttered. Through the feedback we collected, we realized that our final design must be simple and clean. The big, big piece of feedback we got was to remove these dashed lines. People hated them.

So we decided to go for a more radar-like approach. We placed the icons for zone detection so you can easily see them while you're driving and know where to turn your head.

High contrast for easy readability as well, because on the old display the colors weren't contrasting well, so it was very hard to see the notifications while driving.

Customization. We heard a lot that people want the system to be customizable, and the original design did not have much customization: you couldn't change the color of the car, and you couldn't change the notification area color. So we added those features.

We also added adjustable haptic intensity and a quick on/off switch, depending on how you're feeling. And we added dark mode for night driving, because it is eye-searing to see a white screen when you're driving at night.
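Taken together, those options suggest a small settings object. Here is a sketch of what such a schema could look like; the field names and defaults are illustrative, not the team's actual configuration.

```python
# Sketch of the customization options described above (illustrative schema).
from dataclasses import dataclass

@dataclass
class AlertDriveSettings:
    car_color: str = "white"        # color of the car icon on the HMI
    alert_color: str = "red"        # color of the notification zones
    haptics_enabled: bool = True    # quick on/off switch for the pad
    haptic_intensity: float = 0.7   # 0.0..1.0, driver-adjustable
    dark_mode: bool = False         # flip on for night driving
```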

So now that we've talked about evaluating and testing our design and refining it based on stakeholder feedback, we'll move on to communicating it and possibly identifying new problems within this accessibility gap.

So, looking forward…

So, finalizing the minimum viable product, or MVP. This is pretty much version one, kind of like the iPhone 1, right?

So we're conducting final testing with integrated haptic pads. We have to collect feedback on the UI and haptics and see, is this up to the standards that our users want?

Then we're going to refine the directionality algorithm for better accuracy.

And we also have to take the Doppler effect into consideration, right? We have to test this on the car while it's moving.

As we said previously, the EcoCAR EV Challenge essentially gives students a shell of a car, and there's a team dedicated to getting the motor logic working and integrating all the systems. The car has just started running, so now we can test with the car moving to check for the Doppler effect.

Essentially, the Doppler effect means that when a moving object produces sound, the sound behind the object, against the movement, has a lower frequency, while the sound in the direction of movement has a higher frequency. That change can affect how the machine learning algorithm detects honks while the car is moving. Finally, we have to implement this design on the car.
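For a sense of scale, here is a worked example using the standard Doppler formula for a moving source, f_observed = f_source · c / (c ± v_source). The horn frequency and speed are illustrative values, not measurements from the team's testing.

```python
# Worked Doppler example for a moving honking car (illustrative values).
SPEED_OF_SOUND = 343.0   # m/s in air

def observed_frequency(f_source: float, v_source: float, approaching: bool) -> float:
    """Standard Doppler shift for a moving source and a stationary observer."""
    sign = -1.0 if approaching else 1.0
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND + sign * v_source)

horn = 400.0    # Hz, an assumed horn fundamental
speed = 30.0    # m/s relative speed, roughly 67 mph

print(observed_frequency(horn, speed, approaching=True))    # ~438 Hz, shifted up
print(observed_frequency(horn, speed, approaching=False))   # ~368 Hz, shifted down
```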

So let's look at the Year 4 competition and beyond. The EcoCAR Year 4 competition will occur in Michigan in mid-May. We will be presenting our designs to judges from the automotive industry. This is, unfortunately, the final year of the current design cycle.

Last year, at the Year 3 competition, the former mobility challenge lead, Hamza, and I won first place for our presentation. We hope to repeat this award at this year's competition as well.

However, after this competition, we hope further research can be done to develop a fully integrated system. Unfortunately, at the end of this competition, after mid-May, our funding for this project will lapse, and we'll be looking for ways to get more funding to do more research on this project.

So now that we've communicated our results and identified future problems that may occur, we can gather new information and go through this engineering design cycle all over again.

We would love for you all to connect with us. You can find us on Instagram at UTexas EcoCAR, check out our website at ecocar-utaustin.org, or send us an email at ecocar@utexas.edu.

We hope you all enjoyed this presentation today. We're going to leave the floor open for any questions. I believe we already have some popping in the chat and some in the Q&A section, so let's get started with those.

Mark Boyden: Thanks. Yeah, just as a quick reminder: if you want, you can put your question into the Q&A; it's one of the options at the bottom of your screen. You can also raise your hand if you would like to ask your question live, and we will invite you to do that. So let's see what we have so far, from Matt.

Matt is curious if it would be better to make this available to all drivers instead of targeted towards deaf and hard of hearing drivers. The side mirror was originally only for deaf drivers, but it became standard on all vehicles because others also benefited from the safety feature.

I could see this being a feature that benefits all drivers, incorporated as part of the core dashboard rather than separate.

Farhaan Shroff: Yeah, so unfortunately, as part of the EcoCAR EV Challenge, General Motors does not let us mess with their core dashboard, as that is their proprietary code and information. Therefore, we had to do an external display for it.

But yeah, looking forward, we really hope that auto manufacturers see our design and implement it on all cars. In fact, when we presented this to judges last year at competition, we got feedback from judges who said: hey, this is a good feature; if I'm driving and I've got music blaring really loud, I might not be able to hear a honk, and this might help me there as well. So while this was originally designed for deaf and hard of hearing drivers, hopefully it can expand to encompass all drivers and become a standard safety feature in your car, like a rearview camera or blind spot monitoring.

Aniketh Subramanian: Yeah, and honestly, I'd like to add to that: a lot of what surrounds self-driving vehicles right now is safety, right? You really want your self-driving car to know what's going on on the road at all times, but current self-driving technology is mainly camera vision. So say there's something like a crash, or you're driving near mountains and a boulder falls: that's something you're not able to see, but you can hear. That's a lapse in safety for self-driving. So this could also be something that benefits a lot of these autonomous vehicles in the future, which is something we hope to target as well.

Mark Boyden: So sort of along that same line, then Matt asks another question. Will it also tell the driver if they use their own horn? Sometimes I use my horn, and I don't always know if I pushed it hard enough for people to hear.

It kind of follows on to the stereo aspect, too, although I assume you're not necessarily listening to a stereo inside your car if you need this equipment.

Mark Boyden: Wait, uh, Vincent, did you say you wanted?

Vincent Wu: Yeah. Essentially, if the person were to honk their own horn, given that the microphone is literally right on top of the vehicle, that would be one of the indications. Going back to the radar, it would probably show all red, since the honk would be directly below the microphone. But I think Aniketh could probably explain more on this feature.

Aniketh Subramanian: Yeah, and the thing is, it could be possible to tell if they use their own horn. In this situation, since the horn is near the hood, the UI would show an alert in the front. And if we integrate this inside the car's zone system, maybe there's a way to have an option showing "you pressed your horn" on the UI. Hopefully, in the future, that's something we could integrate.

Farhaan Shroff: Yeah, we also want to mention again that this is part of a competition. We were given two years to design this project, so there are a lot of features that we really, really want to implement as well, such as this one, that we just don't have time for, or that current technology limits, like we mentioned before. General Motors is very strict on what we can and cannot use in the car; for example, we cannot touch GM's own computer that processes things like the honks.

Hopefully in the future we can partner more closely with an automaker or the Department of Energy to develop the system to be fully integrated into cars. As for letting you know when you use your own horn, like Aniketh mentioned, it will probably light up in the front. We're going to do more field testing when we come back from spring break next week.

Mark Boyden: All right, we're going to go to Julianne, who's got her hand up. Julianne, you should be getting an unmute request.

Julianne: Can you hear me? Okay.

Mark Boyden: Now we can.

Team: Yes.

Yeah, we can hear you.

Mm-hmm.

Julianne: Can you hear me now? All right. Sorry, my system setup. Love, love, love the haptics. The first points that popped to mind: this is great for new drivers, young people who are learning to drive. It helps them keep their eyes on the road, with maybe just a quick glance at the screen to see where that sound is coming from. It's hugely helpful for deaf and hard of hearing drivers, of course, and then for aging populations that are starting to lose their sound differentiation, especially high sounds, which is one of the frequency ranges that tends to go for aging populations.

My question has to do with directionality: does the system have a way to differentiate the direction of a sound, say, when you're sitting next to a car that's got its hard metal music pumping and thumping next to you? You're sitting at a light, you're in stop-and-go traffic, or you're on the road, and their car is vibrating your entire car. Is the system able to differentiate and tune that out, so it can select those other key sounds that you do want to identify for people who absolutely need that haptic notification and that visual notification?

Aniketh Subramanian: Yeah, that's a great question, and it's actually something we were given advice on during last year's presentation. Currently, we use a machine learning model that analyzes the sounds from all around the car, and one of the things we've done to train our model is take honks and layer them onto other sounds. We've actually had one where we have babies crying, and then we layered a honk onto it to see if the machine learning model could extract the fact that there's a honk. And we've layered honks onto other things, too. So our model has essentially been trained to be super robust at extracting the honk profile out of a bunch of different loud noises. It's something that we've explored, and again, it's something we would need a little more time on to make completely robust.

Currently, our accuracy sits at around 90%. But the thing is, we want this feature to be almost 100%; this is a very, very important safety feature. So it's something that we want to keep iterating on, but something that we've definitely explored.
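The layering Aniketh describes is a standard audio-augmentation technique: mix a honk clip over background noise at a chosen signal-to-noise ratio and label the result as a honk. Here is a sketch of that mixing step; the arrays and SNR value are placeholders, not the team's training pipeline.

```python
# Sketch of honk-over-noise augmentation at a target SNR (illustrative).
import numpy as np

def mix_at_snr(honk: np.ndarray, background: np.ndarray, snr_db: float) -> np.ndarray:
    """Overlay honk onto background so the honk sits snr_db above it."""
    n = min(len(honk), len(background))
    honk, background = honk[:n], background[:n]
    p_honk = np.mean(honk ** 2)                       # honk signal power
    p_bg = np.mean(background ** 2)                   # background power
    gain = np.sqrt(p_honk / (p_bg * 10 ** (snr_db / 10)))
    return honk + gain * background                   # labeled "honk" for training

# e.g., mix a honk over a crying-baby clip at 0 dB SNR:
# augmented = mix_at_snr(honk_clip, baby_clip, snr_db=0.0)
```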

Julianne: Yeah, so my suggestion is to include deep heavy bass. You know those guys that soup up their cars? My son used to do that. Deep heavy bass.

Team: Yeah. Yeah, yeah.

Aniketh Subramanian: Mm-hmm. Yeah, like the motorcycles.

Julianne: Yeah. Yeah. Yeah.

Aniketh Subramanian: Yeah. Yeah. Yeah, yeah. So.

Julianne: Yeah, so definitely sample that, because it's like, oh my god, I'm feeling that in my whole body. Can I differentiate the haptics I'm feeling, and can it differentiate the sounds to let me know when a siren's coming by, when I've got this kind of sound next to me overwhelming my sensory system? So, cool! Very… this is…

Hmm.

Farhaan Shroff: Yeah, so one feature that we wanted to implement, but unfortunately are running out of time for, was ambient lighting that could also be triggered by the notification. But like we said, we have a very finite timeline; if we had maybe even two more months, we could have implemented it. As for the haptics in regard to that, our haptic pad is going to be very localized. It's pretty much going to be on the backrest, and it's going to be… I believe we're going to put…

Yeah, but thank you.

Mark Boyden: Thanks. We don't have any other questions yet. Are there any other questions? Again, raise your hand if you'd like to ask live like Julianne, or put it in the Q&A. I don't see anything in the chat either.

Give me just a second… All right, gentlemen, it appears that that's all the questions I have for now. But they did give you their contact information, so if something comes up, you can certainly ask them. We appreciate you guys being here. You can stay connected with Knowbility by signing up for our newsletter at knowbility.org/subscribe. You can follow us on social media at @knowbility, and you can email us at events@knowbility.org.

And finally, a quick reminder about our upcoming conference. That's what we're going to focus next month's Be a Digital Ally on: a precursor to AccessU, with samples from several of the instructors who will be there. So keep an eye out for that; it'll be coming through our email newsletter list as well.

And we thank you for being here. We will be following up with a short survey, and we would greatly appreciate your feedback. That's how we make this program better for everybody, and it's also how we get topics from you to bring back to you. Thanks again, and we'll see you all next time.

Julianne: Guys, thanks for this presentation. This is exciting research. I'm so excited to see this happening. Yay!

Team: Thank you so much.

Thank you so much.

Thank you, thank you, thank you. Yes, and always remember, guys…

Yes.

Mark Boyden: You know, the engineer in me, UT mechanical in 1989, by the way, is also extremely excited. I remember some of the projects I got to work on in the engineering world back in those days. So thanks again for being here. We really appreciate y'all.

Farhaan Shroff: Thank you, guys.

Aniketh Subramanian: Thank you so much.

Vincent Wu: Thank you.

Julianne: Thank you all and thank you, Mark.

Mark Boyden: Oh, good. And thank them for showing up during their spring break.

Farhaan Shroff: No worries.

Julianne: Yes, definitely, definitely. Thank you guys so much. What fun.

Mark Boyden: Thanks, guys. Bye-bye.

Farhaan Shroff: Yes, we're always grateful to Knowbility for their everlasting support of this project, and we thank them for their collaboration and the advice they've given us throughout this entire design cycle. Without them, this project would have gone nowhere.

So thank you, Knowbility.