Steering Safer Streets: The Role of AI and Synthetic Data in Understanding Pedestrian Behavior

3 min read · Jan 16, 2024

Imagine a world where cars drive themselves, smoothly navigating streets filled with pedestrians. But how does such a car know what a person on the street might do next? That is the question behind the paper “Synthetic Data Generation Framework, Dataset, and Efficient Deep Model for Pedestrian Intention Prediction” by Muhammad Naveed Riaz and team. Published on 12 Jan 2024 in the field of self-driving cars, the paper tackles a deceptively hard task: figuring out what pedestrians will do next — will they cross the road or not?

This research stands out because it doesn’t rely only on real-life examples (from datasets named JAAD and PIE) but also builds its own simulated world: a dataset called PedSynth, created with a tool called ARCANE. Think of it as building a video game world to test how well the system works. PedSynth is vast, covering around 400 locations in virtual cities under different weather and lighting conditions, which makes it a strong testbed. The authors split this synthetic data into three parts: 80% for training their system, 10% for checking that it is learning correctly, and the remaining 10% for the final test of how well it performs.
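The 80/10/10 split described above can be sketched in a few lines of Python. The function name, seed, and clip count are illustrative — only the proportions come from the article:

```python
import random

def split_clips(clip_ids, seed=42):
    """Shuffle clip identifiers and split them 80/10/10 into
    train/val/test pools (proportions from the article; the
    function name and fixed seed are illustrative)."""
    ids = list(clip_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_clips(range(400))
print(len(train), len(val), len(test))  # 320 40 40
```

Shuffling before splitting matters here: without it, all clips from one virtual city or weather condition could end up in the same pool.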

Now, how does this system, named PedGNN, work? It looks at sequences of people and captures their shape and movement (they call these ‘pedestrian skeletons’) using two neural networks: a GNN (Graph Neural Network), which reasons over the skeleton’s joints, and a GRU (Gated Recurrent Unit), which tracks how those joints move over time. In effect, they teach a computer to see and understand how people move. To make sure PedGNN learns correctly, they train it with the AdamW optimizer, aiming for a model that is not just accurate most of the time but also reliable.
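To make the GNN-plus-GRU idea concrete, here is a minimal PyTorch sketch of that pipeline. This is not the authors’ code: the class name, joint count, skeleton adjacency, and layer sizes are all assumptions; only the overall shape (graph convolution over skeleton joints per frame, a GRU over time, AdamW for training) follows the article:

```python
import torch
import torch.nn as nn

class PedGNNSketch(nn.Module):
    """Illustrative sketch of a skeleton GNN + GRU classifier
    (cross / don't-cross); not the paper's actual architecture."""
    def __init__(self, n_joints=17, feat_dim=2, hidden=64):
        super().__init__()
        # Toy skeleton graph: identity plus a simple joint chain.
        adj = torch.eye(n_joints)
        for i in range(n_joints - 1):
            adj[i, i + 1] = adj[i + 1, i] = 1.0
        self.register_buffer("adj", adj / adj.sum(1, keepdim=True))
        self.gcn = nn.Linear(feat_dim, hidden)        # shared per-joint transform
        self.gru = nn.GRU(hidden * n_joints, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)              # cross / not-cross logits

    def forward(self, x):                             # x: (batch, time, joints, feat)
        b, t, _, _ = x.shape
        h = self.adj @ self.gcn(x)                    # aggregate neighbouring joints
        h = torch.relu(h).reshape(b, t, -1)           # flatten joints per frame
        out, _ = self.gru(h)                          # track motion over time
        return self.head(out[:, -1])                  # classify from the last step

model = PedGNNSketch()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
logits = model(torch.randn(4, 30, 17, 2))             # 4 clips, 30 frames each
print(logits.shape)  # torch.Size([4, 2])
```

The design point the paper leans on is visible even in this toy version: the graph convolution shares weights across joints, so the model stays small, which is what later makes it cheap enough for in-car use.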

When they tested PedGNN, the results were quite good. On the real-world JAAD data, PedGNN predicted pedestrian actions with about 85.29% accuracy, which is pretty high. But here’s the interesting part: the synthetic data helped. For example, when PedSynth data was mixed into the PIE training data, accuracy on PIE went up to 74.01%. This shows that their virtual world really does make the system smarter.

The paper also talks about mixing different types of data together for training. It seems when they combined real-world data with their virtual data, the system learned better than using just one type. This is like learning from both books and real-life experiences — you get the best of both worlds.
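In PyTorch, mixing two training sources like this is a one-liner with `ConcatDataset`. The tensor shapes and sample counts below are stand-ins, not numbers from the paper:

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Stand-ins for real (PIE) and synthetic (PedSynth) skeleton sequences;
# shapes and counts are illustrative only.
pie = TensorDataset(torch.randn(100, 30, 17, 2), torch.randint(0, 2, (100,)))
pedsynth = TensorDataset(torch.randn(300, 30, 17, 2), torch.randint(0, 2, (300,)))

mixed = ConcatDataset([pie, pedsynth])                   # one combined pool
loader = DataLoader(mixed, batch_size=32, shuffle=True)  # shuffle interleaves domains

print(len(mixed))  # 400
```

Shuffling the combined pool is the key detail: every batch then contains both real and synthetic examples, so the model learns from “books and real-life experiences” at the same time rather than one after the other.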

They even compared PedGNN to another state-of-the-art model called PedGraph+, and guess what? PedGNN often came out ahead in prediction accuracy. Most impressively, when PedGNN was trained and tested on PedSynth data, it hit a high score of 92% accuracy.

Another big win for PedGNN is that it’s really fast and doesn’t need a lot of computer power to work. This is super important for using it in real cars on the road. They also showed some examples where PedGNN correctly guessed what people on the street were going to do, proving it can be useful in real life. However, they did mention that they need to keep improving it, especially in tricky situations.

One big thing this research does is help solve a problem with not having enough varied data to train these systems. By using their own PedSynth data along with real-world data, they can train their system in a better way. This is like having a wider range of experiences to learn from.

In short, the paper by Muhammad Naveed Riaz and his team is really important for making self-driving cars safer and more reliable. It shows how using a mix of real and virtual worlds can help teach cars to understand pedestrian behavior better. This research is not just about cars but opens doors for other areas where there’s not enough data to learn from. It’s a step forward in making technology smarter and our roads safer.

A Neurog publication about AI, tech, programming and everything in between.