Seeing George Floyd have his breath forcibly taken from him in 2020 left me with a strong urge to act. But what could I do? Inside IBM, the Black community and allies decided to use data and technology to turn the frustration of #BlackLivesMatter into something that could actually make a difference. I felt compelled to join in.
Through a design thinking workshop, I got enthused about the idea of using AI to gather and process the various viewpoints that emerge after a crime in a community. The project took shape and became the Incident Accuracy Reporting System (IARS), which allows witnesses and victims to corroborate evidence after a crime. This lets a more complete picture of a crime be gathered from the whole community, rather than the single account typically captured in a police report.
From that original design thinking workshop through development inside IBM, I was pleased to see the application released as an open source project just over a year ago, opening it to a much larger base of developers and potential users. Working on this project has really opened my eyes to the potential for AI to make a positive impact on our lives in ways I could not have imagined.
Use of AI to simplify data capture
For a system like this to be successful, people need to be able to enter information easily after a crime occurs, in whatever format makes the most sense for them. IARS is a mobile app, so, thinking through the capabilities of a phone, we needed to allow facts to be captured after a crime via text, pictures, video, and audio (voice). AI in the form of the Watson Speech to Text service enables video and voice information to be standardized into a text document: the normal format for a police report, and a useful format for further AI analysis.
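As a rough illustration of that standardization step, the speech-to-text service returns its transcription as JSON, and a small helper can flatten that into the single text passage the rest of the pipeline works with. This is only a sketch: the field names (`results`, `alternatives`, `transcript`) follow Watson Speech to Text's documented response shape, but the helper itself and the payload are made up for illustration.

```python
def transcript_from_stt_response(response: dict) -> str:
    """Flatten a speech-to-text JSON response into one text passage.

    Illustrative only: assumes the Watson Speech to Text response
    layout of {"results": [{"alternatives": [{"transcript": ...}]}]}.
    """
    chunks = []
    for result in response.get("results", []):
        alternatives = result.get("alternatives", [])
        if alternatives:
            # Keep the top-ranked alternative for each utterance.
            chunks.append(alternatives[0]["transcript"].strip())
    return " ".join(chunks)
```

The resulting plain-text witness statement can then be stored alongside typed reports and fed into the same downstream analysis.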
Use of AI to find differences in perspective
After an incident occurs, some accounts may converge while others differ. For IARS, we use AI clustering algorithms to automate the comparison of the various reports, arranging topics into clusters of similar viewpoints so the system can help us find the instances where perspectives differ. We started with the k-means clustering algorithm but are switching over to DBSCAN, which does not require us to specify the number of clusters in advance and explicitly flags reports that fall outside every cluster as outliers. Such a difference can suggest anything from a restricted viewpoint of an incident to intentional bias or a cover-up. The potential here is that the application can scale and quickly identify where a differing perspective needs to be looked at further.
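To sketch the idea, here is a minimal pure-Python DBSCAN over bag-of-words report vectors, where outlier accounts come back labelled -1. Everything here is illustrative, not the production pipeline: the word-count vectorization, cosine distance, and the `eps`/`min_samples` thresholds are assumptions chosen to keep the example self-contained.

```python
from collections import Counter
import math

def cosine_distance(a: Counter, b: Counter) -> float:
    """1 - cosine similarity over word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

def dbscan(reports, eps=0.5, min_samples=2):
    """Minimal DBSCAN: returns a cluster id >= 0 per report, or -1 for outliers."""
    vecs = [Counter(r.lower().split()) for r in reports]
    n = len(vecs)
    labels = [None] * n
    cluster = 0

    def neighbors(i):
        # Neighborhood includes the point itself, as in standard DBSCAN.
        return [j for j in range(n) if cosine_distance(vecs[i], vecs[j]) <= eps]

    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_samples:
            labels[i] = -1          # provisionally noise (an outlier report)
            continue
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:                # expand the cluster from each core point
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_samples:
                seeds.extend(jn)
        cluster += 1
    return labels
```

Running this on three broadly consistent witness accounts plus one that describes a very different scene groups the first three together and labels the divergent account -1, which is exactly the kind of report we want surfaced for a closer look.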
Scaling the AI across any cloud
We have been using Watson Studio to manage our AI models. This makes it easier for us to switch out algorithms while keeping the core application intact, and it also lets us scale the application in a cloud environment.
Like any open source project, IARS has evolved over time. Originally, 12 people worked on it, covering frontend development, backend setup, and the data science and AI needs. The members involved have changed based on their availability to work on the project. One key learning for me is the power of diversity: we've had team members from as far away as the UK and Uruguay, which has really helped me see how approaches to social justice differ across the globe.
For more details on the importance and value of using AI for good, visit https://www.ibm.com/artificial-intelligence/ethics.
This post is part of a series during Black History Month covering the relationship between artificial intelligence and social justice.
The post Collecting multiple community viewpoints after a crime appeared first on Journey to AI Blog.