TS3 Technologies Blog
Controversial Uses of AI Technology in Society
Technology works wonders for business, but it also empowers other organizations, like law enforcement agencies. We aren’t here to argue ethics, but we would like to touch on some of the technology certain agencies use to do their jobs. Specifically, we want to highlight the issues surrounding very sophisticated AI and data-mining platforms, such as those developed by Palantir.
More specifically, we’re looking at systems like ImmigrationOS, ICM (Investigative Case Management), and FALCON, and how they all collect and aggregate data from a wide range of sources, from government systems to commercial data brokers. These systems use AI to analyze data, look for patterns, and make connections, with the goal of giving agents potential leads.
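To make that aggregation idea a little more concrete, here is a minimal sketch of how a platform might merge records from multiple sources and surface connections as leads. Everything in it, from the source names to the matching logic, is hypothetical and vastly simplified; the real platforms are proprietary and far more sophisticated.

```python
from collections import defaultdict

# Hypothetical records from different sources; names and fields are invented.
dmv_records = [{"name": "J. Doe", "address": "12 Oak St"}]
utility_records = [{"name": "A. Smith", "address": "12 Oak St"}]
broker_records = [{"name": "J. Doe", "phone": "555-0100"}]

def aggregate_by_address(*sources):
    """Group names by shared address across every source."""
    by_address = defaultdict(set)
    for source in sources:
        for record in source:
            if "address" in record:
                by_address[record["address"]].add(record["name"])
    return by_address

# Any address shared by two or more people becomes a potential "lead."
for address, names in aggregate_by_address(
    dmv_records, utility_records, broker_records
).items():
    if len(names) > 1:
        print(f"Lead: {sorted(names)} linked via {address}")
```

Even a toy example like this shows how quickly unremarkable records, once combined, start to look like a profile.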
Should Efficiency Get in the Way of Due Process?
AI is a tool that helps law enforcement operate, letting agencies analyze data at a speed and scale impossible for humans alone. That speed makes it easier to identify people with criminal histories and those flagged as higher risk. Whether this value comes at the cost of fundamental civil rights and due process, however, is what’s up for debate.
Here are some of the questions that surface when AI is involved in these conversations:
- Algorithmic bias - Are AI algorithms biased? There’s an excellent chance they are, especially when these systems are trained on historical data shaped by societal biases and discriminatory patterns. We’ve seen this already. For instance, AI tools that sift through thousands of resumes to prioritize candidates have been known to bake algorithmic bias into the process, filtering out otherwise strong candidates based on gender and other protected characteristics.
- Lack of transparency - How is risk calculated? Which data points are weighted more heavily than others, and how transparent is the algorithm? It’s hard to say whether these systems are fair or accurate when proprietary information isn’t disclosed to the public.
- Privacy erosion - Considering how much personal data gets pulled into these systems, it’s no wonder that privacy is a major concern for everyone (or should be). Civilians who simply use public services could end up with an entire profile built about them and potentially used against them.
- Due process concerns - How do you challenge an algorithm’s claim that you’re worth investigating? AI could leave individuals with no recourse when they are denied fair treatment.
- Guilt by association - While these systems can find connections, those connections might not be worth flagging. AI can conclude that someone is suspicious because of a distant relative or a shared address history, neither of which is necessarily cause for concern (see the sketch after this list).
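To illustrate the guilt-by-association problem, here is a minimal sketch, again with entirely invented people and relationships, of how a naive connection search flags someone two hops removed from a person of interest.

```python
# Hypothetical relationship graph: edges mean "shared an address" or "is related to."
connections = {
    "Person A": ["Person B"],              # the actual person of interest
    "Person B": ["Person A", "Person C"],  # once shared an address with A
    "Person C": ["Person B"],              # a distant relative of B; no link to A
}

def flag_within(graph, start, max_hops):
    """Naively flag everyone within max_hops of the starting person."""
    flagged, frontier = {start}, [start]
    for _ in range(max_hops):
        frontier = [n for node in frontier for n in graph[node] if n not in flagged]
        flagged.update(frontier)
    return flagged

# Person C gets flagged purely through B, despite never interacting with A.
print(flag_within(connections, "Person A", max_hops=2))
```

An innocuous shared address is all it takes for Person C to end up on a list.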
Emerging Technologies Further Complicate the Matter
Data mining is the least of the concerns, though. Other emerging technologies in use are just as controversial and intrusive:
Commercial Spyware
Certain law enforcement agencies now have access to commercial spyware that can tap into mobile phones, crack encrypted communications, and keep tabs on users’ digital activity. There are legitimate concerns that some individuals or agencies could abuse these tools against the press, activists, asylum seekers, and otherwise innocent civilians.
Facial Recognition
This technology has become more widely used by law enforcement, raising concerns over its potential for mass surveillance and its impact on privacy.
Demanding Accountability
Despite the benefits, most people feel accountability has to be a priority for those who use AI technologies in this way. It’s not an issue of technology; it’s one of ethics and human rights.
Here are some of the ways accountability can be achieved:
- Transparency mandates - Governments and agencies could be required to disclose what data they collect, how it’s used, and how their algorithms reach decisions.
- Stronger privacy protections - Comprehensive privacy laws would set clear limits on what the government can and cannot access from third parties.
- Moratoriums on risky technology - These would stop the use of certain high-risk AI models until their human rights implications are addressed, or at the very least, fully understood.
So, how will law enforcement agencies and the companies behind these tools respond to these challenges over time? We’ll just have to wait and see. In the meantime, it’s certainly worth discussing and monitoring.
When it comes to your organization, getting ahead of this is going to be important in the months and years to come. TS3 Technologies can help you establish an AI Policy that includes cybersecurity and ethical standards for using this novel technology in your workplace.
Give us a call at (205) 208-0340 if you want to discuss this further.