Why Using AI to Plan a Crime Is the Dumbest Move Possible

Criminals have always made mistakes. They leave fingerprints at the scene. They drop a wallet. They get caught on a doorbell camera. But in the modern era, the most damning evidence is often stored in a server farm thousands of miles away. The recent case involving Nicholas Jordan, a student accused of a double murder at the University of Colorado Colorado Springs, highlights a terrifyingly stupid new trend: people are using ChatGPT to ask for advice on how to commit crimes.

It sounds like a dark comedy script. It isn't. When law enforcement officers look through a suspect's digital footprint today, they aren't just looking for text messages or Google search histories. They are requesting chat logs from AI companies. This shift represents a massive change in how investigations work, and it proves that your digital history is essentially a permanent confession waiting to be discovered.

The Nicholas Jordan Case And What It Proves

The specific details surrounding the Jordan case are grim. According to the arrest affidavit, investigators found that Jordan allegedly used ChatGPT to ask about body disposal and ways to cover up a murder. This wasn't a hypothetical creative writing exercise. Prosecutors believe it was part of a premeditated effort.

Most people assume that their AI conversations are private. They treat these chatbots like a diary or a trusted confidant. That is a dangerous delusion. You are not talking to a person. You are sending data to a private corporation. That corporation has a legal department, a data retention policy, and a mechanism to comply with subpoenas.

When you type into that prompt box, you are creating a record. It exists. It is stored on a server. If a court orders that data to be handed over, the company will hand it over. It really is that simple. The idea that you can ask a machine how to hide a crime and expect that request to vanish into the ether is a special kind of ignorance.

How Law Enforcement Accesses Your Data

Digital forensics has evolved faster than the public understands. In the past, police needed physical access to a phone or a laptop to retrieve deleted files. Today, they just need the cloud. If you are logged into a service, your data is synced.

When police suspect a crime, they issue a warrant or a subpoena to the technology provider. They don't just ask for phone records. They ask for everything. They request Google search histories, social media DMs, and yes, your history with AI models.

Companies like OpenAI or Google generally have policies about user privacy, but those policies have clear carve-outs for legal compliance. If they are served a valid legal request from law enforcement, they will provide the data. There is no special encryption or "secret mode" that protects you from a court-ordered data dump.

The sheer volume of data collected is staggering. It is not just the prompt you wrote. It is the metadata attached to it. It is the timestamp. It is the IP address. It is the device ID. Investigators can build a timeline that is nearly impossible to refute. Trying to argue that a query was a "mistake" or "curiosity" becomes incredibly difficult when there is a documented history of related searches across multiple platforms.
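To make the point concrete, here is a minimal sketch of the kind of record a provider might retain for a single prompt. The field names and values are purely illustrative assumptions, not any real company's schema; the point is that the prompt text is only one field among many.

```python
# Hypothetical log entry for one prompt. Every field name and value
# here is illustrative, not taken from any real provider's schema.
chat_log_entry = {
    "user_id": "acct-4821",              # account identifier
    "prompt": "example query text",      # the text the user typed
    "timestamp": "2024-02-16T03:41:07Z", # when the request arrived
    "ip_address": "198.51.100.23",       # where the request came from
    "device_id": "device-7f3a",          # client fingerprint
}

# Each entry is independently time-stamped and tied to an account,
# which is what lets investigators line up records from different
# services into a single timeline.
for field in sorted(chat_log_entry):
    print(field)
```

Notice that refuting one field does a defendant little good: the timestamp, the IP address, and the account ID each corroborate the others.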

The Myth Of The AI Accomplice

There is a strange psychological phenomenon occurring where people start viewing AI as an entity rather than a tool. Because these models can hold a conversation, people project personality and intent onto them. They think the AI is a partner. They think it's a co-conspirator.

The AI is none of those things. It is a predictive text algorithm. It is indifferent to your goals, your morality, or your survival. It has safety guardrails baked in by the developers precisely to prevent it from being used for illegal acts.

In many cases, the AI will refuse to answer a harmful prompt. It will trigger a safety block. But even the act of asking the question is logged. The developers know what you are trying to do. If the query is severe enough, it can trigger internal reviews. You are not just failing to get the advice you want; you are actively flagging your own account as a potential security risk.

Thinking you can outsmart the model by phrasing questions differently is another common trap. Sophisticated investigators use forensic tools to analyze intent across vast datasets. They look for patterns. They look for deviations from normal behavior. If you spend your time asking a chatbot about criminal activity, you are essentially leaving a trail of breadcrumbs leading straight to your doorstep.

Why Privacy Settings Are Not A Shield

Many users rely on "incognito" modes or "delete history" settings to feel secure. These features are designed for browser history and basic interface privacy. They do not prevent the backend data from being logged.

When you delete a chat, you are often just hiding it from your view on the dashboard. You are not purging the server logs. Companies keep data for various reasons, including model training, system debugging, and legal compliance. Assuming your data is gone forever because you clicked a "delete" button is a fatal error in judgment.
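This pattern has a name in software engineering: a "soft delete." The sketch below is a toy illustration of the idea, assuming a simplified store (the class and method names are invented for this example); real systems are far more complex, but the asymmetry it shows is the point.

```python
# Toy illustration of a "soft delete": the record is hidden from the
# user's view, but the underlying data is never purged. Class and
# method names are invented for this sketch; real backends differ.

class ChatStore:
    def __init__(self):
        self._records = []  # backend log: survives "deletion"

    def add(self, text):
        self._records.append({"text": text, "deleted": False})

    def delete(self, index):
        # The "delete" button only flips a visibility flag...
        self._records[index]["deleted"] = True

    def visible_to_user(self):
        return [r["text"] for r in self._records if not r["deleted"]]

    def visible_to_subpoena(self):
        # ...while a legal request can still retrieve every record.
        return [r["text"] for r in self._records]

store = ChatStore()
store.add("first message")
store.add("second message")
store.delete(1)
print(store.visible_to_user())      # the user now sees one message
print(store.visible_to_subpoena())  # the backend still holds both
```

The dashboard and the subpoena are reading the same database; they just apply different filters.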

If you are involved in a criminal investigation, every digital button you clicked becomes a piece of evidence. The defense might argue about the accessibility or admissibility of the data, but the mere existence of the data is usually enough to sink a defendant's case.

The Future Of Digital Evidence

We are entering a phase where the definition of a witness has expanded. We used to rely on human testimony, physical evidence, and grainy security footage. Now, we rely on the digital output of algorithms.

The Jordan case is likely just the beginning. As AI becomes more integrated into daily life, it will be involved in more criminal trials. Defense attorneys will have to start navigating the complexities of AI-generated content and the validity of AI-stored data. It will be a battleground for years.

The most important takeaway here is not about the AI itself. It is about the permanence of your digital life. Every keystroke, every query, and every interaction you have with a digital interface is a permanent record. In an era of high-tech surveillance and deep digital integration, you have zero expectation of privacy when you are interacting with online services.

A Practical Reality Check

If you are thinking that you can use technology to facilitate a crime, stop. It is the easiest way to get caught.

  1. Accept the reality of logs: Every service you use creates a record. This record is discoverable.
  2. Understand the terms of service: Read the privacy policy. Understand that "privacy" does not mean "immunity."
  3. Recognize the risk: Law enforcement agencies are using data analytics to connect the dots faster than ever before. Your digital history is the first place they look.
  4. Assume everything is recorded: Act as if a detective is reading your screen while you type. If you wouldn't say it in a courtroom, don't type it into a prompt.

The digital world is not a lawless frontier. It is a glass house. Anyone with the right warrant can look inside. The sooner people realize that, the fewer digital confessions will be used to secure convictions in court. The data is waiting. The investigators know how to find it. Do not be the person who hands the evidence to them on a digital platter.

Diego Torres

With expertise spanning multiple beats, Diego Torres brings a multidisciplinary perspective to every story, enriching coverage with context and nuance.