In the dim corridors of my memory, there lies a chapter from my time as a software engineer working on a defense contract, a period of my life that I have to keep cloaked in shadows and silence. It’s a story that gnaws at the edges of my conscience, a relentless whisper in the stillness of the night.

It was the autumn of 2016, and I found myself assigned to a project shrouded in secrecy. The details were scarce, but the gravity was clear from the outset. We were to develop a sophisticated surveillance system, leveraging the latest in artificial intelligence and machine learning. The purpose: to monitor and predict potential threats to national security. I had previously made a name for myself, having worked on a few notable security products like ThreatSwitch and Keybase, and had found my way into some circles I wasn’t directly seeking.

The collaboration was born out of a critical need. Cyber threats were escalating, the technological arms race was intensifying, and the DoD realized the necessity of an unprecedented leap in computing capabilities. They turned to Intel, known for its cutting-edge research and development in advanced processors and artificial intelligence.

As I ramped up to start on the project, my security clearance application was denied. I was a person of interest due to my background as a hacktivist. Since Intel would be one of the collaborators on this initiative and clearances would be required, I wasn’t sure what my role would be on this project, if any. As the team separated and compartmentalized based on clearance status, I was assigned the fairly safe task of working on the user interface out of the gate.

After some weeks working on the mundane layers (application UI), I was contacted out of the blue by a gentleman who worked for the NSA. Mind you, all of this was before the modern era of ChatGPT and A.I. going mainstream, so experts in the space were few and far between. The NSA wanted to bring me in as a “consultant” to the project to get around the formal clearance process, and I would be briefed on just enough information to help plan out the architecture of this ML system. I was read in and told I would be working on Project NightHawk.

The Objective

Project NightHawk’s goal was groundbreaking: to develop a new class of supercomputers and applications, capable of quantum-level processing, yet resilient to the emerging threats of quantum decryption. These machines were not just meant to be fast; they were designed to be the nucleus of the United States’ cyber defense strategy, capable of predictive threat analysis, real-time strategic decision making, and managing autonomous defense systems.

The Discovery

As weeks turned into months, the project evolved, growing in complexity and scope. My role had me deep in the intricacies of data mining and pattern recognition. It was during one of these deep dives that I stumbled upon something unsettling. Buried within the labyrinth of algorithms and data sets, I discovered a component of the system designed to monitor not just potential external threats, but domestic ones as well. It was capable of sifting through vast amounts of personal data, breaching the thin line between national security and personal privacy.

I was sitting in a spot similar to Edward Snowden’s.

None of the data sources had been top of mind in the weeks prior, because I was so entrenched in the development channels that I wasn’t concerning myself with what the data was. I only cared that my inputs and outputs were behaving reliably and as expected. Imagine a ChatGPT-like interface that, instead of being trained on public internet data, had been trained on private SMS and Facebook conversations. I had a prompt and application that would allow me to ask questions and research anyone, anywhere, and it knew nearly everything about the individual: from favorite foods to personal secrets.

While this A.I. was trained on real data, all of that was handled by another department within the original team, whose members also carried the appropriate clearances. While I had bypassed the clearance process and become a shoo-in on the project, I was subcontracting through the NSA instead of the defense contractor. I still had very rigid boundaries in the way of instructions, but I also basically sat on the only all-access pass. My instructions regarding data access and testing were to use very specific, fictional names like “Carmen Sandiego” to test inputs, outputs, and results. These were all fake people with fake data, but the system had been trained on real data from undisclosed sources.

One issue, also common to these types of projects, was that the people I reported to were not nearly as technically savvy as I was. Furthermore, after some weeks of my being a “good monkey,” they stopped looking over my shoulder. I couldn’t resist the temptation to see what this system knew about the average U.S. citizen. I thought long and hard about a person I knew a lot about but who wouldn’t easily be identified as a known associate of mine.

I am thoroughly convinced this system knew more about this individual than a spouse or parent would know. It could tell me about secret love affairs, journal entries, suicide notes, seemingly anything the individual had ever recorded digitally. The sources of data went far beyond what these agencies could harvest from Facebook or some online platform, even with backdoor access. The data available to Project NightHawk was clearly derived from data storage systems on personal computers, telecom networks, and handheld devices. In short, NightHawk had an all-access pass to anyone’s entire digital presence.

The Dilemma

This revelation shook me. I was torn between my duty as a software engineer working on a national defense project and my personal ethics. The thought of contributing to a tool that could potentially erode the very freedoms we were meant to protect was deeply disturbing. I was helping build Big Brother. I grappled with this internal conflict, losing sleep, and wrestling with the implications of my work. The idea that our creation could be used to monitor innocent citizens was a haunting prospect. The boundaries of right and wrong, once so clear, had become blurred.

I decided that I had to go deeper and find out what other projects were being powered by these systems. As bothersome as it was to be contributing to this application, I knew that altruism would get me nowhere; NightHawk would find its path to completion with or without me. So instead, I gave in to my personal curiosity and went deeper into the systems to see how I could use them for my own motives. I started plugging in the names of my bosses, admirals, presidents. If it had the average John Doe, why wouldn’t it contain information on the very people who approved these types of programs?

The Unexpected

The DoD is far from incompetent. Personnel data of government officials and employees was non-existent or scrubbed before making its way into the training data. However, there were a lot of edge cases: individuals who were not scrubbed, people with a history similar to mine, who had worked on special access projects but not as employees or direct contractors. There also seemed to be a pattern related to cut-off dates: I discovered I could find data on current or former employees up to 2005. I started plugging in names of people who interested me to see what revelations or truths I could unpack. One by one, I entered names and spent hours reading and probing further. Some of the memorable names and revelations include:

  • Bob Lazar
  • Julian Assange
  • Mark Zuckerberg

Each one of these names (and many more) will require posts of their own. I’m personally less interested in the personal lives or drama of these individuals, but some of their connections and associations were quite interesting. I confirmed things I already knew or suspected, but did find some surprises, and will share a brief teaser of what can be unpacked more later:

  • Zuckerberg had been working with almost every government agency since 2004.
  • Facebook data was paid for and owned at nearly every level of government.
  • Julian Assange wasn’t just an altruistic hero; he also took bribes to leak misinformation.
  • WikiLeaks was funded by governments to leak enemy secrets and was just a tool.
  • Bob Lazar was telling the truth: he worked at JPL and was assigned to multiple SAPs.

Keep in mind, taking information into or out of a secured system is nearly impossible. Every action needs to look like an average task in the day of an engineer. Screenshots and printing weren’t an option, and I had no plausible outside connection from this system. So while there were mountains of information and revelations, I could only take with me what I could remember.

The Haunting

Now, years later, in the solitude of the night, I often reflect on those times. The lines of code I wrote were more than just algorithms; they were a testament to the complexities of morality in the field of defense and technology.

This experience taught me a profound lesson about the weight of responsibility that comes with the power of technology. It’s a reminder that as engineers, developers, and scientists, we must always consider the ethical implications of our work. Our creations have the potential to impact lives and societies in profound ways.

So, as I lie awake, haunted by the echoes of that project, I hope my story serves as a cautionary tale. In our pursuit of technological advancement, let us not forget the human values that we are meant to uphold and protect. The balance between security and freedom is delicate, and we must tread this path with care and conscience.
