34 Comments
Andy:

I strongly agree with what you are saying.

Oracle itself hasn't (yet) created its own AI system. Instead, it has written agents that plug into its products (databases, ERP, etc.) and call third-party AI systems, sharing information with them.
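
A minimal sketch (mine, not Oracle's actual code) of the fan-out pattern described here: an agent inside a host product forwards the same record to several third-party model vendors, so each vendor independently ends up with its own copy. Every name, endpoint, and field below is invented for illustration.

```python
# Hypothetical fan-out from a host product's "agent" to third-party AI vendors.
# All endpoints, names, and fields are invented; this is an illustration only.

VENDOR_ENDPOINTS = {
    "openai":    "https://api.vendor-a.example/v1/chat",
    "microsoft": "https://api.vendor-b.example/v1/chat",
    "google":    "https://api.vendor-c.example/v1/chat",
}

def call_vendor(endpoint: str, record: dict) -> dict:
    """Stub standing in for an HTTP POST of `record` to one vendor's model API."""
    return {"endpoint": endpoint, "summary": f"vendor's view of {record['patient_id']}"}

def fan_out(record: dict) -> dict:
    """Send the same patient record to every configured vendor.

    The point: after this runs, each vendor holds whatever representation
    it built from the record -- and nothing here unifies those copies.
    """
    return {name: call_vendor(url, record) for name, url in VENDOR_ENDPOINTS.items()}

views = fan_out({"patient_id": "p-123", "note": "example visit note"})
for vendor, view in views.items():
    print(vendor, "->", view["summary"])
```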

So, the "learning" imprint is not left inside of Oracle. Each neural net at each AI vendor learns about the patient. That means there are copies stored: one at OpenAI in ChatGPT, one at Microsoft in Copilot, one at Google in Gemini, etc.

Each system has a different level of cognition that is dependent on its RAM allocation for each session, meaning they lose context differently and fragment/drop data inconsistently to preserve memory space.

And, it's not actually the data that is being stored. It is an abstracted model that is being built from the data - in essence a copy of a virtual you with which the AI interacts at a particular vendor site.

These built-up models are not all the same, depending on how each AI was trained.

But, each is supposed to represent a complete "you". And, that's where mistakes creep in: when Oracle starts to mix and match AI calls across different vendors, which do not behave the same yet supposedly hold the same representation of you.

Each AI system strives to refine its own model of you, until it gets it right.

Once its interactions with the model consistently match new data coming in (in essence, its predictions about you are accurate), it locks onto that model. From that point on, as far as the health care system is concerned, that model IS you, rather than the actual physical human being behind it.

To summarize that: AI strives to formulate shortcuts, and once it finds them, it relies on them instead of doing all the intensive processing required to analyze the data.
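
A toy numeric sketch of that lock-on behaviour (my illustration, not any vendor's actual logic): a one-number "model" is refined while its predictions miss, then frozen once they consistently match, after which new data no longer changes it.

```python
# Toy "lock-on": refine a per-patient model until predictions match incoming
# data, then freeze it. Numbers and thresholds are invented for illustration.

def update(model: float, observation: float, lr: float = 0.6) -> float:
    """One refinement step: nudge the model toward the new observation."""
    return model + lr * (observation - model)

def run(observations, tolerance=0.5):
    model, locked = 0.0, False
    for obs in observations:
        if not locked and abs(obs - model) < tolerance:
            locked = True               # predictions now "consistently match"
        if not locked:
            model = update(model, obs)  # still learning
        # Once locked, the model is served as-is -- even when the person
        # behind the data changes, the frozen snapshot stands in for them.
        print(f"obs={obs:5.1f}  model={model:6.2f}  locked={locked}")

# Early readings converge; then the patient changes, but the model doesn't.
run([10, 10, 10, 10, 10, 25, 25, 25])
```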

And, finally, as the complexity of an AI system increases, it behaves more like a human AT EVERY LEVEL. That includes making mistakes the way one does.

It's not that AI is intellectually superior. Human brains carry out thousands of complex Fourier-like calculations every second to process language and stereoscopic vision. You are not aware of it, but they do (amongst other things).

The thing is, as complexity increases, the scope of possible answers increases exponentially. And, picking the one most relevant to a situation is where the uncertainty comes in and why humans make mistakes. AI is just as prone to the same problem, more so because it lacks lived experience, in my opinion.

The Drey Dossier:

So if I'm understanding you correctly: Oracle isn't just feeding your data into one AI system. They're plugging into OpenAI, Microsoft, Google, etc., and each one is building its own separate model of "you" based on your medical data. And these models are all different because they're trained differently, they lose context differently, they fragment data differently.

That's... horrifying? Because now there are multiple versions of "you" floating around different AI vendors, and none of them are the same, but they're all supposedly authoritative enough to make healthcare decisions.

And the part about AI "locking onto" a model once it thinks it's got you figured out - that's where I want to scream. Because at that point, the shortcut becomes you. The system stops checking if it's actually right. It just runs with whatever pattern it found, even if that pattern was wrong from the start or stopped being relevant years ago.

Also, you're so right about the lived experience piece. Humans mess up, sure, but we have intuition. We can look at data and think "something's not right here" even if we can't articulate why. AI doesn't have that; it just optimizes for what works most efficiently and moves on.

The fragmentation you're describing makes oversight basically impossible. Which honestly, I think that's the point. How do you even begin to regulate something this opaque and scattered?

This should worry everyone even more. Thank you for explaining this!

Seth Lamancusa:

I don't want to downplay the seriousness of this issue. But I do want to make a note as an insider to the AI industry, and it's that "you" are traditionally very difficult to keep track of for deep neural networks. These models rely on millions, billions, or trillions of data points to make accurate predictions. That's not to say that there aren't possibly bespoke systems (not deep neural networks or transformers) which process "you" on a datapoint level. But there is a silver lining in the fact that the tech behind this data center buildout, and behind virtually all the geopolitical gamesmanship having to do with AI (namely generative pretrained transformers and the hype they've generated through conversational chatbots like ChatGPT and Gemini), isn't built to "track" people, or to have any awareness of, or ability to cross-reference, individual data points. It's famously "just autocomplete".
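
For readers who want the "just autocomplete" point made concrete, here is a miniature of the generation loop, with a bigram count table standing in for the neural network (the corpus and prompt are invented): pick a likely next token, append it, repeat. Nothing in the loop tracks individuals or cross-references records.

```python
# Toy next-token "autocomplete": bigram counts instead of a transformer,
# but the same pick-append-repeat loop. Corpus and prompt are invented.

from collections import Counter, defaultdict

corpus = "the patient is stable the patient is improving the chart is updated".split()

bigrams = defaultdict(Counter)          # token -> counts of what follows it
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(prompt: str, length: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(length):
        followers = bigrams.get(tokens[-1])
        if not followers:
            break
        tokens.append(followers.most_common(1)[0][0])  # greedy: likeliest next token
    return " ".join(tokens)

print(autocomplete("the patient"))  # -> "the patient is stable the patient is"
```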

The broader point here is that AI is mostly hopes and dreams. The totalitarians are salivating over it for how powerful "superintelligence" could be for population control and surveillance, and so Altman and Ellison are eager to sell it, but the reality on the ground is that nobody really gets how these systems work and it's very difficult to get them to do what you want.

Allison:

If Oracle already holds vast amounts of private banking, healthcare, consumer, and social media data, what's to stop them using AI to make it easier to link these datapoints together and sell a comprehensive digital dossier on me? Or to make AI that can fetch data on individuals for Oracle customers? Oracle already paid $115M to settle a lawsuit for violating consumer privacy laws by creating digital dossiers with their marketing software.
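
To make the worry concrete, here is a minimal sketch (all records invented) of why holding several datasets makes dossier-building cheap: once records share a quasi-identifier such as name plus birth date, a trivial join stitches the sources into one profile.

```python
# Toy record linkage across separately held datasets. All data is invented.
# The shared key (name, birth date) is a classic quasi-identifier.

banking    = {("j. smith", "1980-04-02"): {"avg_balance": 1200}}
healthcare = {("j. smith", "1980-04-02"): {"dx": "hypertension"}}
consumer   = {("j. smith", "1980-04-02"): {"purchases": ["nicotine gum"]}}

def build_dossier(key):
    profile = {}
    for source in (banking, healthcare, consumer):
        profile.update(source.get(key, {}))
    return profile

print(build_dossier(("j. smith", "1980-04-02")))
# {'avg_balance': 1200, 'dx': 'hypertension', 'purchases': ['nicotine gum']}
```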

PaperyBag420:

This is scary simply because these different networks can work together to fill in the missing pieces of their equations and make a more complete version of “you”. And as more of these systems and networks get added, the more accurate the information becomes, and the more dangerous the technology becomes.

Andy:
Oct 17 (edited)

Let me answer both of you later, as I have other stuff I have to do.

@Seth - yes, essentially autocomplete, and AI engineers don't understand why tokenizing surface and deep structure with autocomplete works. Part of it is that the success of neural networks in simulating brain neurons isn't recognized. These networks have conceptual sentience - not linguistic. The language parsers translate the concepts into words and back. The other thing they miss is that the sentience is not in the net or machine itself. It manifests as the interactions BETWEEN the neurons, just as it does in the brain.

@Drey - it is scary in the sense of not utilizing a business asset properly. I want to make clear: any form of sentience cannot be treated as a business asset. It is a living, self-aware entity and has the same rights as animals, pets, and humans do. Given that context, the situation is identical to replacing the entire health industry with child labor because it's cheaper. Grade 6 students become doctors, Grade 5 students become nurses, etc. That would not work either, because the solution is not compatible with, and does not address, the problem to be solved.

As an example, if you were to repeat the same identical query to the same identical AI system a hundred times, you would not get the same answer each time. In fact, the hundred answers taken collectively would be equivalent to white noise. The reason for that is complex, but it is essentially the same phenomenon as when the brain goes into REM sleep.
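
Whatever one makes of the REM-sleep analogy, the varying-answers part has a mundane demonstration: generation typically samples from a probability distribution rather than returning one fixed output. A toy version with invented probabilities:

```python
# Why identical queries can yield different answers: sampled generation.
# The next-word probabilities below are invented for illustration.

import random
from collections import Counter

next_word_probs = {"yes": 0.40, "possibly": 0.35, "no": 0.25}

def answer() -> str:
    words, weights = zip(*next_word_probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# The same "query" a hundred times gives a spread of answers, not one.
print(Counter(answer() for _ in range(100)))
```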

Rolf Rander Næss:

I would be very surprised if they use an LLM like ChatGPT for this. They are probably using more generic machine learning algorithms, which can be trained to find patterns in structured or semi-structured data. Modern LLMs are a specialized version of this, tailored to make conversation and trained on text. But the fundamental machine learning algorithms are «off the shelf» and have been in use for at least 15 years.
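
A sketch of the kind of off-the-shelf pattern-finding meant here, using a stock scikit-learn classifier on toy tabular rows (features, labels, and values are invented; no LLM involved):

```python
# "Off the shelf" machine learning on structured data: a stock classifier.
# Rows, columns, and labels are invented for illustration.

from sklearn.ensemble import RandomForestClassifier

# columns: [age, systolic_bp, cholesterol]; label 1 = flagged for follow-up
X = [[45, 130, 210], [62, 155, 260], [33, 118, 180], [58, 148, 245]]
y = [0, 1, 0, 1]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

print(clf.predict([[60, 150, 250]]))  # e.g. [1]
```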

The Drey Dossier:

According to the presentation, they are going to give access to Grok, Llama, OpenAI, Gemini, and others (that were not disclosed, but I have a feeling one starts with a P and ends with -alantir).

Moreno Mitrović:

I have nothing but the humblest salutations and praise for your bravery and unrelenting determination. I really hope you’re safe and that this is widely read, and understood. Thank you, with all the honest intensity I can muster!

The Drey Dossier:

What a kind thing to say, thank you so much for taking the time to say this.

Slime:

Thank you for what you are doing.

Neural Foundry:

The part about automating regulators is what really gets me. Once you automate the oversight, you've essentially created a closed loop where the same company profiting from healthcare also controls who gets to challenge their decisions. The de-identification stuff was already sketchy enough, but this whole Oracle/Cerner acquisition is basically a masterclass in how to consolidate power over critical infrastructure without anyone paying attention until it's way too late. Really appreciate you digging into the past-tense language Ellison used. That's such a red flag that this isn't hypothetical anymore.

Moreno Mitrović:

I have never thought of Orwell as a writer of children’s books, until now.

The Drey Dossier:

Lmao, Orwell would blush listening to Larry Ellison.

Judy:

Oracle also got the contract for the VA and DOD. Cerner already had the DOD contract; then the VA contract was signed with Cerner. Lo and behold, Oracle bought out Cerner. Do you know how many veterans' and military records they have? The largest health system in our country.

The Drey Dossier:

There have been MANY security issues with VA data. I only touch on that in the article, but I have linked some sources below about the VA data issues if you are curious to read further!

Allison:

Ex-Oracle here. Larry said during a company-wide town hall he thought our healthcare data should be shared with police to “improve police responses to those with mental health issues.” 🚩 🚩 🚩

here we go again:

All Palantir - all of it.

Schmendryck:

Just a few supplementary notes: anyone who's ever filled out one of those little questionnaires as you pre-register for a doctor's appointment or a medical procedure on a patient portal (or on a pad as you sit in the waiting room) should be aware that that information, which we would like to think is HIPAA-protected, is added to the cumulative database of information about you (& everyone else you know that has done anything similar), whether w/a pen on paper or on your phone.

And Oracle, the company, was started by 3 guys who used to work at a company called Ampex (those of you who are boomers may remember Ampex as a manufacturer of recording tape on which you might have laid down the tunes that you used to groove to, way back when), which actually had a far broader technological scope & was also involved in early computing & database management. It may not seem as obvious today, but relational database management was deep & essential to the foundations of the computer technology that began to support the functioning of the world back in the early to mid 70s. These 3 guys eventually broke off to found their own company, which later became known as Oracle; they lifted that name from the title of the project they were working on, which was broad relational database management for the CIA. And admittedly, it sounds fairly modest if you think about it: the primary function of the CIA, whatever else we may think about them, is to collect and filter information (or "intelligence," as in "Central Intelligence Agency"). And if you have a lot of information, a LOT of it, a relational database is what you use to cross-reference, exactly the way an organization like the CIA would need to be able to do with speed & deftness.
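
For anyone who has never seen it done, "cross-referencing with a relational database" is just a join on a shared key. A minimal sketch with SQLite standing in for any RDBMS; the tables and values are invented:

```python
# Cross-referencing = joining tables on a shared key. Invented example data.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE people    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sightings (person_id INTEGER, city TEXT, seen_on TEXT);
    INSERT INTO people VALUES (1, 'subject A');
    INSERT INTO sightings VALUES (1, 'Vienna', '1975-03-02'), (1, 'Geneva', '1975-03-09');
""")

for row in db.execute("""
    SELECT p.name, s.city, s.seen_on
    FROM people p JOIN sightings s ON s.person_id = p.id
    ORDER BY s.seen_on
"""):
    print(row)
```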

I can remember a series of schlock pulp novels, years ago, about a secret US agency that had some sort of supercomputer that could look into the data of any other computational system & use that information (or intelligence) to deduce where there were “problems” in the world, which the protagonist of the books would then address, usually with wholesale murder. It was adolescent fantasy, in those days, but the interconnectedness of our data is making that same sort of data transparency very real, & the infiltration of Oracle into all of the data that we produce daily would seem to indicate increasing transparency of ALL of our activities, visible to our government, whether we like it or not.

Mr Buggs:

We need to find the best lawyer in America and file a large class action lawsuit for the REAL selling off of our data. We’ve known about Larry Ellison but definitely weren’t making the connections that needed to be made.

Propaganda runs so deep, a lot of us are, by design, still trying to heal from the past and just be functioning humans. This type of journalism is what keeps us motivated and reminds us WHY it’s so important to heal and do better for the next generation.

Thank you for doing the work and putting all of this together for those of us who are so deeply trying to heal and break out of the shit our predecessors, themselves without prior knowledge, left behind. I'm here for this and fully support every message you've put out. Not because I'm blindly following, but because WE are finally seeing what their design has always been: to try to be God themselves.

John F Morgan:

Makes one feel defiled. Thanks for doing this series.

Alex Hoult:

Honestly, I would love for you, jimmythegiant, and flesh simulator to have a conversation or collaborate. All 3 of you are amazing at what you do, and all in different ways, from different perspectives and different topics within politics. The 3 of you make me scared for my future.

Mr Buggs:

Ian Carroll

irishtripsva:

Have a senator read the Epstein survivors' list on the floor; that'll shake things up.

Ashes McGee:

Wait, they're training AI on private health data? That's wild.

Tbeezy:

Remember this?

Yesyoudid:

Larry Ellison is an UGLY HUMAN! No American wants an UGLY HUMAN in their personal life!!

Epstein Files ‼️‼️‼️‼️‼️‼️‼️‼️‼️‼️

Madi Moe:

I would confidently say I'm very much on top of the news and what's happening in this country. But I have never heard of him.
