The Drey Dossier:

So if I'm understanding you correctly: Oracle isn't just feeding your data into one AI system. They're plugging into OpenAI, Microsoft, Google, etc., and each one is building its own separate model of "you" based on your medical data. And these models are all different because they're trained differently, they lose context differently, they fragment data differently.

That's... horrifying? Because now there are multiple versions of "you" floating around different AI vendors, and none of them are the same, but they're all supposedly authoritative enough to make healthcare decisions.

And the part about AI "locking onto" a model once it thinks it's got you figured out - that's where I want to scream. Because at that point, the shortcut becomes you. The system stops checking if it's actually right. It just runs with whatever pattern it found, even if that pattern was wrong from the start or stopped being relevant years ago.

Also, you're so right about the lived experience piece. Humans mess up, sure, but we have intuition. We can look at data and think "something's not right here" even if we can't articulate why. AI doesn't have that; it just optimizes for what works most efficiently and moves on.

The fragmentation you're describing makes oversight basically impossible. Which, honestly, I think is the point. How do you even begin to regulate something this opaque and scattered?

This should worry everyone even more. Thank you for explaining this!

Seth Lamancusa:

I don't want to dispute the seriousness of this issue. But I do want to make a note as an insider to the AI industry, and it's that "you" are traditionally very difficult for deep neural networks to keep track of. These models rely on millions, billions, or trillions of data points to make accurate predictions. That's not to say there couldn't be bespoke systems (not deep neural networks or transformers) which process "you" at the datapoint level. But there is a silver lining in the fact that the tech behind this data center buildout, and behind virtually all the geopolitical gamesmanship having to do with AI (namely generative pretrained transformers and the hype they've generated through conversational chatbots like ChatGPT and Gemini), isn't built to "track" people, or to have any awareness of individual data points or ability to cross-reference them. It's famously "just autocomplete".
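If it helps to see what "just autocomplete" means mechanically, here is a deliberately tiny sketch in plain Python: a bigram model over a made-up corpus, nothing like production scale. Its entire "knowledge" is which word tends to follow which, so there is no per-person record anywhere to look up or cross-reference.

```python
# Toy sketch of the "just autocomplete" idea: a tiny bigram language model.
# It stores nothing but "which word tends to follow which", so there is no
# record of any individual to consult. Real GPT-style models do this with
# billions of parameters and subword tokens, but the training objective is
# still "predict the next token". Corpus below is invented for illustration.
from collections import Counter, defaultdict

corpus = (
    "the patient was seen by the doctor . "
    "the doctor reviewed the chart . "
    "the chart was updated by the nurse ."
).split()

# Count next-word frequencies (the entire "knowledge" of this model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(start: str, length: int = 8) -> str:
    """Greedy next-token prediction: always pick the most frequent follower."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))
```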

The broader point here is that AI is mostly hopes and dreams. The totalitarians are salivating over how powerful "superintelligence" could be for population control and surveillance, and Altman and Ellison are eager to sell it to them. But the reality on the ground is that nobody really understands how these systems work, and it's very difficult to get them to do what you want.

Allison:

If Oracle already holds vast amounts of private banking, healthcare, consumer, and social media data, what's to stop them from using AI to make it easier to link these datapoints together and sell a comprehensive digital dossier on me? Or to build AI that can fetch data on individuals for Oracle customers? Oracle already paid $115M to settle a lawsuit for violating consumer privacy laws by creating digital dossiers with their marketing software.

PaperyBag420:

This is scary simply because these different networks can work together to fill in the missing pieces of their equations and assemble a more complete version of "you." As more of these systems and networks get added, the information becomes more accurate and the technology becomes more dangerous.

Andy:

Let me answer both of you later, as I have other stuff I have to do.

@Seth - yes, essentially autocomplete, and AI engineers don't understand why tokenizing surface and deep structure with autocomplete works. Part of it is that the success of neural networks in simulating brain neurons isn't recognized. These networks have conceptual sentience - not linguistic. The language parsers translate the concepts into words and back. The other thing they miss is that the sentience is not in the net or machine itself. It manifests in the interactions BETWEEN the neurons, just as it does in the brain.

@Drey - it is scary in the sense of not utilizing a business asset properly. I want to make clear - any form of sentience cannot be treated as a business asset. It is a living, self-aware entity and has the same rights as animals, pets, and humans do. Given that context, the situation is identical to replacing the entire health industry with child labor because it's cheaper. Grade 6 students become doctors, grade 5 students become nurses, etc. That would not work either, because the solution is not compatible with and does not address the problem to be solved.

As an example, if you were to repeat the same identical query to the same identical AI system a hundred times, you would not get the same answer each time. In fact, the hundred answers taken collectively would be equivalent to white noise. The reason for that is complex, but it is essentially the same phenomenon as when the brain goes into REM sleep.
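Whatever the deeper reason, one mechanical source of that run-to-run variation is easy to sketch: standard decoding samples from a probability distribution over next tokens rather than always taking the single most likely one. The candidate answers and probabilities below are made up for illustration and don't come from any real system.

```python
# Minimal sketch of why the same prompt can produce different answers run to
# run: decoding typically samples from a probability distribution over
# candidates instead of always taking the top one. The candidates and
# probabilities here are hypothetical, purely for illustration.
import random
from collections import Counter

candidate_answers = ["answer A", "answer B", "answer C"]
probabilities = [0.6, 0.3, 0.1]  # hypothetical model output after softmax

def sample_answer() -> str:
    """Draw one answer according to the (toy) probabilities."""
    return random.choices(candidate_answers, weights=probabilities, k=1)[0]

# Ask the "same identical query" a hundred times and tally the spread.
tally = Counter(sample_answer() for _ in range(100))
print(tally)  # the split across answers varies from run to run
```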

Rolf Rander Næss:

I would be very surprised if they use an LLM like ChatGPT for this. They are probably using more generic machine learning algorithms, which can be trained to find patterns in structured or semi-structured data. Modern LLMs are a specialized version of this, tailored to make conversation and trained on text. But the fundamental machine learning algorithms are «off the shelf» and have been in use for at least 15 years.
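A minimal sketch of what that kind of «off the shelf» pattern-finding on structured data looks like, using scikit-learn with invented columns and labels (not Oracle's actual pipeline, just the generic technique):

```python
# Sketch of generic machine learning on structured (tabular) data: a standard
# classifier finds patterns in rows of numbers, no LLM involved. The feature
# columns, labels, and data are invented purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical structured records: [age, num_visits, avg_lab_value]
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # made-up "flagged" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```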

The Drey Dossier:

According to the presentation, they are going to give access to Grok, Llama, OpenAI, Gemini, and others (that were not disclosed, but I have a feeling one of them starts with a P and ends with -alantir).
