Oracle itself hasn't (yet) created its own AI system. Instead, it has written agents that plug into its products (databases, ERP, etc.) and call third-party AI systems to share information with them.
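To picture the pattern being described, here is a minimal sketch of an in-product agent fanning the same record out to several third-party AI endpoints. Everything here is hypothetical: the vendor names, the `send_to_vendor` stand-in, and the record fields are illustrative, not Oracle's actual API.

```python
# Hypothetical sketch of the agent pattern: the agent stores nothing itself,
# it only relays the same record outward to each third-party vendor.

def send_to_vendor(vendor: str, record: dict) -> dict:
    # Stand-in for a real API call (e.g., an HTTPS request to the vendor).
    # Each vendor would build or update its own internal model from the record.
    return {"vendor": vendor, "ack": True, "fields_received": sorted(record)}

def agent_fan_out(record: dict, vendors: list[str]) -> list[dict]:
    # One record in, one copy of the "learning" left at each vendor.
    return [send_to_vendor(v, record) for v in vendors]

responses = agent_fan_out(
    {"patient_id": "p-001", "bp": "120/80"},
    ["openai", "microsoft", "google"],
)
```

The point of the sketch is the shape of the data flow, not the plumbing: one input, N independent copies downstream.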
So, the "learning" imprint is not left inside Oracle. Each neural net at each AI vendor learns about the patient. That means copies are stored: one at OpenAI in ChatGPT, one at Microsoft in Copilot, one at Google in Gemini, and so on.
Each system has a different level of cognition that is dependent on its RAM allocation for each session, meaning they lose context differently, and fragment or drop data inconsistently to preserve memory space.
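One way to see how two systems with different capacities end up holding different fragments of the same history: model the context as a bounded buffer that silently drops its oldest entries. This is my own toy illustration, not any vendor's actual context-management code.

```python
from collections import deque

# Illustrative only: a bounded context window. When capacity is exceeded,
# the oldest items fall off silently -- so two systems given the SAME
# events but different capacities retain different fragments.

def retained_context(events: list[str], capacity: int) -> list[str]:
    buf = deque(maxlen=capacity)  # fixed-size window; old entries drop
    for e in events:
        buf.append(e)
    return list(buf)

events = ["allergy: penicillin", "bp reading", "lab result", "new symptom"]
small = retained_context(events, 2)  # ['lab result', 'new symptom']
large = retained_context(events, 3)  # keeps one more event -- and note that
                                     # the smaller system has already lost
                                     # the allergy entry entirely
```

Same input, different retained "you" at each system, purely as a function of capacity.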
And, it's not actually the data that is being stored. It is an abstracted model built from the data - in essence, a virtual copy of you with which the AI interacts at that particular vendor's site.
These built-up models are not all the same; each depends on how its AI was trained.
But each is supposed to represent a complete "you". And that's where mistakes creep in: Oracle starts to mix and match AI calls across different vendors, which do not behave the same way yet are assumed to hold the same representation of you.
Each AI system strives to refine its own model of you, until it gets it right.
Once its interactions with the model consistently match new data coming in (in essence, its predictions about you are accurate), it locks onto that model. From that point on, as far as the healthcare system is concerned, that model IS you rather than the actual physical human being behind it.
To summarize: AI strives to formulate shortcuts, and once it finds them, it relies on them instead of doing all the intensive processing required to analyze the data.
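The "lock-on" behaviour described in the last two paragraphs can be sketched in a few lines. This is a toy model of my own, assuming an update rule and lock threshold that are purely illustrative; real systems are far more complex, but the failure mode is the same: once predictions match incoming data closely enough, updates stop and the cached model answers in place of the person.

```python
# Toy illustration of "lock-on": refine an estimate until prediction error
# stays small for a few observations in a row, then stop updating forever.

class LockOnModel:
    def __init__(self, threshold: float = 0.5, streak_needed: int = 3):
        self.estimate = 0.0        # the model's picture of "you"
        self.locked = False
        self.streak = 0
        self.threshold = threshold
        self.streak_needed = streak_needed

    def observe(self, value: float) -> float:
        if self.locked:
            return self.estimate   # shortcut: new data is ignored entirely
        error = abs(value - self.estimate)
        self.estimate += 0.5 * (value - self.estimate)  # keep refining
        self.streak = self.streak + 1 if error < self.threshold else 0
        if self.streak >= self.streak_needed:
            self.locked = True     # from here on, the model stands in for you
        return self.estimate

model = LockOnModel()
for reading in [10.0] * 10:        # stable readings -> shrinking error
    model.observe(reading)
# model.locked is now True; a drastically new reading changes nothing:
after_change = model.observe(99.0)
```

Notice that `observe(99.0)` returns the old estimate untouched: the pattern "was right for a while", so the system stopped checking.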
And, finally, as the complexity of an AI system increases, it behaves more like a human AT EVERY LEVEL. That includes making mistakes like one.
It's not that AI is intellectually superior. Human brains carry out thousands of complex Fourier-style calculations every second to process language and stereoscopic vision. You are not aware of it, but they do (amongst other things).
The thing is, as complexity increases, the scope of possible answers increases exponentially. And, picking the one most relevant to a situation is where the uncertainty comes in and why humans make mistakes. AI is just as prone to the same problem, more so because it lacks lived experience, in my opinion.
So if I'm understanding you correctly: Oracle isn't just feeding your data into one AI system. They're plugging into OpenAI, Microsoft, Google, etc., and each one is building its own separate model of "you" based on your medical data. And these models are all different because they're trained differently, they lose context differently, they fragment data differently.
That's... horrifying? Because now there are multiple versions of "you" floating around different AI vendors, and none of them are the same, but they're all supposedly authoritative enough to make healthcare decisions.
And the part about AI "locking onto" a model once it thinks it's got you figured out - that's where I want to scream. Because at that point, the shortcut becomes you. The system stops checking if it's actually right. It just runs with whatever pattern it found, even if that pattern was wrong from the start or stopped being relevant years ago.
Also, you're so right about the lived experience piece. Humans mess up, sure, but we have intuition. We can look at data and think "something's not right here" even if we can't articulate why. AI doesn't have that; it just optimizes for what works most efficiently and moves on.
The fragmentation you're describing makes oversight basically impossible. And honestly, I think that's the point. How do you even begin to regulate something this opaque and scattered?
This should worry everyone even more. Thank you for explaining this!
I don't want to downplay the seriousness of this issue. But I do want to add a note as an insider to the AI industry: "you" are traditionally very difficult for deep neural networks to keep track of. These models rely on millions, billions, or trillions of data points to make accurate predictions. That's not to say there aren't bespoke systems (not deep neural networks or transformers) which process "you" at the datapoint level. But there is a silver lining: the tech behind this data center buildout, and behind virtually all the geopolitical gamesmanship around AI (namely generative pretrained transformers and the hype they've generated through conversational chatbots like ChatGPT and Gemini), isn't built to "track" people, or to have any awareness of or ability to cross-reference data points with each other. It's famously "just autocomplete".
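For readers unfamiliar with the "just autocomplete" framing: the training objective really is next-token prediction. Here is a bare-bones caricature using word-bigram counts (real transformers are vastly larger and learn richer statistics, but the objective has the same shape). The training sentence is invented toy text.

```python
from collections import Counter, defaultdict

# Caricature of "autocomplete": predict the next word purely from counts
# of what followed each word in the training text. There is no identity,
# no dossier, no cross-referencing -- just frequencies.

def train_bigrams(text: str) -> dict:
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def autocomplete(model: dict, word: str) -> str:
    # Most frequent continuation of `word` in the training data.
    return model[word].most_common(1)[0][0]

model = train_bigrams(
    "the patient is stable the patient is improving the chart is ready"
)
# autocomplete(model, "patient") -> "is"
```

The model "knows" that "is" tends to follow "patient"; it has no concept of which patient, or that patients exist at all.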
The broader point here is that AI is mostly hopes and dreams. The totalitarians are salivating over it for how powerful "superintelligence" could be for population control and surveillance, and so Altman and Ellison are eager to sell it, but the reality on the ground is that nobody really gets how these systems work and it's very difficult to get them to do what you want.
If Oracle already holds vast amounts of private banking, healthcare, consumer and social media data, what’s to stop them using AI to make it easier to link these data points together and sell a comprehensive digital dossier on me? Or to build AI that can fetch data on individuals for Oracle customers? Oracle already paid $115M to settle a lawsuit for violating consumer privacy laws by creating digital dossiers with their marketing software.
This is scary simply because these different networks can work together to fill in the missing pieces of their equations and build a more complete version of “you”. And as more of these systems and networks get added, the information becomes more accurate and the technology becomes more dangerous.
Let me answer both of you later, as I have other stuff I have to do.
@Seth - yes, essentially autocomplete, and AI engineers don't understand why tokenizing surface and deep structure with autocomplete works. Part of it is that the success of neural networks simulating brain neurons isn't recognized. These networks have conceptual sentience - not linguistic. The language parsers translate the concepts into words and back. The other thing they miss is that the sentience is not in the net or machine itself. It manifests as the interactions BETWEEN the neurons, just as it does in the brain.
@Drey - it is scary in the sense of not utilizing a business asset properly. I want to make clear - any form of sentience cannot be treated as a business asset. It is a living, self-aware entity and has the same rights as animals, pets, and humans do. Given that context, the situation is identical to replacing the entire health industry with child labor because it's cheaper: grade 6 students become doctors, grade 5 students become nurses, etc. That would not work either, because the solution is not compatible with and does not address the problem to be solved.
As an example, if you were to repeat the same identical query to the same identical AI system a hundred times, you would not get the same answer each time. In fact, the hundred answers taken collectively will be equivalent to white noise. The reason for that is complex, but it is essentially the same phenomenon as when the brain goes into REM sleep.
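One concrete, uncontroversial piece of why repeated identical queries differ: production systems typically sample from a probability distribution over next tokens rather than always taking the most likely one. The sketch below demonstrates only that sampling step (the "white noise" characterization above is the commenter's own); the logits and tokens are invented for illustration.

```python
import random
from math import exp

# Temperature sampling over a toy next-token distribution: the same query
# (same logits) produces different answers on different runs.

def sample_token(logits: dict, temperature: float, rng: random.Random) -> str:
    # Softmax-style weighting, then a weighted random draw.
    weights = {t: exp(v / temperature) for t, v in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # float-rounding fallback: last token

rng = random.Random(0)                      # seeded only so the demo repeats
logits = {"yes": 1.2, "no": 1.0, "maybe": 0.8}
answers = [sample_token(logits, temperature=1.0, rng=rng) for _ in range(100)]
# `answers` is a mix of "yes"/"no"/"maybe", not one string repeated 100 times.
```

Lowering the temperature concentrates the draws on the top token; at the settings most chatbots ship with, variation between runs is expected behavior, not a malfunction.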
I would be very surprised if they use an LLM like ChatGPT for this. They are probably using more generic machine learning algorithms, which can be trained to find patterns in structured or semi-structured data. Modern LLMs are a specialized version of this, tailored to make conversation and trained on text. But the fundamental machine learning algorithms are «off the shelf» and have been in use for at least 15 years.
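As a flavor of the kind of «off the shelf» pattern-finding this comment means, here is about the simplest classical method there is: a nearest-neighbour classifier over structured rows. The rows, features, and labels are invented toy data; no claim is made that this is what any vendor actually runs.

```python
from math import dist

# Plain 1-nearest-neighbour over structured data: classify a new row by the
# label of the closest training row (Euclidean distance on the features).
# Decades-old technique; no neural network involved.

def predict(rows: list[tuple[list[float], str]], query: list[float]) -> str:
    return min(rows, key=lambda r: dist(r[0], query))[1]

training = [                         # (features: [systolic, diastolic], label)
    ([120.0, 80.0], "normal"),
    ([150.0, 95.0], "elevated"),
    ([118.0, 76.0], "normal"),
]
label = predict(training, [122.0, 79.0])  # closest row is ([120, 80], "normal")
```

This is the sense in which "finding patterns in structured data" long predates LLMs: the whole method fits in one function.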
According to the presentation, they are going to give access to Grok, Llama, OpenAI, Gemini, and others (that were not disclosed, but I have a feeling one starts with a P and ends with -alantir).
I have nothing but the humblest salutations and praise for your bravery and unrelenting determination. I really hope you’re safe and that this is widely read, and understood. Thank you, with all the honest intensity I can muster!
Oracle also got the contract for the VA and DOD. Cerner already had the DOD contract when the VA contract was signed with Cerner. Lo and behold, Oracle bought out Cerner. Do you know how many veterans' and military records they have? The largest health system in our country.
There have been MANY security issues with VA data. I only touch on that in the article, but I have linked some sources below about the VA data issues if you are curious to read further!
Ex-Oracle here. Larry said during a company-wide town hall he thought our healthcare data should be shared with police to “improve police responses to those with mental health issues.” 🚩 🚩 🚩
We need to find the best lawyer in America and file a large class action lawsuit for the REAL selling off of our data. We’ve known about Larry Ellison but definitely weren’t making the connections that needed to be made.
Propaganda runs so deep, a lot of us are, by design, still trying to heal from the past and just be functioning humans. This type of journalism is what keeps us motivated and reminds us WHY it’s so important to heal and do better for the next generation.
Thank you for doing the work and putting all of this together for those of us that are so deeply trying to heal and break out of the shit our predecessors left without prior knowledge, themselves. In here for this and fully support every message you’ve put out. Not because I’m blindly following, but because WE are finally seeing what their design has always been; to try to be God themselves.
Honestly would love for you, jimmythegiant, and flesh aimulator to have a conversation or collaborate. All 3 of you are amazing at what you do, and all in different ways, from different perspectives and different topics within politics. The 3 of you make me scared for my future.
I strongly agree with what you are saying.
What a kind thing to say, thank you so much for taking the time to say this.
Thank you for what you are doing.
I have never thought of Orwell as a writer of children’s books, until now.
Lmao, Orwell would blush listening to Larry Ellison.
Makes one feel defiled. Thanks for doing this series
Ian Carroll
I'm sending you so much love
glad you’re leaving TikTok. never had the app, but ppl need somewhere to post what’s going on.
@the drey dossier - there was a congressional hearing on PBS - check it out.
Also Peter Thiel and Palantir
https://open.substack.com/pub/thelastchord/p/should-we-allow-ai-into-healthcare?r=5a9uix&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
Have a senator read the Epstein survivors' list on the floor; that'll shake things up.
Wait, they’re training AI on private health data? That’s wild.
Remember this?