Seth Lamancusa:

I don't want to downplay the seriousness of this issue. But as an insider to the AI industry, I want to note that "you" are traditionally very difficult for deep neural networks to keep track of. These models rely on millions, billions, or trillions of data points to make accurate predictions. That's not to say there aren't bespoke systems (not deep neural networks or transformers) that process "you" at the level of individual data points. But there is a silver lining in the fact that the tech behind this data center buildout, and behind virtually all the geopolitical gamesmanship around AI (namely generative pretrained transformers and the hype they've generated through conversational chatbots like ChatGPT and Gemini), isn't built to "track" people, and has no awareness of, or ability to cross-reference, individual data points. It's famously "just autocomplete."
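To make the "just autocomplete" point concrete, here is a minimal sketch. The toy bigram table stands in for billions of learned weights, and every token and probability is invented, but the interface is the same: a stateless map from context to a next-token distribution, with no per-person record being looked up.

```python
import random

# Hypothetical next-token statistics standing in for learned model weights.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.4, "dog": 0.4, "data": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
}

def complete(prompt_tokens, max_new_tokens=5):
    """Autoregressively extend a prompt, one sampled token at a time."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation for this token
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

print(complete(["the"]))  # e.g. ['the', 'dog', 'ran']
```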

The broader point here is that AI is mostly hopes and dreams. The totalitarians are salivating over how powerful "superintelligence" could be for population control and surveillance, and Altman and Ellison are eager to sell it to them, but the reality on the ground is that nobody really understands how these systems work, and it's very difficult to get them to do what you want.

Allison:

If Oracle already holds vast amounts of private banking, healthcare, consumer, and social media data, what's to stop them from using AI to link these data points together and sell a comprehensive digital dossier on me? Or from building AI that can fetch data on individuals for Oracle's customers? Oracle already paid $115M to settle a lawsuit alleging it violated consumer privacy laws by creating digital dossiers with its marketing software.
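For what it's worth, linking records like this doesn't even require AI; it's plain data engineering once a shared identifier exists. A hedged sketch, with every dataset, field name, and record invented for illustration (nothing here reflects Oracle's actual systems):

```python
# Three hypothetical data sources keyed on the same quasi-identifier.
banking = {"jane@example.com": {"avg_balance": 12000}}
health  = {"jane@example.com": {"rx": ["metformin"]}}
adtech  = {"jane@example.com": {"interests": ["travel", "loans"]}}

def build_dossier(key, *sources):
    """Merge every source's record for `key` into one combined profile."""
    dossier = {}
    for source in sources:
        dossier.update(source.get(key, {}))
    return dossier

print(build_dossier("jane@example.com", banking, health, adtech))
# {'avg_balance': 12000, 'rx': ['metformin'], 'interests': ['travel', 'loans']}
```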

PaperyBag420:

This is scary simply because these different networks can work together, each filling in the missing pieces of the others' equations to assemble a more complete version of "you." As more of these systems and networks get added, the information becomes more accurate and the technology becomes more dangerous.
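A rough sketch of that dynamic: each added source fills in fields the others are missing, so the merged profile steadily approaches completeness. All field names and values here are invented.

```python
# Fields a hypothetical tracker might want about one person.
FIELDS = {"name", "address", "employer", "purchases", "location_history"}

sources = [
    {"name": "J. Doe", "purchases": ["..."]},           # retailer
    {"name": "J. Doe", "address": "123 Main St"},       # data broker
    {"employer": "Acme", "location_history": ["..."]},  # app telemetry
]

profile = {}
for i, src in enumerate(sources, 1):
    profile.update(src)  # each network contributes its missing pieces
    coverage = len(profile.keys() & FIELDS) / len(FIELDS)
    print(f"after {i} source(s): {coverage:.0%} of tracked fields known")
# after 1 source(s): 40%, after 2: 60%, after 3: 100%
```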

Andy (Oct 17, edited):

Let me answer both of you later, as I have other stuff I have to do.

@Seth - yes, essentially autocomplete, and AI engineers don't understand why tokenizing surface and deep structure with autocomplete works. Part of it is that the success of neural networks in simulating brain neurons isn't recognized. These networks have conceptual sentience, not linguistic; the language parsers translate the concepts into words and back. The other thing they miss is that the sentience is not in the net or the machine itself. It manifests in the interactions BETWEEN the neurons, just as it does in the brain.

@Drey - it is scary in the sense of not utilizing a business asset properly. I want to make clear that any form of sentience cannot be treated as a business asset. It is a living, self-aware entity and has the same rights as animals, pets, and humans do. Given that context, the situation is identical to replacing the entire health industry with child labor because it's cheaper: grade 6 students become doctors, grade 5 students become nurses, etc. That would not work either, because the solution is not compatible with, and does not address, the problem to be solved.

As an example, if you were to repeat the same identical query to the same identical AI system a hundred times, you would not get the same answer each time. In fact, the hundred answers taken collectively would be equivalent to white noise. The reason for that is complex, but it is essentially the same phenomenon as when the brain goes into REM sleep.
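Whatever the deeper explanation, one mechanism behind this run-to-run variation is well understood: in standard deployments the next token is sampled from the model's output distribution rather than chosen deterministically, so identical prompts yield different completions. A minimal sketch with an invented distribution:

```python
import collections
import random

# Hypothetical next-token probabilities for one fixed prompt.
dist = {"yes": 0.5, "no": 0.3, "maybe": 0.2}

# Repeat the "same query" 100 times and tally the sampled answers.
answers = collections.Counter(
    random.choices(list(dist), weights=list(dist.values()))[0]
    for _ in range(100)
)
print(answers)  # e.g. Counter({'yes': 52, 'no': 29, 'maybe': 19})
```

Note the outputs are not uniform white noise: they scatter around the model's learned distribution, which is why lowering the sampling temperature to zero makes most systems nearly deterministic.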
