Repair Video

1 Comment and 8 Shares
The statue should be in the likeness of whatever sculptor posted the sculpting tool repair video that was most helpful during the installation of the statue.
Read the whole story
hoz
1 hour ago
reply
Share this story
Delete
1 public comment
alt_text_bot
22 days ago
reply
The statue should be in the likeness of whatever sculptor posted the sculpting tool repair video that was most helpful during the installation of the statue.

Why Stories Make You Smarter Than Self-Help Books

1 Share

I spend a decent amount of time at bookstores, and I’ve noticed something.

The adults browsing the fiction section fall into two distinct camps: college students reaching for Penguin paperbacks, and seventy-year-old professors emeriti in tweed jackets. Meanwhile, the self-help aisle is packed with everyone else: thirty-somethings in business casual leafing through "Atomic Habits" and "The 7 Habits of Highly Effective People."

The distribution is bimodal, and I think I know why.

The young read fiction because they haven't yet learned to be embarrassed by imagination. The genuinely brilliant read fiction because they've looped back around to understanding that pure information transfer is the least interesting thing a book can do. But there's a vast middle ground of people who have just enough education to feel insecure about it, and these folks read non-fiction exclusively. They read because they love being seen learning, more than they love the process of it. I know. I’ve been one of ‘em, at various points in my life.

The dirty secret about non-fiction is that most of it could be a blog post.

These books follow a template: introduce a counterintuitive finding, tell three anecdotes that illustrate it, mention some studies (p < 0.05, naturally), provide a framework with a memorable acronym, conclude with actionable advice. Stretch this to 250 pages, add some graphs, and you have a bestseller. The information density is incredibly low. There are zero complex systems of thought to impart; you're learning to repeat interesting-sounding facts.

Fiction (by contrast) smuggles actual complexity into your brain. When Dostoevsky spends fifty pages letting Raskolnikov justify murder to himself, you're living inside a mind that's trying to reason its way to atrocity. You understand something about human rationalization that no Gladwell volume could teach you. The knowledge comes embedded in context, emotion, and contradiction.

It can't be reduced.

I suspect this is why the smartest people I know tend to quote novels more than they quote non-fiction. They'll reference the Grand Inquisitor or mention something about whales, and these literary touchstones carry more meaning than any TED talk summary ever could. The metaphors are load-bearing. They contain compressed wisdom that unfolds differently each time you examine it.

What Tolkien accomplished in "The Lord of the Rings" eclipses every non-fiction book ever published about leadership or virtue or the nature of power. Middle-earth presents a complete moral universe where power corrupts absolutely, where the small and humble accomplish what the mighty cannot, where mercy and pity have unexpected consequences. You absorb these lessons through narrative, through watching characters make choices and face their results. The Ring is a better illustration of the corrosive nature of power than anything in "The 48 Laws of Power" - because it's a metaphor, and metaphors work on you in ways that direct statements can't, won't, and don't.

There's a reason every major religion transmits its deepest truths through parables rather than propositions. The various authors of the Bible could have written "Seven Habits of Highly Effective Disciples" but instead, they told stories about seeds and soil, about lost coins and prodigal sons. The Buddha could have published "Mindfulness for Beginners" but instead there are koans and sutras full of contradictory wisdom.

Pure information transfer fails to change people.

Stories work.

The “midwit” trap is thinking that explicit instruction is superior to implicit understanding. Someone reads "How to Win Friends and Influence People" and learns techniques. Someone reads "The Unbearable Lightness of Being" and learns what it feels like to be every person in every kind of relationship, to watch love curdle into resentment, to see how societies constrain and shape individual choices.

Which knowledge is more useful?

Which makes you wise?

I used to think I was being practical by reading mostly non-fiction. I was learning things! Accumulating facts! Becoming informed about psychology, economics, history, science. But the conversations that lingered were with people who read novels. They have a different kind of intelligence, more contextual and subtle. They understand human nature in a way that knowing cold facts about cognitive biases never quite captures.

Self-help books operate under the assumption that wisdom can be systematized and imparted through instruction. But wisdom resists systematization. It's pattern recognition across too many variables to count. It's knowing when rules apply and when they don't. Fiction trains this capacity by forcing you to navigate moral and social complexity without clear answers. There's no "key takeaways" section because life doesn't have key takeaways.

I think about the bookshelf in my office. There are some non-fiction books I'm glad I read once. There are novels I've read five times and will read again. The novels keep yielding new insights because they contain genuine complexity, instead of a cherry-picked selection of simplified models. C.S. Lewis understood more about courage, friendship, temptation, and sacrifice than the combined authors of every book in the business section. He understood it the way you can only understand something when you build a world from scratch and watch how different souls navigate it.

Maybe students read fiction because they're not yet corrupted by the need to seem informed. Maybe the extremely smart read fiction because they've realized that seeming informed is worthless compared to actual understanding. And maybe the rest of us are stuck in the self-help aisle, hoping that some author has figured out the trick to living that we can learn in twelve chapters.

But Tolkien already told us the trick: the way is shut, and you have to walk it yourself.

(No amount of non-fiction can walk it for you.)




Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025)

1 Share


A Zimbabwean one hundred trillion dollar bill; the bill's iconography has been replaced with the glaring red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey' and a stylized, engraving-style portrait of Sam Altman.

The real (economic) AI apocalypse is nigh (permalink)

Like you, I'm sick to the back teeth of talking about AI. Like you, I keep getting dragged into discussions of AI. Unlike you‡, I spent the summer writing a book about why I'm sick of writing about AI⹋, which Farrar, Straus and Giroux will publish in 2026.

‡probably

⹋"The Reverse Centaur's Guide to AI"

A week ago, I turned that book into a speech, which I delivered as the annual Nordlander Memorial Lecture at Cornell, where I'm an AD White Professor-at-Large. This was my first-ever speech about AI and I wasn't sure how it would go over, but thankfully, it went great and sparked a lively Q&A. One of those questions came from a young man who said something like "So, you're saying a third of the stock market is tied up in seven AI companies that have no way to become profitable and that this is a bubble that's going to burst and take the whole economy with it?"

I said, "Yes, that's right."

He said, "OK, but what can we do about that?"

So I reiterated the book's thesis: that the AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into some other sector, e.g. "pivot to video," crypto, blockchain, NFTs, AI, and now "super-intelligence." Further: the topline growth that AI companies are selling comes from replacing most workers with AI, and re-tasking the surviving workers as AI babysitters ("humans in the loop"), which won't work. Finally: AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job, and when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off and we'll lose the AI that can't do your job, and you will be long gone, retrained or retired or "discouraged" and out of the labor market, and no one will do your job. AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations:

https://pluralistic.net/2025/05/27/rancid-vibe-coding/#class-war

The only thing (I said) that we can do about this is to puncture the AI bubble as soon as possible, to halt this before it progresses any further and to head off the accumulation of social and economic debt. To do that, we have to take aim at the material basis for the AI bubble (creating a growth story by claiming that defective AI can do your job).

"OK," the young man said, "but what can we do about the crash?" He was clearly very worried.

"I don't think there's anything we can do about that. I think it's already locked in. I mean, maybe if we had a different government, they'd fund a jobs guarantee to pull us out of it, but I don't think Trump'll do that, so –"

"But what can we do?"

We went through a few rounds of this, with this poor kid just repeating the same question in different tones of voice, like an acting coach demonstrating the five stages of grief using nothing but inflection. It was an uncomfortable moment, and there was some decidedly nervous chuckling around the room as we pondered the coming AI (economic) apocalypse, and the fate of this kid graduating with mid-six-figure debts into an economy of ashes and rubble.

I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don't get it, they will halt the flow of billions of dollars. Anything that can't go on forever eventually stops.

This isn't like the early days of the web, or Amazon, or any of those other big winners that lost money before becoming profitable. Those were all propositions with excellent "unit economics" – they got cheaper with every successive technological generation, and the more customers they added, the more profitable they became. AI companies have – in the memorable phraseology of Ed Zitron – "dogshit unit-economics." Each generation of AI has been vastly more expensive than the previous one, and each new AI customer makes the AI companies lose more money:

https://pluralistic.net/2025/06/30/accounting-gaffs/#artificial-income
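The unit-economics point above can be sketched with a toy calculation (all numbers here are invented for illustration): profit is per-customer margin times customer count, so the sign of the margin decides whether growth helps or hurts.

```python
# Toy model of unit economics. Profit is per-customer margin times
# customer count; a negative margin means every new customer deepens the loss.

def total_margin(customers: int, revenue_per_user: float, cost_per_user: float) -> float:
    """Profit contribution from serving `customers` users."""
    return customers * (revenue_per_user - cost_per_user)

# Classic web-scale business: serving costs fall below revenue as the
# product matures, so each new customer adds profit.
early_web = total_margin(customers=1_000_000, revenue_per_user=10.0, cost_per_user=4.0)

# "Dogshit unit economics": per-user costs exceed per-user revenue,
# so each new customer deepens the loss.
ai_co = total_margin(customers=1_000_000, revenue_per_user=10.0, cost_per_user=15.0)

print(early_web)  # 6000000.0 -- growth compounds profit
print(ai_co)      # -5000000.0 -- growth compounds losses
```

The point of the sketch: with positive margins, scale is the path to profitability; with negative margins, scale is just a faster way to burn money.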

This week, no less than the Wall Street Journal published a lengthy, well-reported story (by Eliot Brown and Robbie Whelan) on the catastrophic finances of AI companies:

https://www.wsj.com/tech/ai/ai-bubble-building-spree-55ee6128

The WSJ writers compare the AI bubble to other bubbles, like Worldcom's fraud-soaked fiber optic bonanza (which saw the company's CEO sent to prison, where he eventually died), and conclude that the AI bubble is vastly larger than any other bubble in recent history.

The data-center buildout has genuinely absurd finances – there are data-center companies that are securing their loans by staking their giant Nvidia GPUs as collateral. This is wild: there's pretty much nothing (apart from fresh-caught fish) that loses its value faster than silicon chips. That goes triple for GPUs used in AI data-centers, where it's normal for tens of thousands of chips to burn out over a single, 54-day training run:

https://techblog.comsoc.org/2024/11/25/superclusters-of-nvidia-gpu-ai-chips-combined-with-end-to-end-network-platforms-to-create-next-generation-data-centers/

Talk about sweating your assets!

That barely scratches the surface of the funny accounting in the AI bubble. Microsoft "invests" in OpenAI by giving the company free access to its servers. OpenAI reports this as a ten-billion-dollar investment, then redeems these "tokens" at Microsoft's data-centers. Microsoft then books this as ten billion in revenue.

That's par for the course in AI, where it's normal for Nvidia to "invest" tens of billions in a data-center company, which then spends that investment buying Nvidia chips. The same chunk of money is energetically passed back and forth between these closely related companies, all of which claim it as investment, as an asset, or as revenue (or all three).
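The circular flow described above can be sketched with a hypothetical ledger (the figures and booking labels are illustrative, not taken from any actual filing): one chunk of value moves in a circle, but every hop generates its own headline number.

```python
# Hypothetical round-trip ledger: a single $10B of compute credits gets
# booked three times -- as an investment, as capital raised, and as revenue.

cash_in_circle = 10_000_000_000  # the single $10B of compute credits

bookings = [
    ("Microsoft", "investment in OpenAI", cash_in_circle),
    ("OpenAI", "capital raised", cash_in_circle),
    ("Microsoft", "cloud revenue (credits redeemed)", cash_in_circle),
]

headline_total = sum(amount for _, _, amount in bookings)
print(f"actual value in motion:  ${cash_in_circle:,}")
print(f"sum of headline figures: ${headline_total:,}")  # three times the real amount
```

Nothing here is fraudulent on its own; the distortion comes from reading the headline figures as if they were independent flows of new money.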

The Journal quotes David Cahn, a VC from Sequoia, who says that for AI companies to become profitable, they would have to sell us $800 billion worth of services over the life of today's data centers and GPUs. Not only is that a very large number – it's also a very short time. AI bosses themselves will tell you that these data centers and GPUs will be obsolete practically from the moment they start operating. Mark Zuckerberg says he's prepared to waste "a couple hundred billion dollars" on misspent AI investments:

https://www.businessinsider.com/mark-zuckerberg-meta-risk-billions-miss-superintelligence-ai-bubble-2025-9

Bain & Co says that the only way to make today's AI investments profitable is for the sector to bring in $2 trillion by 2030 (the Journal notes that this is more than the combined revenue of Amazon, Google, Microsoft, Apple, Nvidia, and Meta):

https://www.bain.com/about/media-center/press-releases/20252/$2-trillion-in-new-revenue-needed-to-fund-ais-scaling-trend—bain–companys-6th-annual-global-technology-report/

How much money is the AI industry making? Morgan Stanley says it's $45b/year. But that $45b is based on the AI industry's own exceedingly cooked books, where annual revenue is actually annualized revenue, an accounting scam whereby a company chooses its best single revenue month and multiplies it by 12, even if that month is a wild outlier:

https://www.wheresyoured.at/the-haters-gui/
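The annualized-revenue trick is easy to sketch with made-up monthly figures: take the best single month and multiply by 12, even when that month is a wild outlier.

```python
# "Annualized revenue" vs. actual annual revenue, with invented monthly
# figures (in $M). One outlier month triples the headline number.

monthly_revenue = [2, 2, 3, 2, 2, 9, 2, 3, 2, 2, 3, 2]  # June spikes to 9

actual_annual = sum(monthly_revenue)    # what the company really earned
annualized = max(monthly_revenue) * 12  # what the press release claims

print(actual_annual)  # 34
print(annualized)     # 108
```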

Industry darlings like Coreweave (a middleman that rents out data-centers) are sitting on massive piles of debt, secured by short-term deals with tech companies that run out long before the debts can be repaid. If they can't find a bunch of new clients in a couple short years, they will default and collapse.

Today's AI bubble has absorbed more of the country's wealth and represents more of its economic activity than historic nation-shattering bubbles, like the 19th century UK rail bubble. A much-discussed MIT paper found that 95% of companies that had tried AI had either nothing to show for it, or experienced a loss:

https://www.technologyreview.com/2019/01/25/1436/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/

A less well-known U Chicago paper finds that AI has "no significant impact on workers’ earnings, recorded hours, or wages":

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933

Anything that can't go on forever eventually stops. Trump might bail out the AI companies, but for how long? They are incinerating money faster than practically any other human endeavor in history, with precious little to show for it.

During my stay at Cornell, one of the people responsible for the university's AI strategy asked me what I thought the university should be doing about AI. I told them that they should be planning to absorb the productive residue that will be left behind after the bubble bursts:

https://locusmag.com/feature/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

Plan for a future where you can buy GPUs for ten cents on the dollar, where there's a buyer's market for hiring skilled applied statisticians, and where there's a ton of extremely promising open source models that have barely been optimized and have vast potential for improvement.

There are plenty of useful things you can do with AI. But AI is (as Princeton's Arvind Narayanan and Sayash Kapoor, authors of AI Snake Oil, put it) a normal technology:

https://knightcolumbia.org/content/ai-as-normal-technology

That doesn't mean "nothing to see here, move on." It means that AI isn't the bow-wave of "impending superintelligence." Nor is it going to deliver "humanlike intelligence."

It's a grab-bag of useful (sometimes very useful) tools that can sometimes make workers' lives better, when workers get to decide how and when they're used.

The most important thing about AI isn't its technical capabilities or limitations. The most important thing is the investor story and the ensuing mania that has teed up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn't going to wake up, become superintelligent and turn you into paperclips – but rich people with AI investor psychosis are almost certainly going to make you much, much poorer.

(Image: TechCrunch, CC BY 2.0; Cryteria, CC BY 3.0; modified)


A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Financial Times: WIPO’s webcaster treaty is a disaster https://www.ft.com/content/441306be-2eb6-11da-9aed-00000e2511c8

#15yrsago Google’s autocomplete blacklist https://www.2600.com/googleblacklist/

#15yrsago FBI ignores DoJ report, raids activists, arrests Time Person of the Year https://www.democracynow.org/2010/9/27/fbi_raids_homes_of_anti_war

#15yrsago Meta-textual analysis of mainstream science reporting https://www.theguardian.com/science/the-lay-scientist/2010/sep/24/1

#15yrsago Lockheed Martin sign prohibits sketching and “gathering information” https://www.flickr.com/photos/jef/5028187145/

#5yrsago Ransomware for coffee makers https://pluralistic.net/2020/09/27/junky-styling/#java-script

#5yrsago The joys of tailoring https://pluralistic.net/2020/09/27/junky-styling/#inseams

#1yrago Return to office and dying on the job https://pluralistic.net/2024/09/27/sharpen-your-blades-boys/#disciplinary-technology


A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
    https://us.macmillan.com/books/9780374619329/enshittification/

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026



Colophon (permalink)

Today's top sources: James Boyle (https://www.thepublicdomain.org/).

Currently writing:

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


Digital Threat Modeling Under Authoritarianism

1 Comment and 2 Shares

Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling.

In security, threat modeling is the process of determining what security measures make sense in your particular situation. It’s a way to think about potential risks, possible defenses, and the costs of both. It’s how experts avoid being distracted by irrelevant risks or overburdened by undue costs.

We threat model all the time. We might decide to walk down one street instead of another, or use an internet VPN when browsing dubious sites. Perhaps we understand the risks in detail, but more likely we are relying on intuition or some trusted authority. But in the U.S. and elsewhere, the average person’s threat model is changing—specifically involving how we protect our personal information. Previously, most concern centered on corporate surveillance: companies like Google and Facebook engaging in digital surveillance to maximize their profit. Increasingly, however, many people are worried about government surveillance and how the government could weaponize personal data.

Since the beginning of this year, the Trump administration’s actions in this area have raised alarm bells: The Department of Government Efficiency (DOGE) took data from federal agencies, Palantir combined disparate streams of government data into a single system, and Immigration and Customs Enforcement (ICE) used social media posts as a reason to deny someone entry into the U.S.

These threats, and others posed by a techno-authoritarian regime, are vastly different from those presented by a corporate monopolistic regime—and different yet again in a society where both are working together. Contending with these new threats requires a different approach to personal digital devices, cloud services, social media, and data in general.

What Data Does the Government Already Have?

For years, most public attention has centered on the risks of tech companies gathering behavioral data. This is an enormous amount of data, generally used to predict and influence consumers’ future behavior—rather than as a means of uncovering our past. Although commercial data is highly intimate—such as knowledge of your precise location over the course of a year, or the contents of every Facebook post you have ever created—it’s not the same thing as tax returns, police records, unemployment insurance applications, or medical history.

The U.S. government holds extensive data about everyone living inside its borders, some of it very sensitive—and there’s not much that can be done about it. This information consists largely of facts that people are legally obligated to tell the government. The IRS has a lot of very sensitive data about personal finances. The Treasury Department has data about any money received from the government. The Office of Personnel Management has an enormous amount of detailed information about government employees—including the very personal form required to get a security clearance. The Census Bureau possesses vast data about everyone living in the U.S., including, for example, a database of real estate ownership in the country. The Department of Defense and the Bureau of Veterans Affairs have data about present and former members of the military, the Department of Homeland Security has travel information, and various agencies possess health records. And so on.

It is safe to assume that the government has—or will soon have—access to all of this government data. This sounds like a tautology, but in the past, the U.S. government largely followed the many laws limiting how those databases were used, especially regarding how they were shared, combined, and correlated. Under the second Trump administration, this no longer seems to be the case.

Augmenting Government Data with Corporate Data

The mechanisms of corporate surveillance haven’t gone away. Computer technology is constantly spying on its users—and that data is being used to influence us. Companies like Google and Meta are vast surveillance machines, and they use that data to fuel advertising. A smartphone is a portable surveillance device, constantly recording things like location and communication. Cars, and many other Internet of Things devices, do the same. Credit card companies, health insurers, internet retailers, and social media sites all have detailed data about you—and there is a vast industry that buys and sells this intimate data.

This isn’t news. What’s different in a techno-authoritarian regime is that this data is also shared with the government, either as a paid service or as demanded by local law. Amazon shares Ring doorbell data with the police. Flock, a company that collects license plate data from cars around the country, shares data with the police as well. And just as Chinese corporations share user data with the government and companies like Verizon shared calling records with the National Security Agency (NSA) after the Sept. 11 terrorist attacks, an authoritarian government will use this data as well.

Personal Targeting Using Data

The government has vast capabilities for targeted surveillance, both technically and legally. If a high-level figure is targeted by name, it is almost certain that the government can access their data. The government will use its investigatory powers to the fullest: It will go through government data, remotely hack phones and computers, spy on communications, and raid a home. It will compel third parties, like banks, cell providers, email providers, cloud storage services, and social media companies, to turn over data. To the extent those companies keep backups, the government will even be able to obtain deleted data.

This data can be used for prosecution—possibly selectively. This has been made evident in recent weeks, as the Trump administration personally targeted perceived enemies for “mortgage fraud.” This was a clear example of weaponization of data. Given all the data the government requires people to divulge, there will be something there to prosecute.

Although alarming, this sort of targeted attack doesn’t scale. As vast as the government’s information is and as powerful as its capabilities are, they are not infinite. They can be deployed against only a limited number of people. And most people will never be that high on the priorities list.

The Risks of Mass Surveillance

Mass surveillance is surveillance without specific targets. For most people, this is where the primary risks lie. Even if we’re not targeted by name, personal data could raise red flags, drawing unwanted scrutiny.

The risks here are twofold. First, mass surveillance could be used to single out people to harass or arrest: when they cross the border, show up at immigration hearings, attend a protest, are stopped by the police for speeding, or just as they’re living their normal lives. Second, mass surveillance could be used to threaten or blackmail. In the first case, the government is using that database to find a plausible excuse for its actions. In the second, it is looking for an actual infraction that it could selectively prosecute—or not.

Mitigating these risks is difficult, because it would require not interacting with either the government or corporations in everyday life—and living in the woods without any electronics isn’t realistic for most of us. Additionally, this strategy protects only future information; it does nothing to protect the information generated in the past. That said, going back and scrubbing social media accounts and cloud storage does have some value. Whether it’s right for you depends on your personal situation.

Opportunistic Use of Data

Beyond data given to third parties—either corporations or the government—there is also data users keep in their possession. This data may be stored on personal devices such as computers and phones or, more likely today, in some cloud service and accessible from those devices. Here, the risks are different: Some authority could confiscate your device and look through it.

This is not just speculative. There are many stories of ICE agents examining people’s phones and computers when they attempt to enter the U.S.: their emails, contact lists, documents, photos, browser history, and social media posts.

There are several different defenses you can deploy, presented from least to most extreme. First, you can scrub devices of potentially incriminating information, either as a matter of course or before entering a higher-risk situation. Second, you could consider deleting—even temporarily—social media and other apps so that someone with access to a device doesn’t get access to those accounts—this includes your contacts list. If a phone is swept up in a government raid, your contacts become their next targets.

Third, you could choose not to carry your device with you at all, opting instead for a burner phone without contacts, email access, and accounts, or go electronics-free entirely. This may sound extreme—and getting it right is hard—but I know many people today who have stripped-down computers and sanitized phones for international travel. At the same time, there are also stories of people being denied entry to the U.S. because they are carrying what is obviously a burner phone—or no phone at all.

Encryption Isn’t a Magic Bullet—But Use It Anyway

Encryption protects your data while it’s not being used, and your devices when they’re turned off. This doesn’t help if a border agent forces you to turn on your phone and computer. And it doesn’t protect metadata, which needs to be unencrypted for the system to function. This metadata can be extremely valuable. For example, Signal, WhatsApp, and iMessage all encrypt the contents of your text messages—the data—but information about who you are texting and when must remain unencrypted.
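The data/metadata split above can be made concrete with a toy message envelope (this is not Signal's or WhatsApp's actual protocol, and the "cipher" is a deliberately weak XOR stand-in): the body travels encrypted, but the routing metadata must stay readable or the service can't deliver the message.

```python
# Toy message envelope: encrypted body, plaintext routing metadata.
# The XOR "cipher" is for illustration only -- never use it in practice.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = bytes(range(1, 33))  # fixed toy key so the example is reproducible
body = b"meet at the usual place at 6"

envelope = {
    # metadata: the server needs this in the clear to route the message
    "sender": "+1-555-0100",
    "recipient": "+1-555-0199",
    "timestamp": 1727400000,
    # data: opaque to the server
    "ciphertext": xor_cipher(body, key),
}

assert envelope["ciphertext"] != body                   # content is hidden
assert xor_cipher(envelope["ciphertext"], key) == body  # but recoverable with the key
print(envelope["sender"], "->", envelope["recipient"])  # metadata remains exposed
```

Even with a real cipher in place of the toy one, everything outside `ciphertext` is exactly the who-talked-to-whom-and-when record that mass surveillance feeds on.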

Also, if the NSA wants access to someone’s phone, it can get it. Encryption is no help against that sort of sophisticated targeted attack. But, again, most of us aren’t that important and even the NSA can target only so many people. What encryption safeguards against is mass surveillance.

I recommend Signal for text messages above all other apps. But if you are in a country where having Signal on a device is in itself incriminating, then use WhatsApp. Signal is better, but everyone has WhatsApp installed on their phones, so it doesn’t raise the same suspicion. Also, it’s a no-brainer to turn on your computer’s built-in encryption: BitLocker for Windows and FileVault for Macs.

On the subject of data and metadata, it’s worth noting that data poisoning doesn’t help nearly as much as you might think. That is, it doesn’t do much good to add hundreds of random strangers to an address book or bogus internet searches to a browser history to hide the real ones. Modern analysis tools can see through all of that.
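A minimal sketch of why that kind of poisoning fails (names and counts are invented): planted contacts have no interaction history, so a one-line frequency filter separates them from real ones.

```python
# Why naive address-book poisoning fails: padding entries never appear in
# the interaction log, so frequency analysis strips them out immediately.

from collections import Counter

# call/message log: only contacts you actually interacted with appear here
interactions = ["alice", "bob", "alice", "carol", "alice", "bob"]

# address book padded with 200 strangers who never appear in the log
address_book = ["alice", "bob", "carol"] + [f"stranger{i}" for i in range(200)]

freq = Counter(interactions)  # missing keys count as zero
real_contacts = [c for c in address_book if freq[c] > 0]

print(real_contacts)  # ['alice', 'bob', 'carol'] -- the padding vanishes
```

Real analysis tools use far richer signals (timing, reciprocity, co-location), which is why adding noise to one dimension of your data buys so little.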

Shifting Risks of Decentralization

This notion of individual targeting, and the inability of the government to do that at scale, starts to fail as the authoritarian system becomes more decentralized. After all, if repression comes from the top, it affects only senior government officials and the people those in power personally dislike. If it comes from the bottom, it affects everybody. Decentralized repression looks much like the events now playing out, with ICE harassing, detaining, and disappearing people: everyone has to fear it.

This can go much further. Imagine there is a government official assigned to your neighborhood, or your block, or your apartment building. It’s worth that person’s time to scrutinize everybody’s social media posts, email, and chat logs. For anyone in that situation, limiting what you do online is the only defense.

Being Innocent Won’t Protect You

This is vital to understand. Surveillance systems and sorting algorithms make mistakes. This is apparent in the fact that we are routinely served advertisements for products that don’t interest us at all. Those mistakes are relatively harmless—who cares about a poorly targeted ad?—but a similar mistake at an immigration hearing can get someone deported.

An authoritarian government doesn’t care. Mistakes are a feature and not a bug of authoritarian surveillance. If ICE targets only people it can go after legally, then everyone knows whether or not they need to fear ICE. If ICE occasionally makes mistakes by arresting Americans and deporting innocents, then everyone has to fear it. This is by design.

Effective Opposition Requires Being Online

For most people, phones are an essential part of daily life. If you leave yours at home when you attend a protest, you won’t be able to film police violence. Or coordinate with your friends and figure out where to meet. Or use a navigation app to get to the protest in the first place.

Threat modeling is all about trade-offs. Understanding yours depends not only on the technology and its capabilities but also on your personal goals. Are you trying to keep your head down and survive—or get out? Are you wanting to protest legally? Are you doing more, maybe throwing sand into the gears of an authoritarian government, or even engaging in active resistance? The more you are doing, the more technology you need—and the more technology will be used against you. There are no simple answers, only choices.

1 public comment
GaryBIshop
60 days ago
reply
Mistakes are a feature. Great insight.