from the frontlines of the netwar
meta, the tiktok ban, and a continued search for the truth online

On Friday, the Supreme Court heard oral arguments on whether the TikTok ban violates Americans’ First Amendment rights.
In a rare example of bipartisan support, Congress agreed last year that TikTok - a subsidiary of ByteDance, a China-based company - posed a national security threat. Under the law, ByteDance must divest TikTok to a U.S. company, or the app will no longer be allowed to operate in the country. After a series of appeals, the ban is set to take effect next Sunday, pending a last-minute Supreme Court decision.
The closer we get to January 19, the more mixed the reactions on my feed become. Black content creators have me kicking up my feet in laughter as they pantomime various iterations of “falling out,” frantically waving paper fans and showcasing parodied versions of ‘praise dances,’ seemingly accepting the end of an era while having some fun.
But there are also people - both those who make a living from content and those who simply enjoy watching it - who are angry and veer more toward the conspiratorial. The general sentiment among this crowd is that the TikTok ban is an orchestrated lobbying effort by Meta to claw its way back to relevance. Instagram became popular, so Facebook bought it. Vine rose in popularity, so Instagram introduced video. Snapchat came along, and then there were Instagram Stories.
In each instance, Instagram (Meta) was able to maintain its dominance by buying or copying competitors - that is, until TikTok. Instagram Reels are viewed as inherently inferior, both because they lack some of the basic UX functionality found on TikTok (such as the ability to pause a video) and because the personalized content they push is not as tailored. The For You Page really does seem like it is for you.
As a TikTok creator, I have a lot to lose should the ban go forward. I’ve spent the last seven years of my life creating content on YouTube, on various blogs, and as a bookstagram influencer. TikTok is far and away my favorite platform because it has allowed me to connect with a specific audience, which has led to friendships and networking opportunities in the media space that I likely wouldn’t have found otherwise. However, as a consultant - someone who analyzes power in relation to a specific topic or within an industry - I also can’t stop myself from asking a series of questions.
Who wins (and loses) by owning and controlling a platform used by 170 million Americans? Is there any merit in the national security claims?
Given people’s fears and lack of enthusiasm about a potential forced return to Instagram, I’ve begun seeing more and more videos about how people “would rather have China own their data than Mark Zuckerberg.” This is absurd for several reasons.
As much as there is the very real question of what companies do with our data, it feels like an intentional false equivalency. China and other foreign countries do have political motivations for sowing discord and general mistrust, both in perceptions of our government and in our opinions of each other. And what better way to understand the cultural dynamics, simmering racial tensions, and regional nuances of our country - and to further those efforts - than by creating a widely beloved social media app that can collect that data?
In the Information Age, governments no longer need to make lengthy and costly investments in traditional espionage. Social media provides the perfect opportunity to learn more about your desired target and how to influence them in ways that aren’t obvious. China doesn’t need to directly create propaganda; with the right algorithm, it can rely on Americans to do the work for it.
Modern warfare is no longer fought only on the front lines, with drones and heavy artillery. It is also fought online.
Researchers predicted this new form of war as far back as the 1970s, but it didn’t coalesce around a term until 1993, with John Arquilla and David Ronfeldt’s “Cyberwar Is Coming!” They predicted that future conflicts would resemble “a chess game where you see the entire board, but your opponent sees only its own pieces” and went on to define two types of potential conflict: netwar and cyberwar.
Cyberwar speaks more to the operations and tactics a military would deploy. Think of ‘smart weapons,’ like AI drones, or intentionally damaging enemy communications. Over the last thirty years, this has evolved into what we see today in Ukraine and Gaza. What’s more interesting, and speaks to this present moment when thinking about TikTok, is the idea of a netwar.
A netwar is about “trying to disrupt, damage, or modify what a target population ‘knows’ or thinks it knows about itself and the world around it.”
Netwars are all about controlling the narrative around information, how it is distributed, and whether the public believes what it reads, hears, or watches. In this sense, we are already engaged in a series of netwars, with the 2016 election marking a new, public phase that has only continued to escalate over the last eight years.
Arquilla and Ronfeldt question whether traditional nation-states will even have a role in future iterations of netwar. One need not work on behalf of a government to engage in it. In The Transformation of War, Martin van Creveld argues: “In the future, war will not be waged by armies but by groups whom today we call terrorists, guerrillas, bandits and robbers, but who will undoubtedly hit on more formal titles to describe themselves.” It’s why I have such great skepticism about Elon Musk and his motivations for buying Twitter and turning it into a soapbox that uplifts white supremacist content and influences politics both at home and abroad.

Social media has democratized information in that anyone can share a story by uploading text, an image, or a video to the internet. This was the great promise: anyone who felt they had something to say no longer needed the validation of ‘mainstream media’ to communicate with the masses, but could instead bring their own stories to life. However, deputizing anyone with an iPhone and WiFi as a trusted news source, instead of prioritizing those with the resources required to verify a story, raises a fundamental question: who gets to decide what’s true?
Welcome to the post-truth era.
The term “post-truth” was first used in a 1992 essay by Steve Tesich - a Serbian-American playwright - to reflect on how quickly the public seems to move on from seemingly big, important events. We would rather forget, or simply not know, than sit with what’s right in front of us. He coined ‘post-truth’ to capture our inability to actively grapple with events like Watergate (or, more recently, electing a felon to the highest office) and what that inability reveals about our capacity to stomach the uncomfortable reality found in the truth.
We are rapidly becoming prototypes of a people that totalitarian monsters could only drool about in their dreams. All the dictators up to now have had to work hard at suppressing the truth. We, by our actions, are saying that this is no longer necessary, that we have acquired a spiritual mechanism that can denude truth of any significance. In a very fundamental way we, as a free people, have freely decided that we want to live in some post-truth world.
- Steve Tesich
Nearly twenty-five years later, post-truth - used to “relate to or denote circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” - became the Word of the Year. It feels fitting, given that 2016 was a period marked by different levels of interference in both Brexit and the U.S. election. There was Cambridge Analytica and its use of Facebook profiles to develop software to predict, redirect, and influence Americans’ perceptions of Trump. Russian bots and agents engaged in coordinated misinformation attacks, as well as the leaking of hacked documents. Depending on your political beliefs, this was either shocking and viewed as credible information or all part of a larger plot to ‘drain the swamp.’
A year later, in 2017, Pew Research surveyed more than 1,100 academics, researchers, journalists, and others to understand whether this level of misinformation was a blip or a harbinger of what was to come. Fifty-one percent of respondents believed our media landscape and understanding of the issue would not improve, and we are now living with some of their predictions.
Bad actors will increasingly use “new digital tools to take advantage of humans’ inbred preference for comfort and convenience” by creating online echo chambers.
This is known as ‘homophily’: a tendency to listen to and associate with people like yourself while excluding the voices of those you don’t agree with. It’s how feeds tailored to either a liberal or a conservative audience can feel like they’re from two different realities. We are more likely to seek out things that reinforce our existing worldview, and the attention economy heightens this, as companies tweak algorithms to keep us online just a little longer each time. It’s part of why TikTok is so incredibly skilled at keeping us engaged. Its algorithm was at the heart of the oral arguments on Friday, with Justices questioning whether the case is really about free speech or more about corporate divestment.
Theoretically, TikTok could continue operating if it were sold to a U.S. company or made its algorithm open source. It raises the question - nearly six years later - of why no other company, including Meta, has been able to replicate the precision and fluidity of its algorithm. What’s the secret sauce, and when does it become a matter of national security?
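For what it’s worth, the basic feedback loop is simple enough to sketch. Below is a deliberately toy, hypothetical example in Python - the topic labels, the 0.5 exploration floor, and the ‘engagement’ rule are all invented for illustration, and nothing here reflects TikTok’s or Meta’s actual systems. It only shows the general mechanism: rank by past engagement, and within weeks the feed converges on whatever the user already prefers.

```python
from collections import Counter
import random

# Hypothetical topic buckets; a real recommender works over millions of
# signals, not four labels.
TOPICS = ["left_politics", "right_politics", "cooking", "sports"]

def recommend(history: Counter, n: int = 10) -> list[str]:
    """Rank by past engagement, with a small floor so no topic vanishes entirely."""
    weights = [history[topic] + 0.5 for topic in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

# A user who lingered on one topic a few times early on...
history = Counter({"left_politics": 3})

for _ in range(30):                      # ...a month of daily scrolling
    for post in recommend(history):
        if post == "left_politics":      # they engage only with what they already like
            history[post] += 1

print(history)  # the feed has converged almost entirely on one topic
```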
Pessimists in the study also pointed to the belief that our brains are not capable of handling this much technological change at once. Think of the rise in phone scams, no longer relegated to spam bots but instead leveraging voice technology to sound like loved ones begging you for a family member’s bond money. As AI continues to improve, it will become harder to tell what is human- versus machine-generated content.
Meta is already leveraging AI to create fake influencers. While the accounts were shut down days later, the episode raises doubts about what we know, or perceive to know, about the world around us. Are we talking to a real person, a foreign agent, or a bot? The fact that we increasingly can’t answer this question has led some to speculate that the internet will become a bot-filled wasteland of AI-generated slop - a prediction known as the ‘dead internet theory.’ It’s just bots, all the way down.
All of this erodes public trust, providing the perfect canvas for any bad-faith actor, whether a government or a corporation, to continue exploiting our fears for its own gain.
Earlier this week, Meta announced its decision to replace its independent, third-party fact-checkers with community notes, where the “community [can] decide when posts are potentially misleading and need more context.” Other decisions include rethinking what constitutes hate speech and, as of Friday, cutting the company’s DEI initiatives. Given that Meta created a Black, queer AI influencer who ‘self-reported’ that she was built by a team of almost entirely white men, with not a single Black person consulted, this is a troubling direction for several reasons.
Meta’s decision was clearly politically motivated, though I’m not surprised. While I’m not a fan of the mealy-mouthed man, Trump viewed him as a political adversary, even going so far as to say, “We are watching him closely, and if he does anything illegal this time he will spend the rest of his life in prison — as will others who cheat in the 2024 Presidential Election.” As Zuckerberg works to protect himself and his interests, there are open questions about the future of online spaces and how that will translate to the real world. What if ‘the community’ decides the Holocaust or slavery weren’t real?
As it becomes harder to discern what is real versus fake, what is AI versus reality, we will begin to see a digital divide between people who place a premium on trusted, verified information and those who either can’t, due to the cost, or won’t, because it does not fit their political narrative.
I think it’s naive to believe an app is just an app. Someone is always winning, and it often comes at the expense of our data. As technology continues to evolve, it will also likely become a tool for covert warfare and manipulation by both foreign actors and corporations.
While I’m generally a skeptic of the federal government, outside evidence and theory appear to support the claim that TikTok does pose a national security threat. For now, despite those risks, I will continue to use TikTok to further in-person connection and events, but should it go dark, I’m not exactly racing to find a new site outside of Substack. Instead, you can find me waiting to see what new front in the netwar opens next.