- TikTok acted up over the weekend. TikTok says that’s because of a data center outage.
- Still, some people believe the US version of TikTok is now censoring anti-Trump views.
- They’re probably wrong. But TikTok, like all other platforms, runs on a black-box algorithm. It’s nearly impossible to find out how it works.
Say a Trump-friendly group of owners bought TikTok, and wanted to change the way TikTok worked — so it showed more stuff that was Trump-friendly, and less that wasn’t.
How would they go about doing that?
I’m no engineer, but I would think that would be a pretty difficult project. You’d need to do it pretty subtly, over an extended time period, so people wouldn’t notice it right away. Right?
Wrong, say some people who use TikTok. They point to Sunday, when TikTok stopped showing anything related to the death of Minneapolis protester Alex Pretti. They also note that at one point, TikTok prohibited users from using the word “Epstein” in direct messages to each other. And some TikTok creators insist that anti-Trump posts they created weren’t getting the same distribution they used to.
And all of this, they say, is related to the fact that late last week, a group of companies from the US and Abu Dhabi took control of (some of) TikTok’s US operations, via a deal brokered by the Trump administration.
The new US TikTok says that’s not the case. It says its entire operation was screwed up Sunday because one of Oracle’s data centers had an outage. (TikTok didn’t formally identify Oracle in its statements, but on Tuesday night, Oracle raised its hand and acknowledged that it was the “US Data Center partner” TikTok was referring to. Oracle now owns 15% of the new US TikTok.) It also says the Epstein direct message issue was a bug it is fixing, though it hasn’t described how the bug came to be.
And all of that seems pretty plausible to me.
On Saturday, my feed was full of posts about the latest killing of a protester in Minneapolis by federal agents. And on Sunday, those were all gone — but so was everything else I normally saw on my TikTok, like Liverpool soccer highlights and copyright-violating clips of action movies and TV shows. Instead, I got clip after clip of videos extolling the virtues of Corvettes.
But on Monday, everything was back to normal: My TikTok was full of rage-inducing clips about Alex Pretti’s death, including multiple ones where creators urged viewers to learn how to use guns to defend themselves from the government. And while I don’t normally use TikTok’s DM feature, or send messages about Jeffrey Epstein, I was able to do that without a problem on Tuesday.
All of which makes TikTok’s answer — we had to unplug the machine and plug it back in, basically — seem far more plausible than, “The new US TikTok has immediately started censoring anti-Trump views.”
But that’s the theory that a bunch of high-profile people, including California Gov. Gavin Newsom and Sen. Chris Murphy, are running with.
And while some of the people complaining may have a specific motivation for their complaints — both Newsom and Murphy are consistent critics of the Trump administration, for instance — it’s easy enough to understand why people might think TikTok really is up to something.
For starters, one of the reasons US TikTok is no longer run entirely by ByteDance, its original owner, is that US lawmakers told us they were worried that China might influence what TikTok users saw in America. But now US TikTok is owned by Trump allies — most notably Oracle founder Larry Ellison — and Trump has made it clear that he wants media and tech companies to promote things he likes and demote things he doesn’t. So it’s not outrageous to believe that TikTok’s new bosses might put their thumb on the scale for Trump.
But the bigger issue is that it’s nearly impossible, from the outside, to understand why a platform’s algorithm behaves the way it does. You can certainly make informed guesses, and you might be right. But you’re still guessing, and your guessing is often informed by the way you think the platform should operate.
Conservatives, for instance, complained for years about the way Twitter and Facebook treated them — up until Elon Musk bought Twitter in 2022 and Mark Zuckerberg pivoted to Trump in 2025. Digital publishers are consumed with the vagaries of Google Discover — the platform many of them are now dependent on for traffic referrals. Now, AI engines are the new target for bias claims.
But proving that something is up — one way or another — is just about impossible in any systematic way: Yes, maybe your latest post isn’t getting the views your other posts used to get. But does that mean the platform is censoring you? Or just swapping out your free content for someone else’s free content it thinks will perform better?
Even more confounding for us outsiders: Sometimes we really do find out that a human being — not software — made a decision to push something out of sight on a platform they run. In 2020, Twitter, then run by Jack Dorsey, hid a New York Post story about Hunter Biden’s laptop because the company believed it might be part of a disinformation plot against Joe Biden. (Twitter later reversed itself, and Dorsey apologized publicly.) Musk, after buying Twitter, reportedly told his engineers to boost his posts on his own platform.
And we most certainly live in an era where trust in every institution is frayed. So when a big platform stops showing you what you want to see, there are plenty of reasons to think that something’s afoot. So maybe one day we will find out that the new US TikTok really is trying to promote Trump and tamp down his foes. I don’t think that’s true now — but how can I tell, for sure?