Facts??? What Facts? Who Cares?

How the Virality Project Threatens Our Freedom

Analysis by Dr. Joseph Mercola

STORY AT-A-GLANCE

  • We now have proof that the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) partnered with a censorship consortium called the Election Integrity Partnership (EIP) to illegally censor Americans
  • During the 2020 election cycle, the EIP and CISA worked with the State Department’s Global Engagement Center (GEC) and the DHS-backed Elections Infrastructure Information Sharing and Analysis Center (ISAC) to police political wrongthink on social media
  • In February 2021, the EIP rebranded itself as the Virality Project, and went on to censor COVID-19 narratives on behalf of the government, even when it knew the information was true
  • The Virality Project targeted first-hand accounts of COVID jab injuries in order to prevent vaccine hesitancy, as well as posts expressing fears about vaccine passports, on the grounds that opposing vaccine passports was a “gateway to being anti-vax.” It also censored jokes and satirical memes on the basis that they might “exacerbate distrust” in public health officials, and made asking questions a punishable event because questioning is “commonly used by spreaders of misinformation”
  • As bad as things are, they’re about to get a whole lot worse unless Congress puts a stop to it. In the last three years, the U.S. government has awarded more than 500 contracts and/or grants aimed at tackling “misinformation”
  • The Department of Defense is also focused on research involving AI and tech that can monitor internet conversations and deploy countermeasures before wrongthink goes viral. Congress must defund all of these programs, as well as any agency, department or team involved in censoring Americans

As detailed in “Propaganda and Censorship Dominate the Information War,” we now have proof, courtesy of the Twitter Files, that the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) partnered with a censorship consortium called the Election Integrity Partnership (EIP) to censor Americans.1

In an Atlantic Council interview, EIP head Alex Stamos also admitted that the partnership between the EIP and the DHS was set up to outsource censorship that the government could not do due to “lack of legal authorization.”2

Stamos, a former chief of security at Facebook, is also director of the Stanford Internet Observatory — one of the four organizations that make up the EIP — and is a partner in the cyber consulting firm Krebs Stamos Group together with former CISA director Chris Krebs.

Virality Project Is EIP Rebranded

During the 2020 election cycle, the EIP and CISA worked with the State Department’s Global Engagement Center (GEC) and the DHS-backed Elections Infrastructure Information Sharing and Analysis Center (ISAC) to police political wrongthink on social media. The EIP coordinated the take-down of undesirable content using a real-time chat app that the DHS, EIP and social media companies all share.3

In February 2021, the EIP rebranded itself as the Virality Project, and went on to censor COVID-19 narratives on behalf of the government in the same way the EIP censored election narratives on behalf of the political Left.4

According to independent journalist Matt Taibbi, the Virality Project was essentially a dry run for President Biden’s federal Disinformation Governance Board.5 In fact, the Virality Project proposed a federal “Misinformation and Disinformation Center of Excellence” just one day before President Biden announced the plan for this Orwellian outfit.

Public backlash forced Biden to reconsider, but all that means is that the government chose not to make its unconstitutional censoring of Americans official policy. They’re still doing it through partnerships with the EIP/Virality Project and other third parties.

Virality Project Censored Truth

In a March 20, 2023, report (video above), The Hill host Robby Soave detailed the goals of the Virality Project, which “above all else were to protect the perceived integrity of the federal health bureaucracy, vaccine manufacturers and government vaccine policymakers, and to advance mainstream establishment narratives and interests in general.”

As noted by Soave, the Virality Project frequently pressured social media companies to censor COVID-19-related information and/or label it as “misinformation” — even if the information was true.

“This coalition, which was working with government agencies, NGO’s and the social media companies themselves, took the position that even true information could count as dangerous misinformation if its effect was to encourage a policy that clashed with the expert consensus …

If we still value the First Amendment, we must resist these pernicious calls for censorship. A call that is coming from a sordid coalition of ‘truth czars’ and ideological activists masquerading as fact checkers,” Soave says.

The mere possibility of causing “vaccine skepticism” or “vaccine hesitancy” was enough of a justification to censor information about the deadly COVID shots, for example, even though the information was truthful and required in order to make an informed decision.

This even included true first-hand accounts of serious COVID jab injuries, which could have saved lives had they been allowed to be shared. As noted by Andrew Lowenthal, co-founder of EngageMedia and author at Brownstone Institute:6

“Rather than listening out for safety signals to protect the public, leaders in the ‘anti-disinformation’ field ran cover to protect Big Pharma, smearing and censoring critics.

The moral depravity is astounding and quite possibly criminal … [In] suppressing ‘stories of true vaccine side effects’ the Virality Project put people in danger. Rather than keeping people safe they exposed us to the depredations of Big Pharma.”

Wartime Logic

Best-selling author John Leake7 also commented on the Virality Project’s censoring of truthful information, saying:8

“This reminded me of our recent trip to Australia in which we learned the Australian Therapeutic Goods Administration (TGA) led by Dr. John Skerritt, MD, PhD, made the decision to suppress accurate reports of vaccine-induced myocarditis in young people because such reports could cause ‘vaccine hesitancy.’

As these policymakers and regulators see it, the incidence of grave and fatal side effects are sufficiently rare to warrant censoring ANY reporting of them, as such reporting could cause the greater harm of ‘vaccine hesitancy.’

By their calculus, severe injuries and deaths caused by COVID-19 vaccines are the price we as a society must pay for the purportedly greater number of lives saved by the vaccines.

Never in the history of medicine has this calculus been used to evaluate the benefit of a medical product. Only in a military context — that is, commanders in the field must accept a certain number of casualties in order to achieve the greater benefit of vanquishing the enemy — has this logic been applied.”

No Concerns, Jokes or Questions Allowed

The Virality Project also targeted posts that expressed fears about vaccine passports — because being against vaccine passports was a “gateway to being anti-vax” — and censored jokes and satirical memes on the basis that they might “exacerbate distrust” in those targeted as the butt of the joke.

Dr. Anthony Fauci is one example of a public health official whose reputation was protected in this way. They even made asking questions a punishable event, because asking questions is a tactic “commonly used by spreaders of misinformation.”9

Have You Heard of Pre-Bunking?

The Virality Project also invented “pre-bunking” strategies to “warn” the public about purported misinformation before it had time to spread.

For example, when the Johnson & Johnson COVID jab was temporarily suspended by the U.S. Food and Drug Administration and Centers for Disease Control and Prevention in April 2021, the Virality Project issued a rapid response statement10 saying the number of incidents of rare and severe types of blood clots was “very small,” especially considering the millions of doses given.

They also analyzed the narratives put forth “concerning the J&J suspension within anti-vaccine groups across social media platforms” and in foreign and international media, and how these narratives might affect “vaccine hesitancy,” and proposed strategies to counter efforts to use the suspension as support for anti-COVID jab arguments.

Twitter Files Include Calls to Censor Me

As predicted, the Twitter files also contain correspondence with social media companies relating to yours truly. Taibbi points out the Twitter files “repeatedly show media acting as proxy”11 for the NGOs in the censoring network.

As an example, he posted the email below,12 in which the Financial Times used the shady NGO Center for Countering Digital Hate’s fabricated “Disinformation Dozen” report to pressure Twitter into banning me, Robert F. Kennedy Jr., and the rest of those on its list.


Government Censorship Campaign Is Financed by Taxpayers

As noted by Taibbi in a March 9, 2023, Twitter thread:13

“Well, you say, so what? Why shouldn’t civil society organizations and reporters work together to boycott ‘misinformation’? Isn’t that not just an exercise of free speech, but a particularly enlightened form of it?

The difference is, these campaigns are taxpayer-funded. Though the state is supposed to stay out of domestic propaganda, the Aspen Institute, Graphika, the Atlantic Council’s DFRLab, New America, and other ‘anti-disinformation’ labs are receiving huge public awards.

Some NGOs, like the GEC-funded Global Disinformation Index or the DOD-funded NewsGuard, not only seek content moderation but apply subjective ‘risk’ or ‘reliability’ scores to media outlets, which can result in reduction in revenue. Do we want government in this role? …

This is the Censorship-Industrial Complex at its essence: a bureaucracy willing to sacrifice factual truth in service of broader narrative objectives. It’s the opposite of what a free press does …

This, ultimately, is the most serious problem with the Censorship-Industrial Complex. Packaged as a bulwark against lies and falsehood, it is itself often a major source of disinformation, with American taxpayers funding their own estrangement from reality.”

Censorship Darling With a Shady Past

You can learn more about Taibbi’s work on the Twitter files in the video above. In his Twitter Files No. 19 thread, Taibbi also highlights some of the shadier characters within this censorship-industrial complex, such as Renée DiResta, technical research manager at Stanford Internet Observatory (which, again, is part of the EIP and Virality Project):14

“Profiles portray DiResta as a warrior against Russian bots and misinformation, but reporters never inquire about work with DARPA, GEC and other agencies. In the video below … Stamos introduces her as having ‘worked for the CIA.'”

“DiResta has become the public face of the Censorship-Industrial Complex, a name promoted everywhere as an unquestioned authority on truth, fact, and Internet hygiene, even though her former firm, New Knowledge, has been embroiled in two major disinformation scandals …

DiResta’s New Knowledge helped design the Hamilton 68 project exposed in the Twitter files. Although it claimed to track ‘Russian influence,’ Hamilton really followed [Conservative] Americans … Hamilton 68 was funded by the Alliance for Securing Democracy, which in turn was funded by the German Marshall Fund, which in turn is funded in part by — the Department of State.

The far worse scandal was Project Birmingham, in which thousands of fake Russian Twitter accounts were created to follow Alabama Republican Roy Moore in his 2017 race for US Senate. Newspapers reported Russia seemed to take an interest in the race, favoring Moore.

Though at least one reporter for a major American paper was at a meeting in September 2018 when New Knowledge planned the bizarre bot-and-smear campaign, the story didn’t break until December, two days after DiResta gave a report on Russian interference to the Senate …

The incident underscored the extreme danger of the Censorship-Industrial Complex. Without real oversight mechanisms, there is nothing to prevent these super-empowered information vanguards from bending the truth for their own ends.

By way of proof, no major press organization has re-examined the bold claims DiResta/New Knowledge made to the Senate — e.g. that Russian ads ‘reached 126 million people’ in 2016 — while covering up the Hamilton and Alabama frauds.”

US Government Is Building Vast Speech Suppression Web

As bad as things already are, they’re about to get a whole lot worse unless Congress puts a stop to it. In a March 21, 2023, article,15 The Federalist’s senior legal correspondent Margot Cleveland details grants showing the U.S. government is “building a vast surveillance and speech suppression web around every American.”

“Our government is preparing to monitor every word Americans say on the internet — the speech of journalists, politicians, religious organizations, advocacy groups, and even private citizens. Should those conversations conflict with the government’s viewpoint about what is in the best interests of our country and her citizens, that speech will be silenced,” she writes.16

“While the ‘Twitter Files’ offer a glimpse into the government’s efforts to censor disfavored viewpoints, what we have seen is nothing compared to what is planned, as the details of hundreds of federal awards lay bare.

Research by The Federalist reveals our tax dollars are funding the development of artificial intelligence (AI) and machine-learning (ML) technology that will allow the government to easily discover ‘problematic’ speech and track Americans reading or partaking in such conversations.

Then, in partnership with Big Tech, Big Business, and media outlets, the government will ensure the speech is censored, under the guise of combatting ‘misinformation’ and ‘disinformation.'”

In the last three years alone, the federal government has awarded more than 500 contracts and/or grants aimed at tackling “misinformation” and “disinformation.” The Department of Defense is also focused on research involving AI and ML tech that can monitor internet conversations for objectionable viewpoints and deploy countermeasures before they go viral.

A Catch-22

Unfortunately, many of those who have the greatest power to inform the public about what’s happening, and those with the power to protect us by putting an end to this dystopian nightmare, don’t want to because they have something to gain from it, or believe they do. As noted by Cleveland:17

“The threat is further heightened because those with the power to warn the public and demand the government stop silencing Americans’ speech are complicit.

With Democrats, the legacy media, and many Republicans all in on the government’s efforts to censor misinformation and disinformation, it will be extremely difficult for the public to recognize the risks free speech faces — especially since those trying to sound the alarm have already been falsely branded purveyors of disinformation.

A chance remains, though, that enough ordinary Americans will hear the message before it is too late and demand Congress close the Censorship-Industrial Complex.”

Where Do We Go From Here?

Taibbi, in the video above, says the revelations about the Virality Project tell us two things:18

“One, as Orwellian proof-of-concept, the Virality Project was a smash success. Government, academia, and an oligopoly of would-be corporate competitors organized quickly behind a secret, unified effort to control political messaging.

Two, it accelerated the evolution of digital censorship, moving it from judging truth/untruth to a new, scarier model, openly focused on political narrative at the expense of fact.”

This is deeply problematic and, if allowed to continue, will strangle democracy and end the American republic. To quote Lowenthal:19

“Free speech and expression protect us from the most powerful actors on the planet, corporations, the State, and a growing plethora of international bodies. Ultimately, we need radically decentralized social media that is more immune to their capture. Our safety depends on it.”

Decentralizing social media is just one necessary defense tactic though. We must also demand Congress take swift action to defund and dismantle the “censorship-industrial complex” that is using our tax dollars to deceive us and withhold truth. Nothing less will suffice. We can’t invent enough privacy laws to protect us from what’s coming.

For a time, many of us suspected that this massive surveillance and control system was primarily funded and built by private interests, but now we’re finding that government funding is behind much, and perhaps most, of it.

Congress has, for many years, if not decades, approved funding for programs intended to destroy our constitutional rights. Now, they must defund all of them. They must also defund all government agency departments or teams involved in the federal censorship network, and that includes the FBI, CIA and DHS.

from:    https://articles.mercola.com/sites/articles/archive/2023/03/29/how-virality-project-threatens-freedom.aspx?ui=f460707c057231d228aac22d51b97f2a8dcffa7b857ec065e5a5bfbcfab498ac&sd=20211017&cid_source=dnl&cid_medium=email&cid_content=art1ReadMore&cid=20230329_HL2&cid=DM1372268&bid=1758336702

So… You Always Wanted to Be On Camera?

‘Watched the whole time’: China’s surveillance state grows under Xi

Jing Xuan TENG

When Chen picked up his phone to vent his anger at getting a parking ticket, his message on WeChat was a drop in the ocean of daily posts on China’s biggest social network.

But soon after his tirade against “simple-minded” traffic cops in June, he found himself in the tentacles of the communist country’s omniscient surveillance apparatus.

Chen quickly deleted the post, but officers tracked him down and detained him within hours, accusing him of “insulting the police”.

He was locked up for five days for “inappropriate speech”.

His case — one of the thousands logged by a dissident and reported by local media — laid bare the pervasive monitoring that characterises life in China today.

Its leaders have long taken an authoritarian approach to social control.

But since President Xi Jinping took power in 2012, he has reined in the relatively freewheeling social currents of the turn of the century, using a combination of technology, law and ideology to squeeze dissent and preempt threats to his rule.

Ostensibly targeting criminals and aimed at protecting order, social controls have been turned against dissidents, activists and religious minorities, as well as ordinary people — such as Chen — judged to have crossed the line.

– Eyes in the sky –

The average Chinese citizen today spends nearly every waking moment under the watchful eye of the state.

Research firm Comparitech estimates the average Chinese city has more than 370 security cameras per 1,000 people — making them the most surveilled places in the world — compared with London’s 13 or Singapore’s 18 per 1,000.

The nationwide “Skynet” urban surveillance project has ballooned, with cameras capable of recognising faces, clothing and age.

“We are being watched the whole time,” an environmental activist who declined to be named told AFP.

The Communist Party’s grip is most stark in the far-western region of Xinjiang, where facial recognition and DNA collection have been deployed on mainly Muslim minorities in the name of counter-terrorism.

The Covid-19 pandemic has turbo-charged China’s monitoring framework, with citizens now tracked on their smartphones via an app that determines where they can go based on green, yellow or red codes.

Regulations rolled out since 2012 closed loopholes that allowed people to purchase SIM cards without giving their names, and mandated government identification for tickets on virtually all forms of transport.

– Online offences –

There is no respite online, where even shopping apps require registration with a phone number tied to an identification document.

Wang, a Chinese dissident speaking to AFP under a pseudonym due to safety concerns, recalled a time before Xi when censors were not all-knowing and “telling jokes about (former Chinese president) Jiang Zemin on the internet was actually very popular”.

But the Chinese internet — behind the “Great Firewall” since the early 2000s — has become an increasingly policed space.

Wang runs a Twitter account tracking thousands of cases of people detained, fined or punished for speech acts since 2013.

Thanks to the real-name verification system as well as cooperation between police and social media platforms, people have been punished for a vast array of online offences.

Platforms such as Weibo employ thousands of content moderators and automatically block politically sensitive keywords, such as tennis star Peng Shuai’s name after she accused a senior politician of sexual assault last year.
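
For readers curious about the mechanics, here is a minimal, hypothetical sketch of automated keyword blocking, the simplest form of the filtering the article describes. The blocklist terms and the function shown are illustrative assumptions only, not Weibo’s actual implementation.

```python
# Minimal, hypothetical sketch of automated keyword blocking (not Weibo's
# actual system). A post is rejected if it contains any blocklisted term,
# before a human moderator ever needs to look at it.
BLOCKLIST = {"banned name", "banned phrase"}  # placeholder terms, assumption only


def is_blocked(post_text: str) -> bool:
    """Return True if the post contains any blocklisted keyword."""
    text = post_text.lower()
    return any(term in text for term in BLOCKLIST)


if __name__ == "__main__":
    print(is_blocked("an ordinary post about the weather"))  # False
    print(is_blocked("a post mentioning a banned phrase"))   # True
```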

Cyberspace authorities are proposing new rules that would force platforms to monitor comments sections on posts — one of the last avenues for people to voice their grievances online.

– Ideological policing –

Many of the surveillance technologies in use have been embraced in other countries.

“The real difference in China is the lack of independent media and civil society able to provide meaningful criticism of innovations or to point out their many flaws,” Jeremy Daum, from the Paul Tsai China Center at Yale Law School, told AFP.

Xi has reshaped Chinese society, with the Communist Party stipulating what citizens “ought to know, to feel, to think, and say, and do”, Vivienne Shue, professor emeritus of contemporary China studies at Oxford University, told AFP.

Youngsters are kept away from foreign influences, with authorities banning international books and forbidding tutoring companies from hiring overseas teachers.

Ideological policing has even extended to fashion, with television stations censoring tattoos and earrings on men.

“What disturbs me more is not the censorship itself, but how it shaped the ideology of people,” said Wang, the Twitter account owner.

“With dissenting information being eliminated, every website becomes a cult, where the government and leaders have to be worshipped.”


from:    https://news.yahoo.com/watched-whole-time-chinas-surveillance-030215860.html

Well, This Makes Me Feel Comfortable

FACEBOOK ENGINEERS: WE HAVE NO IDEA WHERE WE KEEP ALL YOUR PERSONAL DATA

In a discovery hearing, two veteran Facebook engineers told the court that the company doesn’t keep track of all your personal data.

IN MARCH, two veteran Facebook engineers found themselves grilled about the company’s sprawling data collection operations in a hearing for the ongoing lawsuit over the mishandling of private user information stemming from the Cambridge Analytica scandal.

The hearing, a transcript of which was recently unsealed, was aimed at resolving one crucial issue: What information, precisely, does Facebook store about us, and where is it? The engineers’ response will come as little relief to those concerned with the company’s stewardship of billions of digitized lives: They don’t know.

The admissions occurred during a hearing with special master Daniel Garrie, a court-appointed subject-matter expert tasked with resolving a disclosure impasse. Garrie was attempting to get the company to provide an exhaustive, definitive accounting of where personal data might be stored in some 55 Facebook subsystems. Both veteran Facebook engineers, who according to LinkedIn have two decades of experience between them, struggled to even venture a guess as to what may be stored in Facebook’s subsystems. “I’m just trying to understand at the most basic level from this list what we’re looking at,” Garrie asked.

“I don’t believe there’s a single person that exists who could answer that question,” replied Eugene Zarashaw, a Facebook engineering director. “It would take a significant team effort to even be able to answer that question.”

When asked about how Facebook might track down every bit of data associated with a given user account, Zarashaw was stumped again: “It would take multiple teams on the ad side to track down exactly the — where the data flows. I would be surprised if there’s even a single person that can answer that narrow question conclusively.”

In an emailed statement that did not directly address the remarks from the hearing, Meta spokesperson Dina El-Kassaby told The Intercept that a single engineer’s inability to know where all user data was stored came as no surprise. She said Meta worked to guard users’ data, adding, “We have made — and continue making — significant investments to meet our privacy commitments and obligations, including extensive data controls.”

THE DISPUTE OVER where Facebook stores data arose when, as part of the litigation, now in its fourth year, the court ordered Facebook to turn over information it had collected about the suit’s plaintiffs. The company complied but provided data consisting mostly of material that any user could obtain through the company’s publicly accessible “Download Your Information” tool.

Facebook contended that any data not included in this set was outside the scope of the lawsuit, ignoring the vast quantities of information the company generates through inferences, outside partnerships, and other nonpublic analysis of our habits — parts of the social media site’s inner workings that are obscure to consumers. Briefly, what we think of as “Facebook” is in fact a composite of specialized programs that work together when we upload videos, share photos, or get targeted with advertising. The social network wanted to keep data storage in those nonconsumer parts of Facebook out of court.

In 2020, the judge disagreed with the company’s contention, ruling that Facebook’s initial disclosure had indeed been too sparse and that the company must reveal data obtained through its oceanic ability to surveil people across the internet and make monetizable predictions about their next moves.

Facebook’s stonewalling has been revealing on its own, providing variations on the same theme: It has amassed so much data on so many billions of people and organized it so confusingly that full transparency is impossible on a technical level. In the March 2022 hearing, Zarashaw and Steven Elia, a software engineering manager, described Facebook as a data-processing apparatus so complex that it defies understanding from within. The hearing amounted to two high-ranking engineers at one of the most powerful and resource-flush engineering outfits in history describing their product as an unknowable machine.

The special master at times seemed in disbelief, as when he questioned the engineers over whether any documentation existed for a particular Facebook subsystem. “Someone must have a diagram that says this is where this data is stored,” he said, according to the transcript. Zarashaw responded: “We have a somewhat strange engineering culture compared to most where we don’t generate a lot of artifacts during the engineering process. Effectively the code is its own design document often.” He quickly added, “For what it’s worth, this is terrifying to me when I first joined as well.”

THE REMARKS IN the hearing echo those found in an internal document leaked to Motherboard earlier this year detailing how the internal engineering dysfunction at Meta, which owns Facebook and Instagram, makes compliance with data privacy laws an impossibility. “We do not have an adequate level of control and explainability over how our systems use data, and thus we can’t confidently make controlled policy changes or external commitments such as ‘we will not use X data for Y purpose,’” the 2021 document read.

The fundamental problem, according to the engineers in the hearing, is that Facebook’s sprawl has made it impossible to know what it consists of anymore; the company never bothered to cultivate institutional knowledge of how each of these component systems works, what they do, or who’s using them. There is no documentation of what happens to your data once it’s uploaded, because that’s just never been something the company does, the two explained. “It is rare for there to exist artifacts and diagrams on how those systems are then used and what data actually flows through them,” explained Zarashaw.


Facebook’s inability to comprehend its own functioning took the hearing up to the edge of the metaphysical. At one point, the court-appointed special master noted that the “Download Your Information” file provided to the suit’s plaintiffs must not have included everything the company had stored on those individuals because it appears to have no idea what it truly stores on anyone. Can it be that Facebook’s designated tool for comprehensively downloading your information might not actually download all your information? This, again, is outside the boundaries of knowledge.

“The solution to this is unfortunately exactly the work that was done to create the DYI file itself,” noted Zarashaw. “And the thing I struggle with here is in order to find gaps in what may not be in DYI file, you would by definition need to do even more work than was done to generate the DYI files in the first place.”

The systemic fogginess of Facebook’s data storage made answering even the most basic question futile. At another point, the special master asked how one could find out which systems actually contain user data that was created through machine inference.

“I don’t know,” answered Zarashaw. “It’s a rather difficult conundrum.”

Update: September 7, 2022, 9:56 p.m. ET
This story has been updated to include a statement from Meta sent after publication.

from:    https://theintercept.com/2022/09/07/facebook-personal-data-no-accountability/

Surveillance from the Floor

Did Amazon Buy iRobot To Map Inside Your Home?

BY TYLER DURDEN
WEDNESDAY, AUG 10, 2022 – 11:45 AM

Amazon.com Inc.’s $1.7 billion acquisition of robot vacuum cleaner company iRobot Corp. is a move by the megacorporation to use Roombas to map the interior of homes. This type of data is a digital gold mine for Amazon, because if marketers know more about what’s inside, they can easily create tailor-made ads.

From a market perspective, Amazon’s acquisition of iRobot is to gain deeper insight into customers’ homes via the autonomous robotic vacuum cleaner called “Roomba.”

The latest model of the Roomba, called J7, has a front-facing, AI-powered camera that maps out each room and will identify nearly everything in its path, such as floor plans, where the kitchen is, which space is the master bedroom, and where the kids sleep, as well as items on the floor.

“Slightly more terrifying, the maps also represent a wealth of data for marketers. The size of your house is a pretty good proxy for your wealth. A floor covered in toys means you likely have kids. A household without much furniture is a household to which you can try to sell more furniture. This is all useful intel for a company such as Amazon which, you may have noticed, is in the business of selling stuff,” Bloomberg said. 

Roomba’s surveillance from within the home is pure digital gold, as Amazon’s ambition to learn more about the customer will allow marketers to sell more junk.

Vice News said, “leaked documents acquired by Motherboard revealed that one of the goals of Astro [Amazon’s robot] was to create a robot that intelligently plotted out the interior of a user’s homes, even creating heat maps of highly trafficked areas.”

Astro has not been well received by Amazon customers, owing to privacy concerns, and the same could happen with robot vacuums following the acquisition. Some on Twitter are already calling the Amazon/iRobot deal “pure dystopia.”

People are starting to catch on to Amazon’s mass surveillance of the household.

So, what iRobot brings to Amazon is the ability to embed its vast surveillance infrastructure into what appears to be a harmless vacuum, but just as Echo smart speakers are always ‘listening,’ perhaps the vacuum will always be watching.

As a reminder, Amazon has a frightening partnership with the Central Intelligence Agency — maybe it’s time to ditch the Roomba.

from:    https://www.zerohedge.com/technology/pure-dystopia-amazon-buys-irobot-map-inside-your-home

Smile for the Camera

All the Data Amazon’s Ring Cameras Collect About You

The popular security devices are tracking (and sharing) more than you might think.

IF YOU WALK through your local neighborhood—providing you live in a reasonably large town or city—you’ll be caught on camera. Government CCTV cameras may record your stroll, but it is increasingly likely that you’ll also be captured by one of your neighbors’ security cameras or doorbells. It’s even more likely that the camera will be made by Ring, the doorbell and security camera firm owned by Amazon.

Since Amazon splashed out more than a billion dollars for the company in 2018, Ring’s security products have exploded in popularity. Ring has simultaneously drawn controversy for making deals (and sharing data) with thousands of police departments, helping expand and normalize suburban surveillance, and falling victim to a string of hacks. While the cameras can provide homeowners with reassurance that their property is secure, critics say the systems also run the risk of reinforcing racism and racial profiling and eroding people’s privacy.

Videos shared from security cameras and internet-connected doorbells have also become common on platforms like Facebook and TikTok, raking in millions of views. “Ring impacts everybody’s privacy,” says Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation. “Most immediately, it impacts the people who walk down the streets every day, where the cameras are pointing out.”

While Ring is far from the only maker of smart doorbells and cameras—Google’s Nest line is another popular option—its connections to law enforcement have drawn the most criticism, as when it recently handed over data without warrants. So, what exactly does Ring collect and know about you?

What Ring Knows About You

Whenever you use any tech, it’s collecting data about you. Spotify uses the data it collects to work out your mood, and Slack knows how many messages you send. Ring’s products are no different. Ring’s privacy policy—running 2,400 words—and its terms of service detail what the company collects about you and how it uses that information. In short: It’s a lot.

Ring gets your name, phone number, email and postal address, and any other information you provide to it—such as payment information or your social media handles if you link your Ring account to Facebook, for instance. The company also gets information about your Wi-Fi network and its signal strength, and it knows you named your camera “Secret CIA Watchpoint,” as well as all the other technical changes you make to your cameras or doorbells.

In March 2020, a BBC information request revealed that Ring keeps detailed records of people’s doorbell activity. Every doorbell press was logged. Each motion the camera detected was stored. And details were saved every time someone zoomed in on footage on their phone. In just 129 days, 4,906 actions were recorded. (Ring says it does not sell people’s data.)

Ring can also collect the video and audio your camera records—the system doesn’t record all the time, but it can be triggered when it senses movement. Ring says its cameras can detect movement “up to 155 degrees horizontally” and across distances of up to 25 feet. This means there’s a good chance cameras can be triggered by people walking down the street or pick up conversations of passersby. According to tests by Consumer Reports, some Ring cameras can record audio from about 20 feet away.

Jolynn Dellinger, a senior lecturing fellow focusing on privacy and ethics at Duke University’s school of law, says recording audio when someone is on the street is a “serious problem” for privacy and may change how people behave. “We operate with a sense of obscurity, even in public,” Dellinger says. “We are in danger of increasing surveillance of everyday life in a way that is not consistent with either our expected views or really what’s best for society.” In October 2021, a British woman won a court case that said her neighbor’s Ring cameras, which overlooked her house and garden, broke data laws.

Ring’s privacy policy says it can save videos of subscribers to its Ring Protect Plan, a paid service that provides an archive of 180 days of video and audio captured. The company says people can log in to the service to delete the videos, but the company may ultimately keep them anyway. “Deleted Content and Ring Protect Recordings may be stored by Ring in order to comply with certain legal obligations and are not retrievable without a valid court order,” the privacy policy says.

Ring can also keep videos shared to its Neighbors app—an app where people and law enforcement agencies can share alerts about “crimes” and post their videos of what is happening around their homes. (There are rules about what people are allowed to post.)

Ring’s privacy policy and terms of service allow it to use all this information it collects in multiple ways. It lists 14 ways the company can use your data—from improving the service Ring provides and protecting against fraud to conducting consumer research and complying with legal requirements. Its privacy policy includes the ambiguous statement: “We also may use the personal information we collect about you in other ways for which we provide specific notice at the time of collection and obtain your consent if required by applicable law.” Ring spokesperson Sarah Rall says this could apply if the company added features or use cases that are not already covered by its privacy policy. “We would provide additional notice or get permission as needed,” Rall says.

While Ring’s privacy policies apply to those who purchase its devices, people who are captured in footage or audio don’t have a chance to agree to them. “Privacy, security, and customer control are foundational to Ring, and we take the protection of our customers’ personal and account information seriously,” Rall says.

Ultimately, you agree to give Ring permission to control the “content” you share—including audio and video—while you own the intellectual property to it. The company’s terms of service say you give it an “unlimited, irrevocable, fee free and royalty-free, perpetual, worldwide right” to store, use, copy, or modify content you share through Neighbors or elsewhere online. (Audio recording can be turned off in Ring’s settings.)

“When I went out to buy a security camera last year, I looked for ones only that did local storage,” says Jen Caltrider, the lead researcher on Mozilla’s Privacy Not Included, which evaluates the privacy and security of products. Caltrider says people should try to keep as much control of their data as possible and not store files in the cloud unless they need to. “I don’t want any company having this data that I can’t control. I want to be able to control it.”

How Ring Works With Police

Ring’s deals with police forces—both in the US and the UK—have proved controversial. For years, the company has partnered with law enforcement agencies, providing them with cameras and doorbells that can be given to residents. By the start of 2021, Ring had partnered with more than 2,000 US law enforcement and fire departments. Documents have shown how Ring also controls the public messaging of police departments it has partnered with. “There is nothing mandating Ring build a tool that is easily accessible and helpful to police,” Guariglia says.

Ring’s terms of service say that the company may “access, use, preserve and/or disclose” videos and audio to “law enforcement authorities, government officials, and/or third parties” if it is legally required to do so or needs to in order to enforce its terms of service or address security issues. Government officials could include any “regulatory agency or legislative committee that issues a legally binding request for information,” Rall says. For the six months between January and June 2022, Ring says it received more than 3,500 law enforcement requests in the US.

In December 2021, researchers at New York University’s (NYU) Policing Project released an audit of Ring’s relationship with law enforcement and the way its Neighbors app works. (Ring provided data to the audit’s authors about how the service functions.) The report details concerns that Ring could exacerbate police bias against Black and brown communities and notes a lack of transparency around how Neighbors works.

“We see a lot of risk in having police responding to calls about homeless people or low-level drug use,” says Max Isaacs, a staff attorney with the Policing Project and a coauthor of the audit. “When police are relying on private devices like Ring devices, it creates a democratic deficit, because now police can greatly expand their surveillance capabilities,” Isaacs says. Citizen-on-citizen surveillance, which Isaacs calls “lateral surveillance,” lacks scrutiny. “They can have thousands of cameras in a jurisdiction … without any legislative oversight.”

In response to the NYU audit, Ring said it made more than 100 changes to its service. These changes hint at how the app may have previously been misused by law enforcement agencies. Police are now required to use official accounts to request content about crimes, Ring will not donate devices to law enforcement bodies, and it will “no longer participate in police sting operations.”

Ring also agreed to stop citing the impact its cameras have on crime—previous claims said they reduce crime in the areas where they’re installed—until that has been verified by an independent study. “Ring also reviewed all of its marketing and social media materials to remove any claims about crime reduction,” the report says. The change that will do the most to protect people’s privacy may be the introduction of end-to-end encryption, which means the company can’t access recordings of users who have the feature enabled. However, it isn’t turned on by default. Here’s how to turn it on.

Update 10:30 am ET, 8-5-22: Ring spokesperson Sarah Rall’s statement that the company “would provide additional notice or get permission as needed” pertains to other ways Ring may use your personal information, not to its data-retention policies.

from:    https://www.wired.com/story/ring-doorbell-camera-amazon-privacy/

They’ve Got You – Coming & Going

New Document Exposes How This Company Tracks Car Locations In Real-Time

by Tyler Durden
Thursday, Mar 18, 2021 – 09:40 PM

According to a document obtained by Motherboard, a tiny surveillance contractor based in Charleston, South Carolina, can locate and track newer model cars in any country. This data is being packaged up into a new service and pitched to the US government as a powerful surveillance technology.

“Ulysses can provide our clients with the ability to remotely geo-locate vehicles in nearly every country except for North Korea and Cuba on a near real-time basis,” the document written by The Ulysses Group, reads. “Currently, we can access over 15 billion vehicle locations around the world every month,” the document adds.

In new automobiles, intelligent sensors transmit an array of data (even including location) to the automaker or third parties. Aggregator companies then take this data and integrate them into packages based on the needs of their clients.

“Vehicle telematics is data transmitted from the vehicle to the automaker or OEM through embedded communications systems in the car,” the Ulysses document continues. “Among the thousands of other data points, vehicle location data is transmitted on a constant and near real-time basis while the vehicle is operating.”
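
To make concrete what a constant, near real-time location feed implies, below is a minimal, hypothetical sketch of the kind of record such a feed might carry. The field names and structure are my assumptions for illustration; they are not taken from the Ulysses document or any automaker’s actual telematics API.

```python
# Hypothetical example only: what a single vehicle-location "ping" in a
# telematics feed might look like. Field names and structure are assumptions
# for illustration, not drawn from the Ulysses document or any real automaker.
import json
import time
import uuid


def build_location_ping(vin: str, lat: float, lon: float, speed_kph: float) -> dict:
    """Package one near real-time location report from a connected car."""
    return {
        "message_id": str(uuid.uuid4()),    # unique ID for this transmission
        "vin": vin,                         # vehicle identification number
        "timestamp_utc": int(time.time()),  # when the reading was taken
        "position": {"lat": lat, "lon": lon},
        "speed_kph": speed_kph,
        "ignition_on": True,
    }


if __name__ == "__main__":
    # A car emitting a ping like this every few seconds while operating
    # produces the continuous location trail described above.
    print(json.dumps(build_location_ping("1HGCM82633A004352", 32.78, -79.93, 55.0), indent=2))
```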

The document suggests Ulysses’ tracking service could be used for military surveillance operations:

“We believe that this one attribute will dramatically enhance military intelligence and operational capabilities, as well as reduce the costs and risk footprint of ISR [intelligence, surveillance, reconnaissance] assets currently used to search for and acquire mobile targets of interest.” 

“Whether you want to geo-locate one vehicle or 25,000,000 as shown here. Currently, we can access over 15 billion vehicle locations around the world every month,” the document concludes.

Motherboard sent the document to Senator Ron Wyden (D-Oregon). Wyden spokesperson Keith Chu responded in an email statement:

“Far too little is known about how private information is being bought and sold. Senator Wyden is conducting an ongoing investigation into the sale of personal data, particularly via data brokers, to put some sunlight on this shady industry. Our office is continuing to perform oversight into where data brokers are acquiring Americans’ information, and who they’re selling it to.”

Motherboard noted Ulysses previously worked with US Special Operations Command on a different piece of technology to “analyze how peer and near-peer competitor countries were making economic and financial investments in Africa and Central and South America.” 

President of The Ulysses Group, Andrew Lewis, told Motherboard in an email that “any proprietary promotional material we may have produced is aspirational and developed based on publicly available information about modern telematics equipment.”

“We do not have any contracts with the government or any of its agencies related to our work in the field and we have never received any funding whatsoever from the government related to telematics,” Lewis added.


While the document does not specify how the surveillance firm procures its data, the conveniences of owning a modern car tied to the “internet of things” appear to have their downsides, as car companies, third parties, or even the government can track these vehicles in real time.

In a world where COVID has accelerated the surveillance state, many people are wondering how to escape the Orwellian grid of surveillance and social control. A first step is to own a car with limited technology embedded within; we also offer some simple steps to disappear from the surveillance matrix.

from:    https://www.zerohedge.com/technology/new-document-exposes-how-company-can-track-your-car-real-time?utm_campaign=&utm_content=Zerohedge%3A+The+Durden+Dispatch&utm_medium=email&utm_source=zh_newsletter

Keepin’ it on the DL

Why OpSec Has Never Been More Important

by Tyler Durden
Wednesday, Jan 13, 2021 – 0:05

Authored by Daisy Luther via The Organic Prepper blog,

If you’ve been in prepper circles for long, you’ve probably heard the term OpSec. It is taken from military jargon and it’s short for Operations Security. In the preparedness and survival world, it generally means not letting other people know that you are prepped, or if they know, they definitely don’t know the specifics of what you have.

Not only do we want to keep the level of our preparedness private; these days, keeping our opinions private might likewise be beneficial from a security perspective. More on that in a moment.

Trigger Warning: There’s no way I can write this article without ticking somebody off. Some readers will feel that I’m siding with the right and others will feel like I’m siding with the left. I’m not, because I am not a Democrat or a Republican, nor am I a conservative or a liberal. I’m a critical thinker with diverse opinions that fall into all sorts of categories. Yet others will feel I didn’t go far enough or that there’s some “fact” or conspiracy that I didn’t reveal. I’m not an ice cream cone. I can’t make everyone happy. Also, there may be some swearing.

What is OpSec?

Here’s a definition for those who aren’t familiar with the concept.

Operations security (OPSEC) is a process that identifies critical information to determine if friendly actions can be observed by enemy intelligence, determines if information obtained by adversaries could be interpreted to be useful to them, and then executes selected measures that eliminate or reduce adversary exploitation of friendly critical information.

In a more general sense, OPSEC is the process of protecting individual pieces of data that could be grouped together to give the bigger picture (called aggregation). OPSEC is the protection of critical information deemed mission-essential from military commanders, senior leaders, management or other decision-making bodies. The process results in the development of countermeasures, which include technical and non-technical measures such as the use of email encryption software, taking precautions against eavesdropping, paying close attention to a picture you have taken (such as items in the background), or not talking openly on social media sites about information on the unit, activity or organization’s Critical Information List. (source)

This article explains the concept more thoroughly.

OpSec goes hand in hand with the gray man principle. Here’s Selco’s definition of being the gray man.

It is a simple concept that comes to be very important when SHTF, and it is often completely opposite to how a lot of preppers are planning to look or act.

In the shortest definition, it is staying uninteresting or simply looking and acting like most of the people around you in a particular moment.

It can be used in a lot of situations when SHTF, during prolonged periods of time, or during short-term events. (source)

As tensions increase dramatically in the United States, many people will find it more important than ever to practice these principles.

Extraordinary things are happening.

Over the past few years, the United States has become extremely polarized – so much so that violence can break out simply because two people or groups of people support different presidential candidates.

We’re seeing “othering” on an extraordinary level as Big Tech and the Mainstream Media throw gasoline on the raging dumpster fire that is our recent election. There’s a purge of conservative voices that goes beyond anything I’ve personally seen – way beyond the purge of alternative media a couple of years ago.

While Donald Trump is on his way out of the White House in just under two weeks, the fact remains that the two largest social media outlets in the world, Facebook and Twitter, have suspended the accounts of a sitting President of the United States. Now, they’re private businesses – they get to make their own rules and they’re protected from any legal fallout by Section 230, unlike the rest of us folks on the internet. However, the fact that they would take such an action is simply astounding in its audacity.

Go to a different outlet, you said? Well, that would be a great idea so we did. Conservatives and libertarians went to Parler in droves and the MSM sobbed into their lattes that it was a threat to democracy. And guess what else happened? Google effectively killed Parler today by removing the app from the store. José Castañeda, a Google spokesperson, said:

“In order to protect user safety on Google Play, our longstanding policies require that apps displaying user-generated content have moderation policies and enforcement that removes egregious content like posts that incite violence.

All developers agree to these terms and we have reminded Parler of this clear policy in recent months.

We’re aware of continued posting in the Parler app that seeks to incite ongoing violence in the U.S.

We recognize that there can be reasonable debate about content policies and that it can be difficult for apps to immediately remove all violative content, but for us to distribute an app through Google Play, we do require that apps implement robust moderation for egregious content.

In light of this ongoing and urgent public safety threat we are suspending the app’s listings from the Play Store until it addresses these issues.” (source)

Apple has also given Parler an ultimatum to either moderate content or get nuked there too.

And speaking of egregious things, the double standard between the media coverage in Washington DC on Jan. 6th and the coverage of “protests” all over the country for the past year is particularly flagrant.

This blatant silencing of dissent is heinous and reminiscent of Communist China or North Korea. I mean, the DoJ did just revive the legality of firing squads. While we’re not currently being executed for dissenting opinions, people are losing their livelihoods, having their homes vandalized, and being ostracized. ABC News literally called for a cleansing of Trump supporters.

“Even aside from impeachment and 25th Amendment talk, Trump will be an ex-president in 13 days,” ABC’s Rick Klein and MaryAlice Parks wrote for The Note on Thursday. “The fact is that getting rid of Trump is the easy part. Cleansing the movement he commands, or getting rid of what he represents to so many Americans, is going to be something else.”

It now reads, “Cleaning up the movement he commands, or getting rid of what he represents to so many Americans, is going to be something else.”

Klein also shared the original phrase on Twitter before deleting it. (source)

Do they wish to “cleanse” all 74,223,744 people whose votes were considered official? It’s rather reminiscent of a recent hullabaloo when another guy on Twitter wanted to send Trump voters to re-education camps. (See #7 here.)

Incidentally – neither Klein’s account nor ABC News’s account was suspended by Twitter. Nor was that guy who wants to forcibly re-educate people. Just the President’s. Oh, and a whole bunch of other people who had the audacity to be publicly supportive of him. But not those cleanser and re-education people. They’re cool.

Know what you’re getting yourself into before taking action.

If you’re anything like me, your initial reaction is, “F*ck this. I’ll say what I want.” I agree wholeheartedly that this is outrageous censorship on a massive scale, it’s virtual book-burning, and the double standard is utter bullsh*t and I’m furious about it.

But this is, first and foremost, a website about survival and preparedness. This is not a site about staging a revolution and I have really limited the coverage of politics since the 2016 election. I want it to be a place where everyone feels welcome to learn about preparedness and the events that affect us, regardless of their political beliefs, their religious beliefs, or which foot they put in their pants leg first.

There will be people out there who feel it is their duty to fight. There are people who support that and people who do not.

Unless you are making a conscious decision to get out there in the thick of the battle, imperiling your livelihood and risking ostracization due to cancel culture, it may be time for you to consider strengthening your OpSec. If you are going into this with your eyes open, then more power to you all.

Crackdowns like what we’re seeing now start with polarization and information blackouts. They can lead to far worse scenarios.

Selco wrote:

It is a situation where all stakes are much higher, and solutions-actions that the government (ruling party, military leaders or whoever in your case) wants to achieve will be attempted with all means. That can include some new rules where what you think about it usually does not mean anything.

A lot of preppers think about “martial law” but in reality, they think about it still in normal terms, with rights, law, constitution, and rules…

You cannot defy military, at least not openly, because they will deal with you fast and efficiently. In times like that it is so easy to get labeled that you are dangerous, an enemy of the state, a terrorist or anything similar, and most probably you will not have any help.

Forget about the movie illusions of openly being a freedom fighter.

No matter how well-organized you are, those who impose martial law have better organization than you. Remember that martial law usually means an information blackout.  “They” will own information and present it to the public the way that they want to present it. (source)

While I’m not suggesting we’re necessarily headed for martial law, we have stepped into a brand new world where our media is tightly controlled and our every decision or utterance can come back to haunt us.

Why do we need to focus more stringently on OpSec now?

I want to focus on the survival and preparedness aspects of this unbelievable situation we now find ourselves in.

As I’ve written before, survival is about surviving. Some people find this philosophy cowardly and feel we should all be willing to die on the hill of their choosing. Others believe it’s better to be strategic, live to fight another day, and choose their battles. This is a personal decision.

If you don’t wish to be involved in the political aspects of the things going on right now, if you want to quietly live out your life with limited conflict, and if your focus is on the safety of your family, you need to think about the information you are giving away about yourself. This is not just information for people of one political party or race. It’s for those who don’t want to be targeted because of their beliefs.

  • Don’t make a visual statement. Do you have any political paraphernalia on display? Bumper stickers? Yard signs? Banners? T-shirts or hats?
  • Avoid political conversations. Back when I was a kid, I was taught that politics and religion were topics it was bad manners to discuss unless you were in the company of people you knew really well.
  • Keep your preps under wraps. Trust me, when The Great Toilet Paper Crisis of 2021 happens – and it will – you don’t want to be the house with all the toilet paper that the guy repairing your furnace saw. Nobody needs to see your preps. Put things in cardboard boxes with misleading labels like “Christmas” if someone is going to be in the area where you store supplies. Don’t have shelves and shelves of canned goods out in your kitchen.
  • Tighten your circle. If you thought 2020 was bad, 2021 is here and it’s old enough to drink. As expected, 2021 is not going to be a walk in the park. Selco recommends that the worse things become, the smaller your circle should be. Focus your efforts on the things you can control and your energy on the people in your inner circle.
  • Remember what you learned about people. We learned a lot about how those around us handled stress during the first round of lockdowns. Don’t forget the lessons you learned about those in your circle, as well as what you discovered about friends, neighbors, and coworkers. A lot of folks were really surprised by the behavior of others when they were under stress. Think about who you really want to let in – this may have changed after the past year. Do your best to make sure that people are truly worthy of your trust.
  • Be neutral on social media. Remember, the internet is forever. Even if you delete an ill-advised post, someone may have taken a screenshot or be able to find it on the Wayback Machine. People don’t even need visible proof of those posts to remember that you dislike Trump or Biden, or that you’re super liberal or super conservative. Posting meme after meme expressing your adoration for certain political figures or beliefs is the digital equivalent of running your mouth in a crowded bar. You never know who’s watching or listening, nor do you know how it might come back to haunt you.

Don’t make yourself a target.

When people are hungry, they’ll do things they might never have imagined doing before, like stealing food. Many of the jobs lost in 2020 are not coming back in 2021, people are dealing with major financial problems, and we have supply chain issues. There’s a very real chance that we will see poverty in America affecting more people than at any other point in our lifetimes.

Those people will be wracking their brains trying to figure out how to survive. Don’t give them a reason to think of your place as a supply nirvana.

(If you are in that desperate position, check out this book – it’s free and it may help you make difficult decisions.)

When people are angry, they’ll also do things they normally would not and mob mentality is contagious. Take this former CEO, for example.

“My decision to enter the Capitol was wrong, and I am deeply regretful to have done so,” Rukstales said in a statement. “Without qualification and as a peaceful and law-abiding citizen, I condemn the violence and destruction that took place in Washington.”

Rukstales also apologized to his family, colleagues and “fellow countrymen” for his actions.

“It was the single worst personal decision of my life,” the exec’s statement continued. “I have no excuse for my actions and wish that I could take them back.” (source)

Don’t make yourself a target for the rage of people who aren’t behaving rationally. Your memes and banners and bumper stickers aren’t going to change their minds.

from:    https://www.zerohedge.com/personal-finance/why-opsec-has-never-been-more-important

“I’ll Be Watching You” Ouch

Lockdown civilization: phase one and phase two

by Jon Rappoport

January 6, 2021

The China lockdown of 50 million citizens overnight was a key element in the long-standing plan to foist a fake pandemic on humanity.

That lockdown provided the model for the rest of the world.

We are now in phase one of Lockdown Civilization.

The “scientific” rationale? THE VIRUS. The virus that isn’t there. The virus whose existence is unproven.

But the story line works: “We have to follow the China model because the pandemic is sweeping across the globe…”

Close on the heels of this con job, we have the intro to phase two: “In order to deal with future pandemics, we must install a new planetary system of command and control; human behavior must be modified.”

Translation: wall to wall surveillance at a level never achieved before; universal guaranteed income for every human, tied to obedience to all state directives; violate those directives and income is reduced or canceled; the planting of nano devices inside the body which will broadcast physiological changes to central command, and which will receive instructions that modify mood and reaction…

Phase one lockdowns prepare the citizenry to accept phase two.

In other words, phase one had nothing to do with a virus. It was part of the technocratic revolution.

I call your attention to a stunning article in The Atlantic, “The Panopticon Is Already Here” (September 2020), by Ross Andersen.

Here are significant excerpts:

“Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi [Xi Jinping, president of China] also wants to use AI’s awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.”

“China already has hundreds of millions of surveillance cameras in place. Xi’s government hopes to soon achieve full video coverage of key public areas. Much of the footage collected by China’s cameras is parsed by algorithms for security threats of one kind or another. In the near future, every person who enters a public space could be identified, instantly, by AI matching them to an ocean of personal data, including their every text communication, and their body’s one-of-a-kind protein-construction schema. In time, algorithms will be able to string together data points from a broad range of sources—travel records, friends and associates, reading habits, purchases—to predict political resistance before it happens. China’s government could soon achieve an unprecedented political stranglehold on more than 1 billion people.”

“China is already developing powerful new surveillance tools, and exporting them to dozens of the world’s actual and would-be autocracies. Over the next few years, those technologies will be refined and integrated into all-encompassing surveillance systems that dictators can plug and play.”

“China’s government could harvest footage from equivalent Chinese products. They could tap the cameras attached to ride-share cars, or the self-driving vehicles that may soon replace them: Automated vehicles will be covered in a whole host of sensors, including some that will take in information much richer than 2-D video. Data from a massive fleet of them could be stitched together, and supplemented by other City Brain streams, to produce a 3-D model of the city that’s updated second by second. Each refresh could log every human’s location within the model. Such a system would make unidentified faces a priority, perhaps by sending drone swarms to secure a positive ID.”

“An authoritarian state with enough processing power could force the makers of such software to feed every blip of a citizen’s neural activity into a government database. China has recently been pushing citizens to download and use a propaganda app. The government could use emotion-tracking software to monitor reactions to a political stimulus within an app. A silent, suppressed response to a meme or a clip from a Xi speech would be a meaningful data point to a precog algorithm.”

“All of these time-synced feeds of on-the-ground data could be supplemented by footage from drones, whose gigapixel cameras can record whole cityscapes in the kind of crystalline detail that allows for license-plate reading and gait recognition. ‘Spy bird’ drones already swoop and circle above Chinese cities, disguised as doves. City Brain’s feeds could be synthesized with data from systems in other urban areas, to form a multidimensional, real-time account of nearly all human activity within China. Server farms across China will soon be able to hold multiple angles of high-definition footage of every moment of every Chinese person’s life.”

“The government might soon have a rich, auto-populating data profile for all of its 1 billion–plus citizens. Each profile would comprise millions of data points, including the person’s every appearance in surveilled space, as well as all of her communications and purchases. Her threat risk to the party’s power could constantly be updated in real time, with a more granular score than those used in China’s pilot ‘social credit’ schemes, which already aim to give every citizen a public social-reputation score based on things like social-media connections and buying habits. Algorithms could monitor her digital data score, along with everyone else’s, continuously, without ever feeling the fatigue that hit Stasi officers working the late shift. False positives—deeming someone a threat for innocuous behavior—would be encouraged, in order to boost the system’s built-in chilling effects, so that she’d turn her sharp eyes on her own behavior, to avoid the slightest appearance of dissent.”

“If her risk factor fluctuated upward—whether due to some suspicious pattern in her movements, her social associations, her insufficient attention to a propaganda-consumption app, or some correlation known only to the AI—a purely automated system could limit her movement. It could prevent her from purchasing plane or train tickets. It could disallow passage through checkpoints. It could remotely commandeer ‘smart locks’ in public or private spaces, to confine her until security forces arrived.”

“Each time a person’s face is recognized, or her voice recorded, or her text messages intercepted, this information could be attached, instantly, to her government-ID number, police records, tax returns, property filings, and employment history. It could be cross-referenced with her medical records and DNA, of which the Chinese police boast they have the world’s largest collection.”

“The country [China] is now the world’s leading seller of AI-powered surveillance equipment. In Malaysia, the government is working with Yitu, a Chinese AI start-up, to bring facial-recognition technology to Kuala Lumpur’s police as a complement to Alibaba’s City Brain platform. Chinese companies also bid to outfit every one of Singapore’s 110,000 lampposts with facial-recognition cameras.

“In South Asia, the Chinese government has supplied surveillance equipment to Sri Lanka. On the old Silk Road, the Chinese company Dahua is lining the streets of Mongolia’s capital with AI-assisted surveillance cameras. Farther west, in Serbia, Huawei is helping set up a ‘safe-city system,’ complete with facial-recognition cameras and joint patrols conducted by Serbian and Chinese police aimed at helping Chinese tourists to feel safe.”

“In the early aughts, the Chinese telecom titan ZTE sold Ethiopia a wireless network with built-in backdoor access for the government. In a later crackdown, dissidents were rounded up for brutal interrogations, during which they were played audio from recent phone calls they’d made. Today, Kenya, Uganda, and Mauritius are outfitting major cities with Chinese-made surveillance networks.”

“In Egypt, Chinese developers are looking to finance the construction of a new capital. It’s slated to run on a ‘smart city’ platform similar to City Brain, although a vendor has not yet been named. In southern Africa, Zambia has agreed to buy more than $1 billion in telecom equipment from China, including internet-monitoring technology. China’s Hikvision, the world’s largest manufacturer of AI-enabled surveillance cameras, has an office in Johannesburg.”

“In 2018, CloudWalk Technology, a Guangzhou-based start-up spun out of the Chinese Academy of Sciences, inked a deal with the Zimbabwean government to set up a surveillance network. Its terms require Harare to send images of its inhabitants—a rich data set, given that Zimbabwe has absorbed migration flows from all across sub-Saharan Africa—back to CloudWalk’s Chinese offices, allowing the company to fine-tune its software’s ability to recognize dark-skinned faces, which have previously proved tricky for its algorithms.”

“Having set up beachheads in Asia, Europe, and Africa, China’s AI companies are now pushing into Latin America, a region the Chinese government describes as a ‘core economic interest.’ China financed Ecuador’s $240 million purchase of a surveillance-camera system. Bolivia, too, has bought surveillance equipment with help from a loan from Beijing. Venezuela recently debuted a new national ID-card system that logs citizens’ political affiliations in a database built by ZTE…”

That gives you a chilling outline of Lockdown, phase two.

Lockdowns were never about a virus or a pandemic.

Lockdown Civilization has been in the planning and development stage for a long time.

People say, “Why? Why are they doing this?”

The short answer is, because they want to and they can.

Technocrats don’t view life as life. They view it as a system, and this is their most comprehensive system to date.

from:    https://blog.nomorefakenews.com/2021/01/06/lockdown-civilization-phase-one-and-phase-two/

Speak Your MINDS

Censorship-Free Social Network “Explodes” After Adding 200,000+ New Users In Just A Few Days

By John Vibes / Truth Theory

The censorship-free alternative media platform Minds went down temporarily on Thursday but came back online shortly afterward. Representatives of Minds say the outage was the result of more than 200,000 new people signing up for the site on the same day. A large number of the sign-ups came from Thailand, where increased internet censorship has forced users off mainstream platforms like Facebook and Twitter.

“Yesterday we saw 200,000+ new users. We are thrilled to provide privacy, internet freedom and digital rights for Thai netizens. This is exactly the reason Minds exists,” Minds CEO Bill Ottman said in a statement.

Some of the most notable social media users to recently migrate to Minds include Wiroj Lakkana-adisorn, an MP for the disbanded Future Forward Party; social critic Sarinee Achavanuntakul; writer-translator Tomorn Sookprecha; satirical TV host Winyu “John” Wongsurawat; and academic Pavin Chachavalpongpun.

“We are immediately building out our translation and localization framework for Thai and many other languages. This should be finished within a few weeks. All of our project is fully open source at https://developers.minds.com and people can submit requests at https://gitlab.com/minds. They can also inspect our software to make sure we aren’t manipulating algorithms or user privacy. We already have Thai developers helping with the code and building new tools,” Ottman said.

Last year, Ottman reported that the site had about 200,000 active monthly users, so gaining more than 200,000 new users in a few days is an incredible win for the site.

Minds was co-founded in 2011 by Bill Ottman and John Ottman; other cofounders were Mark Harding, Ian Crossland, and Jack Ottman.

In June 2017, the company raised over $1 million in the fastest equity-crowdfunded sale at the time. Then in March 2018, Minds exited Beta and launched a white paper and testnet for its new native mobile apps and Ethereum blockchain integration.

In October 2018, Minds raised $6 million in Series A funding from Medici Ventures, an Overstock.com subsidiary. Patrick M. Byrne, the founder and CEO of Overstock.com, also joined the site’s board of directors.

You can sign up for Minds HERE, and make sure you follow Truth Theory on Minds.

from:    https://truththeory.com/2020/05/23/censorship-free-social-network-explodes-after-adding-200000-new-users-in-just-a-few-days/