Don’t Replace Facebook, Disrupt It

Tony Cartalucci | Activist Post

Facebook is a problem. It is undoubtedly being used by special interests to manipulate and monitor entire populations both within the United States and well beyond. It is a tool that in no way serves the people actually using it, and instead allows special interests to use the users. It is a dream global panopticon for the abusive dictators who run Western society and presume dominion over what they call an “international order.”

But in order to counter this threat, Facebook cannot simply be “replaced.” It specifically, and what it represents, must be disrupted entirely.

Facebook is a Skinner Box for Humans

Facebook has been at the center of several recent controversies that are increasingly leaving users disillusioned and in search of alternatives. At the center of these controversies is Facebook’s “news feed” feature. Ideally, news feed would show on your timeline updates from the individuals and organizations you follow. There are two options for news feed – “most recent” and “top stories.” Facebook has decided to upend this feature by insidiously controlling what appears on your news feed regardless of which option you select.

Now, you will no longer receive regular updates from accounts you follow, and instead will see a “filtered” version determined by Facebook’s algorithms. Many Facebook users are unaware of this fact and are perplexed as to why they are no longer receiving regular updates from accounts they follow.

Facebook’s own explanation as to why they’ve implemented this policy is as follows:

Rather than showing people all possible content, News Feed is designed to show each person on Facebook the content that’s most relevant to them. Of the 1,500+ stories a person might see whenever they log onto Facebook, News Feed displays approximately 300. To choose which stories to show, News Feed ranks each possible story (from more to less important) by looking at thousands of factors relative to each person.

Facebook’s real motivation is more likely a combination of implementing soft-censorship and an effort to monetize news feeds by forcing content makers to pay in order to access people already following them. What’s left is wealthy content makers like large corporate media outfits monopolizing the public’s attention whether the public wants it or not.
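
Facebook’s own description of News Feed ranking (scoring every candidate story against many per-user factors, then showing only the top slice) can be sketched as a toy ranking function. This is purely illustrative: the factor names, the weights, and especially the paid-boost signal below are hypothetical, not Facebook’s actual model.

```python
# Toy relevance ranking: score every candidate story, keep the top slice.
# All factor names and weights here are hypothetical, not Facebook's model.

def score(story, weights):
    """Weighted sum of whatever per-story signals the platform chooses."""
    return sum(w * story.get(factor, 0.0) for factor, w in weights.items())

def rank_feed(stories, weights, limit=300):
    """Return the `limit` highest-scoring stories, best first."""
    return sorted(stories, key=lambda s: score(s, weights), reverse=True)[:limit]

weights = {"friend_affinity": 3.0, "recency": 1.0, "engagement": 2.0, "paid_boost": 5.0}

stories = [
    {"id": "friend_update", "friend_affinity": 0.9, "recency": 0.8, "engagement": 0.1, "paid_boost": 0.0},
    {"id": "corporate_news", "friend_affinity": 0.1, "recency": 0.5, "engagement": 0.4, "paid_boost": 1.0},
]

for s in rank_feed(stories, weights):
    print(s["id"], round(score(s, weights), 2))
```

In this toy model, a heavily weighted paid-boost signal lets a promoted corporate story outrank a close friend’s update, which is precisely the monetization dynamic the paragraph above describes.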

News feed has also been used in at least two involuntary social engineering experiments in which the news feeds of users were manipulated without their knowledge to influence them psychologically. In the most recently exposed experiment, Facebook manipulated the news feeds of some 2 million Americans in 2012 in order to increase public participation during that year’s US presidential election.

In 2013, Facebook would again manipulate news feeds of unwitting users to influence them psychologically. A report published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS) titled, “Experimental evidence of massive-scale emotional contagion through social networks,” stated in its abstract that:

We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.

Not only are the findings troubling – illustrating that Facebook possesses the ability to influence the emotions of its users unwittingly through careful manipulation of their news feeds – but the invasive, unethical methods by which Facebook conducted the experiment are troubling as well. Those involved were notified neither before nor after the experiment was conducted. Together with the news feed manipulation during the 2012 election, it appears Facebook sees the news feed feature as a means of influencing people as Facebook and its clients see fit, rather than of informing users as they themselves see fit.

Facebook is, essentially, a massive, global, digital “Skinner box.” Also known as an operant conditioning chamber, a Skinner box conditions a subject – usually an animal – to perform certain behaviors by controlling the positive and negative stimuli delivered within the box. Pressing the correct lever would provide, for example, food pellets, while pressing the wrong lever would deliver a painful electric shock.
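
The conditioning loop just described can be sketched as a toy simulation: a minimal value-learning “subject” (all numbers illustrative) that gradually comes to prefer the rewarded lever.

```python
import random

# Toy Skinner box: lever "A" dispenses food (+1), lever "B" gives a
# shock (-1). The subject keeps a running value estimate for each lever
# and, trial by trial, comes to prefer the one that was rewarded.

REWARDS = {"A": 1.0, "B": -1.0}

def condition(trials=200, learning_rate=0.1, explore=0.1, seed=42):
    rng = random.Random(seed)
    values = {"A": 0.0, "B": 0.0}  # learned value of each lever
    for _ in range(trials):
        # mostly pull the better-valued lever, occasionally explore
        if rng.random() < explore:
            lever = rng.choice(["A", "B"])
        else:
            lever = max(values, key=values.get)
        # nudge the estimate toward the stimulus just received
        values[lever] += learning_rate * (REWARDS[lever] - values[lever])
    return values

values = condition()
print(values)  # "A" ends with a far higher learned value than "B"
```

The analogy to a manipulated news feed is direct: whoever controls which stimulus follows which behavior controls what the subject learns to do.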

Facebook, in this way, admits it regulated positive and negative stimuli in its 2013 experiment, and in 2012 it manipulated the behavior of subjects through the use of specifically formulated stimuli. There is no telling what other experiments or ongoing manipulations Facebook users might be subjected to, or whether other IT monopolies like Google are using similar means to influence, manipulate, and condition the behavior of users.

Disrupting Facebook

The first thing many Facebook users do upon learning of this is look for alternatives. One in particular, Ello, grabbed headlines recently as a “Facebook killer.” But should Facebook’s 1 billion-plus user base migrate over to Ello, would there be anything to stop special interests from simply co-opting and corrupting its basic premise of not manipulating users or invading their privacy? Most likely not.

Instead, efforts should be made to disrupt Facebook and the centralized social networking premise it represents – in other words, to decentralize social networking so that no single network controls the information, rules, and regulations that define social networking in general.

On a global scale this is already being done. Nations like Russia, China, and Iran have produced their own indigenous versions of Facebook – separate not only from Facebook’s monopoly, but from the intrusive, abusive exploitation of that monopoly by corporate-financier interests on Wall Street and in the City of London. Russia’s VK.com, for example, boasts 120 million users around the world and, within Russia itself, is the most popular social networking site, far eclipsing Facebook’s market share. While the Western media criticizes VK as a tool of the Kremlin, in light of recent scandals exposed in the West, the same could be said of Wall Street and London’s use of Facebook.

But decentralizing Facebook’s grip on social networking to a national scale isn’t enough. While many may feel affinity with the current political order in Russia, some day that may no longer be the case. Further decentralization – in fact, infinite decentralization – should be the ultimate goal.

Forums, Websites, and RSS Analogies

Web forums are numerous and are, in many ways, micro social networks in and of themselves. They are built around interests in entertainment, skills and hobbies, commerce, political ideology, religion, and many other personal interests. While one must become a member of these forums to participate, anyone can search the Internet and find threads containing useful information. It would be difficult to find the “Facebook” of Internet forums because, while there are very large and well-known forums, there is no monopoly.

Creating a new social networking paradigm based on a similar notion of infinite decentralization is not only possible, it is inevitable – just as soon as programmers and developers stop trying to create the next “Facebook” and begin contemplating the next paradigm shift in social networking altogether. That shift must satisfy the growing desire to escape monopolized networks with proclivities toward invading the privacy of their users and manipulating and influencing them through insidious social engineering.

Image: What will come next? Another Facebook, or something that will shift the paradigm of social networking entirely? Centralized networks are prone to abuse. Even networks like Ello that initially show promise share the same weakness of over-centralization, which will undoubtedly be targeted by special interests. A decentralized social networking paradigm, with tools to mesh networks together as users desire, could represent just such a shift.

Imagine open source tools like wikis or WordPress that allow anyone to create their own social network based around any specific interest or series of interests. Imagine tools like RSS feeds that allow users on one social network to follow user updates on another social network without actually joining that network. Imagine being able to take your information and import it into a new social network if, for whatever reason, you no longer liked the rules, regulations, and practices of the network you were in – tools like WordPress’ import options, which allow Blogger users to migrate over along with all their previous Blogger content.
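
The RSS analogy is already technically trivial: any network that publishes a standard RSS 2.0 feed can be followed from outside it. A minimal sketch using only Python’s standard library (the feed snippet and URL below are placeholders, not any real network’s endpoint):

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_feed_items(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]

def fetch_feed_items(url):
    """Follow another network's feed without ever joining that network."""
    with urllib.request.urlopen(url) as resp:
        return parse_feed_items(resp.read())

# Placeholder snippet standing in for another network's live feed.
sample = """<rss version="2.0"><channel>
  <title>Another Network</title>
  <item><title>First post</title><link>https://example.com/1</link></item>
</channel></rss>"""

print(parse_feed_items(sample))  # [('First post', 'https://example.com/1')]
```

A decentralized network could expose exactly this kind of open feed, so that following someone never requires accepting their network’s rules.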

Facebook – and undoubtedly VK and other large social networks – has various groups of disenfranchised users who are unable to use these networks as they truly desire. Facebook, for instance, has faced criticism for requiring users to use their real names when creating profiles. Minority groups that prefer anonymity could create their own social network to cater specifically to their interests and agenda. They could follow popular feeds from other social networks, but preserve their own community created by, for, and of themselves.

In this way, instead of simply trying to replace Facebook with the next soon-to-be co-opted, corrupted, and overbearing social networking monopoly, the entire paradigm will be shifted in favor of what users actually want – privacy, the ability to control what content they receive, and to associate with whom they want, how they want. With hundreds if not thousands of these interconnected but ultimately independent networks cropping up, it will be impossible for monopolistic interests to co-opt, control, or censor them all, or even a majority of them.

Read more from Tony Cartalucci at LocalOrg, where this first appeared.

More from Activistpost

Facebook Boosts News Feeds of Top 100 Media Outlets in Secret Political Experiment

Eric Blair | Activist Post

What is the point of a social network that doesn’t share your content with friends and followers? Oh, yeah, for profit, government spying, emotional experiments and now, political manipulation.

Since it went public, Facebook has been playing with its algorithms to prevent “viral” content from spreading naturally, in favor of charging users to show content to their followers. This profit-seeking strategy destroyed the only thing that made Facebook useful. Now it seems to serve as little more than an oversized telephone or IM app. But underneath, in the shadows, it’s still so much more than that.

Mother Jones reports that Facebook has been conducting stealthy political experiments on users, including tweaking the news feeds of almost 2 million users to boost articles shared from the top 100 media outfits. The purpose was to test voter turnout in the 2012 election.

As Huffington Post summarizes:

Facebook quietly tweaked the news feeds of 1.9 million users before the 2012 election so they would see more “hard news” shared by friends.

That change may have boosted voter turnout by as much as 3 percent, according to a little-known study first disclosed Friday by Mother Jones.

For the study, news articles that Facebook users’ friends had posted appeared higher in their feeds — the stream of status updates, photos and articles that show up when you first sign on to the site. The researchers wanted to see whether increasing your exposure to news stories shared by friends before an election would convince you to vote.

Facebook said the news stories being shared were general in nature and not political. They came from a list of 100 top media outlets, from the New York Times to Fox News, according to the Mother Jones story, written by Micah Sifry, a democracy activist.

Lost in the reporting about this voting experiment is how dangerous it is to only boost establishment news feeds for political outcomes. What if they switch to only boosting GOP or Democrat news feeds? Could they sway elections?

Continue reading

Foster Gamble: Debunking Debunkers

Foster Gamble  | ThriveMovement.com | Aug 30 2014

I love debunkers — the REAL ones. I am a rational skeptic and I know a dedicated and skillful debunker can save us all time and help keep us from being duped yet again in dangerous and impactful ways. The problem is finding and identifying the real ones in a murky sea of fake naysayers and hating trolls with a hidden and biased agenda that does not prioritize truth.

We are living in an unprecedented era where one person or a small team can use independent and alternative media to communicate key perspectives to millions of people worldwide — in a short amount of time. Given what we are dealing with in the way of planetary demise, this is a really good thing!

The flip side, however, is that someone with very little real expertise can undermine valuable inquiry and healthy skepticism. I met one man who said he was going to watch THRIVE, but then changed his mind when he saw that it “had been debunked.” I asked if the debunking had made sense to him. “Oh I didn’t actually read it. I just saw it had been debunked.” My immediate reaction was a punch in the gut. How often are people throwing away or discrediting years of valuable fact-checked research on the grounds of a baseless attack?

Most importantly, why is that and how do we get good at sorting out the truth?


If you are challenging the dominant paradigm as peddled in the corporate media and your influence is expanding, debunkers will enter your life. It goes with the territory. One of our seasoned advisors, a successful whistleblower who has survived one wringer after another, told us before THRIVE came out, “You’re going to get it from all sides, and if you’re not taking flak, you’re probably not over the target yet.”

I want to offer some of what we have learned about debunkers, detractors, haters and trolls in relation to THRIVE.

  • Who are they?
  • Why do they do what they do?
  • How can we recognize and deal with them effectively?
  • And why bother?

An accurate assessment of what’s going on is critical if we want to create effective solutions. If we don’t have a true understanding of the problem, we won’t put our attention on the innovations that can best meet the challenges we face. So debunking the debunkers has a huge payoff. It helps us to sleuth out factual truth and create a safe environment from which to engage in meaningful public dialogue and transform our world into one that actually works for everyone.


I looked it up, and “bunk” is short for “bunkum.” It’s an old word from 19th century America that means “nonsense.” Other dictionary synonyms are “baloney, rot, hogwash, applesauce, bull, and hooey.” So when a group like snopes.com spends serious time and rigor separating the fabricated hogwash and hooey from facts and realities on the Internet, it’s a valuable service. Wikipedia used to be helpful, but is now so co-opted on virtually every controversial issue, that its merit has been severely undermined for important topics that challenge major money or control interests.


I remember the first time I heard (from my son!) that the Federal Reserve was a private corporation and that no government agency could overrule its actions. I found it hard to believe and went on the Internet to look it up. At the time, there were very few sites addressing this issue, and many of them simply discredited anyone questioning the reality or wisdom of the Fed’s printing money out of nothing. I came away not knowing what to think. Fortunately, my son kept giving me more and more evidence. The same thing happened when I first heard about the military covering up its involvement with UFOs. The notion seemed far-fetched to me at first, and by the time I made it through the first debunking sites, I would have been skeptical that there was any real issue to discover at all, were it not for the dedication of a few individuals who woke me up.

Fast forward to 2014, where we now have over 36 million sites addressing the issue of the Federal Reserve, many of which are intelligent analyses and critiques of a corrupt system of counterfeit finance that has left the country and an alarming percentage of its citizens in debt slavery. It’s an acknowledged fact that Brazil, Russia, India, China and South Africa (known as the BRICS countries) have now formed their own bank to bypass the stranglehold of the Federal Reserve and the World Bank, and people from all walks of life acknowledge the corruption of the system and the need to get out from under it.

As for UFOs and the military, while debunking sites abound, there are enough credible documents and confessions from high-level government, military and FAA insiders to inspire any sincere researcher to look further, and to recognize that the subject warrants serious inquiry and public dialogue.

What happened? What did it take to overcome a disinformation campaign and legitimize once-denied information and insight? My guess is that it took persistence, and a lot of people with a strong enough desire to get to the truth that simple mudslinging and cheap dismissals weren’t enough to squelch their questioning… despite the discomfort.


By the time we made THRIVE, Kimberly and I had both experienced the discomfort of asking socially taboo questions and looking into controversial topics enough to recognize the power of disinformation campaigns. We suspected we would likely be the target of disinformation and reputation-bashing when we released THRIVE. And of course, that turned out to be true.

THRIVE covers at least 14 topics that stretch the status quo. For most people, there is at least one theme that’s challenging to consider. But in the nearly three years since its release, there is not a single fact in THRIVE that has been disproven. And yet the debunking of our film was rampant in its early days.

Now, virtually everything and everyone who is effectively challenging the banking elite’s agenda for global control will have sites or trolls actively debunking them and their message.


First, let me say again that there are skillful truth-seeking “debunking” sites whose priority seems to be accuracy and who seem to get it right almost all the time. And then there are hired hands who work for governments, corporations, intelligence agencies, the military, and political parties. I have been assured of this by people formerly on the inside, and here is a video clip that documents and verifies some of this.


These cyber mercenaries are called “trolls” perhaps because their behavior resembles the mythical mini-beasts who live under bridges and hassle innocent passers-by.



When working for these types of groups, their job is to find anything that might undermine the credibility and propaganda of their institutions and then attack the content with:

  • Disinformation.
  • Distraction.
  • Outright lies.
  • Attempts to smear the credibility of the truth-teller.

If none of that is effective, the next tactic is to make it unpleasant and unsafe for anyone to make positive comments, effectively scaring enthusiasts away from the site or thread altogether.


  1. Vicious attacks against the person who is providing the information rather than the facts themselves.
  2. Name-calling and mud-slinging with no evidence.
  3. Malicious disregard for the value of public debate and discussion, as if to question or bring up an alternative view is to be shunned.
  4. No proposed solutions to the problem being discussed.
  5. Lack of facts or rational logic to support their argument.


Some real examples from our experience:

Attacking someone for their references, with a guilt-by-association campaign.


  • Notice the one eye in the THRIVE poster? It’s proof that Gamble is an Illuminati corporate shill!
  • Oh No! Not the Elite Minority! UFO’s, Free energy, NWO, Crop Circles, Chariots of the gods. This Documentary has all the BS rolled into one. All it needs is antivaxers and 9/11 truthers to be complete!
  • Anybody buying this pile of crap is too stupid to deserve to exist. Eat s—t and die!

And this from our cannabis healing blog from last week, that had over 100,000 views in the first 24 hours:

  • Thrive = Procter and gamble = elite = fluoridation = deceipt. [sic]

This came up a lot for us. THRIVE was never intended to be a political movie, so we had no concern for the political affiliation or lack thereof of any of the people we interviewed. Debunkers on the Left would say:

“Did you know they interviewed Ed Griffin and he was a John Bircher?”

While people on the Right would say:

“Did you know they interviewed Deepak Chopra and he is a complete liberal?”

While still others would say:

“They interviewed David Icke and he talks about reptilian ETs.”
(So his decades of documented research about the banking families must be useless?)

What if instead we consider the fact that THRIVE transcends political party politics and offers solutions based on principles that empower people globally? Since when does a reliable resource for information and insight have to be someone with whom you agree 100%?

If we are going to create meaningful solutions, then we need to listen for truth and documented information even if it comes from someone with whom we have other differences of opinion. And why dismiss someone with a vengeance because they have a different worldview?

The vehemence is often a sign of the debunker’s lack of legitimacy. Anyone serious about engaging in true discourse and finding real solutions can figure out a way to challenge respectfully and with evidence of a legitimate alternative perspective.

Another sign of intentionally misleading debunkers is their tendency to operate anonymously.

If you go to the website or Facebook page (if they have one) of these types of commenters…

  • Their identity is often obviously contrived.
  • They have little or no biography.
  • Virtually no friends.
  • Their face is obscured in pictures.
  • Often their names are straight from central casting like: Muertos, CraveHell666 or Arturobastard (seriously…these are genuine!)


In addition to those who get paid to do this, there are, of course, scads of lonely, frustrated, abused, duped, dumbed-down, medicated people who just vent their unexpressed anger through their keyboard in the middle of the night. They hide in the shadows of anonymity and vent against any target, particularly if a person or project seems to be having some of the success and influence that they have not been able to achieve.

We have noticed that the level of self-responsibility seems inversely correlated with the level of anonymity of the one commenting — that is, correlated with how much must be disclosed to have a commenting account. Self-responsibility is lowest on YouTube, a little higher on Facebook, and highest of all on our own website.


Sometimes trolls and haters are best ignored. When they don’t attract much attention or achieve their desired effect, they often give up or move on to somewhere they might. Other times, I find they must be called out and confronted directly with facts, and especially with pointed questions. Usually they leave rather than have to think, research, or admit they were making things up.

“Muertos,” the anonymous source behind the “THRIVE — Debunked” website, fit all the above criteria for being a hired troll. The conversations on his site began to expose him and accuse him of this. Rather than issue a denial, he tried to deflect the accusations by inviting people on his site to take a poll on how many thought he was a troll. What? After a year, he announced the demise of the Thrive Movement, even though the film was still getting over a million views a month and hundreds of solution groups were forming around the world in response to the coherence the film and website provided. In truth, it was his own site that closed down within days. Of course the remnants live on in search engines to confuse those who are uninformed about the nature of these distractions.

One site posted a scathing review of THRIVE that turned out to be based only on the trailer. The author hadn’t even seen the movie. To their credit, when I addressed the irresponsibility of this act on their comment board, they watched the movie, invited me for an interview, apologized and publicly mended the breach.

Others discredited us for being one political party or another — truly it came from all sides equally because we are not politically affiliated. Others completely ignored the transition strategies that we outlined in the movie and on the website and accused us of being elitists who don’t care about people because we do not believe increasing taxes is a sustainable solution.

Some have attacked me simply because one of my ancestors started a company that provided me with some inheritance. When challenged for being from a wealthy family, I have asked would-be debunkers, “If you won the lottery and suddenly became a millionaire, what would you do with the money to better our chances of thriving?” Some genuine inquirers have engaged the question creatively.

Another popular Internet radio host, who has acknowledged that he disseminates information for the government, launched into a completely false attack on me during a panel at a large conference. I was elsewhere giving my presentation, but someone who knows me was present and confronted him with facts while pointing out his divisive tone. To his credit, when I confronted him respectfully, but firmly, he apologized. I went on his show with the agreement I could address all the falsehoods. His listeners were very supportive and the incident was healed.

The lesson I have taken from this is that when we have confronted debunkers without malice or personal attack, but with clarity, we have often been able to raise the level of interaction.

We have been especially gratified and encouraged to see our own vast and diverse network responding intelligently, knowledgeably and, in most cases, respectfully, to mitigate or drive away cruel, uninformed attacks. At this point, there are very few trolls who spend much time commenting on our movie or movement in negative ways, because the facts have spoken for themselves, our reputations and integrity have held up to extreme scrutiny and our network is discerning and protective.


We each need to seek truth and live with integrity to be ultimately unassailable.

I learned from training and teaching the non-violent martial art of Aikido and from being a conflict resolutions facilitator in Silicon Valley that when confronted by an attacker, it is possible to respond non-violently by either DISENGAGING from the energy, or — if it is persistent and won’t go away — by IMMOBILIZING it with exposure and truth.

Sometimes we must be pro-active and strategic to make sure the lie doesn’t get perceived and repeated as truth. We can turn their own aggression back on the perpetrators — nonviolently — assuring that consequence clarifies their own choice of falsehood or truth, pain or love. This way we can model what we are after, while honoring the essential and courageous work that is called for to expose the true nature of the problems we face and cultivate the discerning open-mindedness that real solutions require.

Please let us know your insights with this troll and debunking challenge and what has worked for you in dealing with it.

Net Neutrality, Filtering and Ferguson, MO

Zeynep Tufekci | medium.com | Aug 14, 2014

Ferguson is about many things, starting first with race and policing in America.

But it’s also about internet, net neutrality and algorithmic filtering.

It’s a clear example of why “saving the Internet,” as it is often phrased, is not an abstract issue of concern only to nerds, Silicon Valley bosses, and a few NGOs. It’s why “algorithmic filtering” is not a vague concern.

It’s a clear example why net neutrality is a human rights issue; a free speech issue; and an issue of the voiceless being heard, on their own terms.

I saw this play out in multiple countries — my home country of Turkey included — but last night, it became even more heartbreakingly apparent in the United States as well.

For me, last night’s Ferguson “coverage” began when people started retweeting pictures of armored vehicles with heavily armored “robocops” on top of them, aiming their muzzles at the protesters, who seemed to number a few hundred. It was the fourth night after an unarmed black man, Michael Brown, was shot by a — still unnamed — police officer after a “jaywalking” incident. Witnesses say he died hands in the air, saying “don’t shoot.”

The first night Mike Brown was shot, a friend asked on Twitter whether this would ever make the national news. It deserved to be national news, as multiple significant, ongoing crises intersect in Ferguson: the loss of jobs, which hits these communities hardest; the militarization of US police departments; race; chronic multi-generational poverty.

But those very factors often make it less likely such places make the news, except as trouble spots. Places to be ignored. Avoided. “We” hear it only through official statements, often dismissing local concerns, painting them as looters, thugs, troublemakers.

Yes Ferguson will make news, another friend tweeted, because… well, here you go: Twitter.

It seems like a world ago in which such places, and such incidents, would be buried in silence, though, of course, residents knew of their own ignored plight. Now, we expect documentation, live-feeds, streaming video, real time Tweets.

I watched this interaction online. When the local police department in Ferguson showed up at the first vigils for this young man with dogs, the outrage spilled over to people who may not have been following it the first day. When, night after night, reports of tear gas came in, more national journalists went to the area, and more residents deliberately turned on their cameras. More and more people started talking about this.

Yesterday, national journalists were harassed, assaulted, arrested — without paperwork — while sitting quietly, recharging their phones at McDonald’s — captured on video. Police positioned like snipers on top of armored, anti-mine vehicles kept their rifles — I have no idea what kind — aimed at protesters within full view of national media, in broad daylight — pictured from multiple angles.

This unfolded in real time on my social media feed which was pretty soon taken over by the topic — and yes, it’s a function of who I follow but I follow across the political spectrum, on purpose, and also globally. Egyptians and Turks were tweeting tear gas advice. Journalists with national profiles started going live on TV. And yes, there were people from the left and the right who expressed outrage.

I write and talk often about protest over-policing in multiple countries, so the topic was not a new one to me, but I saw many people who I know don’t necessarily follow this day-to-day (and no condemnation — not everyone can, or should, follow every worthy issue) start talking about it.

And this is what happened to “Ferguson” on Twitter:


And then I switched to the non-net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

No Ferguson on Facebook last night. I scrolled. Refreshed.

This morning, though, my Facebook feed is also very heavily dominated by discussion of Ferguson. Many of those posts seem to have been written last night, but I didn’t see them then. Overnight, “edgerank” — or whatever Facebook’s filtering algorithm is called now — seems to have bubbled them up, probably as people engaged with them more.

But I wonder: what if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?

Would we even have a chance to see it?

This isn’t about Facebook per se — maybe it will do a good job, maybe not — but about the fact that algorithmic filtering, as a layer, controls what you see on the Internet. Net neutrality (or the lack thereof) will be yet another layer determining this. It will come on top of existing inequalities in attention, coverage and control.

Twitter was also affected by algorithmic filtering. “Ferguson” did not trend nationally in the US on Twitter, though it did trend locally. [I’ve since learned from @gilgul that it *briefly* trended nationally but mostly trended in localities.] So there were fewer chances for people not already following the news to see it in their “trending” bar. Why? Almost certainly because there had already been simmering national discussion for many days, and Twitter’s trending algorithm (said to be based on a method called “term frequency–inverse document frequency”) rewards spikes. So, as people in localities who had not been talking much about Ferguson started to mention it, it trended there, while the national build-up over the previous five days penalized Ferguson.
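To see why days of sustained discussion can suppress trending, here is a toy version of that spike-rewarding idea, loosely in the spirit of term frequency–inverse document frequency. The formula and counts are illustrative assumptions, not Twitter’s actual implementation: a term with a high recent baseline (“ferguson”) must burst far above it to score, while a brand-new term spikes easily.

```python
import math

def trending_score(counts_now, counts_baseline, term):
    """Toy spike score: current frequency weighted by how rare the term
    was in the recent baseline window. NOT Twitter's real algorithm.

    A term already discussed heavily (high baseline count) gets a small
    log weight, so it needs a much bigger burst to register as a spike.
    """
    now = counts_now.get(term, 0)
    base = counts_baseline.get(term, 0)
    total_base = sum(counts_baseline.values())
    return now * math.log((1 + total_base) / (1 + base))
```

With a baseline where “ferguson” already accounts for hundreds of mentions, an identical burst of a previously unseen term scores an order of magnitude higher — the “penalty” for a slow national build-up.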

Algorithms have consequences.

Mass media typically does not do very well covering the chronic problems of unprivileged populations; poor urban blacks bear the brunt of this, but they are not alone. Rural, mostly white America, too, is almost always ignored except for the occasional “meth labs everywhere” story. But yesterday, many outlets were trying, except the police didn’t let them. Chris Hayes said that police ordered satellite trucks out of the area so that they could not broadcast live. The Washington Post was only one of the outlets whose journalists were arrested; citizen journalists were targeted as well.

On the scrappy live feed kept up by frequently tear-gassed, coughing citizen journalists, I heard the announcements calling on them to “turn off their cameras.”

Continue reading

How to Stop Facebook from Using Your Browsing History

Rachel Grussi | GaiamTV | July 30 2014

Raise your hand if you like Corporate America following your browsing history and using it as targeting for advertisers! Anyone? No?

Well, Facebook is changing its game, regardless of how you feel about it. Facebook has announced that it is going to use app and website data from your browsing habits to provide more targeted ads. In part, it’s nothing new; Facebook has been watching the online behavior of all 1.28 billion monthly users, thanks to all that “Like” action we’ve been taking, whether it’s on a friend’s picture of their dogs or your favorite brand of nut butter. But it’s now going beyond even this technological invasion.

Why is this such a big step? In times past, it was only your declared interests (Liked pages, profile information) that directed Facebook’s advertising missiles. Now, Facebook is going even further and using passive data, meaning where you go on your computer and mobile devices, to make its ads smarter. Have you ever noticed the little thumbs-up button on sites that are totally unrelated to Facebook? This is the cue that now, if you’re a Facebook user, you can be recognized on sites where the button is encoded, whether you’re Liking something or not. While Facebook won’t be adding users tracked via desktop Likes to the targeting mix at present, according to Facebook’s VP of ads product marketing, it’s in the works, and you can expect more to be coming soon.

For now, the game plan is that it will capture websites that use Facebook’s conversion tracking pixel, placed by advertisers to see if their Facebook ads are yielding sales and traffic, plus mobile apps that use Facebook’s software development kit to deploy Facebook services, like the log-in. Any websites and apps that have Facebook’s tracking software encoded to retarget their visitors are also in on it. Impressions tracked via the Like button encoded in mobile apps, which Facebook recently introduced at its f8 conference for developers, will also be included.
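Mechanically, each embedded pixel or encoded button fires a request back to Facebook that carries the browser’s Facebook cookie, which is what lets hits from many unrelated sites be joined into one cross-site profile. The sketch below illustrates only that join step; the field names (`cookie_id`, `site`) are hypothetical stand-ins, not Facebook’s real log schema.

```python
from collections import defaultdict

def build_profiles(pixel_hits):
    """Join tracking-pixel requests into per-cookie browsing profiles.

    Each hit is a dict with hypothetical fields: 'cookie_id' (the
    identifying cookie the browser sends automatically with the pixel
    request) and 'site' (the page that embedded the pixel). Purely
    illustrative of the cross-site join, not a real ad-tech pipeline.
    """
    profiles = defaultdict(set)
    for hit in pixel_hits:
        profiles[hit["cookie_id"]].add(hit["site"])
    # Sort each profile for stable, readable output.
    return {cid: sorted(sites) for cid, sites in profiles.items()}
```

The point of the sketch is that no site-to-site cooperation is needed: the shared cookie alone is enough to stitch a retailer visit, a news read and an app login into one dossier.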

All of this will be deployed in a matter of weeks. The long and short of it is, if you look at a dress once on a site like Macy’s, you’ll probably see the exact same dress again in your news feed. On top of this, it’s a little unnerving that Facebook will be blithely ignoring the do-not-track settings on web browsers, all because “currently there is no industry consensus.” Some sites do honor those settings, like Twitter and Pinterest, while Google and Yahoo don’t.

It’s not all advertising doom and gloom, though; more control over the ads you see is coming as well. Soon, Facebook will add a drop-down menu to ads that will allow you to remove a brand from your ad interests, the only issue being that there are a LOT of brands out there.

For now, though, Facebook points out that you’ll have to opt out with the Digital Advertising Alliance (DAA) to stop your browsing habits from being shared with advertisers. Here’s Gizmodo‘s guide on how to get it done (note: if you’re using Adblock Plus or anything else that disables cookies, you’ll need to turn that off before you can opt out):


When you get to DAA, you’ll see the above screen. Pick the “Companies Customizing Ads for Your Browser” tab, and scroll down until you see Facebook.


Select the little check box next to Facebook. While you’re on this page, be sure to check off any other sites you’d like to stop from using your browser history. Once you’re done, hit “Submit.”


Now that you’ve completed this for your computer web browser, you may want to think about doing the same for your mobile devices, seeing as how we’re all constantly flicking through our feeds on our phones or tablets.

For iOS users, open settings and go to General>Restrictions>Advertising (under the “Privacy” section). Flip the switch for Limit Ad Tracking, and you’re all set.


For Android users, go to Google Settings>Ads>Opt Out of Interest-Based Ads, and that’s all you’ll need to do.


Technology is an irreversible part of our lives, but whether it will affect our futures negatively or positively remains to be seen.


Facebook Apologizes For Manipulating Your Emotions


The authors of a controversial Facebook study apologized after their experiment to see if emotional states are contagious caused quite a stir over the weekend. READ MORE: Facebook apologizes after secret psychological experiments caused outrage among users

Facebook conducted a massive psychological experiment on nearly 700,000 users, manipulating their news feeds to assess the effects on their emotions. The details of the experiment were published in an article entitled “Experimental Evidence Of Massive-Scale Emotional Contagion Through Social Networks” in the journal Proceedings of the National Academy of Sciences. The short version of the study results is this: Facebook has the ability to make you feel good or bad by tweaking what shows up in your news feed.

Why There’s No Such Thing As A Private Facebook Chat

Lauren C. Williams  | Thinkprogress | May 8th 2014

If you think your messages to your Facebook friends are private, think again. The social network announced that it has plans to look at your personal conversations as a way to make more profits from targeted advertising.

Facebook has been a leader in data-mining, taking information from people’s profiles and studying their behavior to make money and improve the website. But its decision to delve into private content marks the next frontier for Big Data. Silicon Valley and big businesses alike have become increasingly reliant on data mining, which can predict election outcomes based on social media posts, or make a connection between what words people use and the weather.

In its quarterly investors conference call in late April, Facebook’s chief operations officer, Sheryl Sandberg, explained exactly why the company is going further to track your data: “Our goal is that every time you open News Feed, every time you look at Facebook, you see something, whether it’s from consumers or whether it’s from marketers, that really delights you, that you are genuinely happy to see.”

To do that, Facebook wants to take a look at your private messages. “Facebook historically has focused on friends and public content,” Facebook CEO Mark Zuckerberg said on the call. “Now, with Messenger and WhatsApp, we’re taking a couple of different approaches towards more private content as well.”

Your chats reveal more about you than you think

Private messaging has become an incredibly popular feature, which nearly every top social media app centers itself around. Snapchat, the picture-sharing app that automatically deletes pictures seconds after they’re sent, just added a one-on-one chat and video function similar to what Twitter and Google already have.

“People are making more intimate connections now than ever before just by chatting through a window on a screen,” Ramani Durvasula, PhD, a Los Angeles-based psychologist at California State University, Los Angeles, told ThinkProgress.

Those private conversations are rife with details that may seem insignificant on the surface but provide valuable insight into a person: “What people share via a private chat and what they share in a status update are vastly different,” Durvasula said. That’s what makes personal conversations “the best place to get data because it’s uncensored.”

People are already generally uninhibited online, sharing everything from their emotional ups and downs, to live-tweeting childbirth. But what’s said one-on-one pulls back another layer, exposing what truly makes one tick — “The stressors people share, the intimacies, give insight to what people are most passionate about,” Durvasula said.

Private chats online also tell companies like Facebook how you use technology, what kinds of information you share on which platforms and with which audiences. “Some people use it much more for one-to-one communications than they would use the other parts of Facebook,” Augustin Chaintreau, assistant professor of computer science at Columbia University in New York, explains. For example, Facebook may be interested in seeing whether certain users prefer emailing or texting loved ones, and only use its Messenger app to keep up with more tangential relationships. Or the data could be used to tell whether someone was in distress or needed help, he added.

But there’s a risk in trying to piece together a profile of a person based on their online habits, Chaintreau said. “The risk is that there is a natural reason that people do different things in different places. You’re a different person, have different behaviors. The emotions will be different. Even if you’re very consistently presenting yourself [across the Web], you may or may not like a particular message presented on one platform or app versus the other because it doesn’t fit who you are [or what you’re doing in that space].”

All of those pieces of conversations — telling a friend you went to the doctor Tuesday, where you stayed on vacation, the fight you had with your significant other — add up and paint a fuller picture of users, leading to better products and ads recommending clinics, hotels and relationship counselors, Durvasula said: “Everything you say, every character typed is being watched. So if you’re typing a [private] message at 1 a.m., that means you could get targeted for an Ambien ad.”

In the past, Facebook tracked what users didn’t post in status updates, and was able to determine which types of users self-censored the most.

Those are the kinds of details that give companies an advantage, Pamela Rutledge, PhD, director of the Media Psychology Research Center in Newport Beach, Calif., told ThinkProgress. “There’s monetary value in conversation. What do new moms worry about, and how does that change over the lifespan? So [as a company] you’re really stepping into the shoes of your customer. And what better way to do that than to look through private conversations,” Rutledge said.

“People don’t realize what they’re putting out there,” Durvasula said. Almost everyone uses the Internet on a daily basis, with more than 65 percent having a photo publicly posted online, according to a Pew study. One in two Web users has their email address, birth date or old job posted publicly. Those numbers jump significantly when you look at teens’ use: almost all teens use their real name and post their interests, birth dates and pictures of themselves, Pew found. Over 70 percent have their school name and where they live posted.

And when it comes to personal conversations, even more could be revealed. “What if [a conversation] does reveal something about your medical or mental history? This could keep you from getting insurance or even a job.”

When private chats aren’t actually private

It’s common for tech companies, especially as they go public, to look for ways to make money through advertising. Twitter, which entered the stock market in 2013, recently bought its longtime data partner Gnip with an eye toward turning its user data into revenue. Since Zuckerberg took the company public in 2012, Facebook has been similarly ramping up its advertising efforts — running into privacy controversies along the way, including using users’ profile pictures without their permission to make ads more relatable.

But the social network also has been strategically positioning itself to join the ranks of Google, which already reads your personal communications. In its privacy agreement, Google reserves the right to sift through users’ data as long as they’re logged in, including everything a person searches with Google.com, what videos they watch on YouTube, where users travel using Google Maps, and private chats and emails.

With widespread data collection and mining Google has run into legal trouble. The company has been dealing with several lawsuits regarding its email scanning, one of which accuses Google of reading children’s messages and tracking their Internet use through its education apps. Google is also waiting for the U.S. Supreme Court to decide whether collecting data through private, unencrypted Wi-Fi networks for Google Maps is legal.

Earlier this month, Facebook split its messaging function off into a standalone app. People used to send messages to their friends within Facebook’s native app. The company has tweaked its chat function over the years, making it easier to navigate with features — like floating profile pictures to indicate pending messages on the home screen — that made new messages and conversations more prominent in the mobile app. But making Messenger its own app, which has a built-in camera for photo sharing and video messaging, helps Facebook better keep track of the data in those chats.

Facebook’s recent purchases — namely Instagram and WhatsApp — further exemplify the company’s commitment to personal messaging. Instagram, which Facebook bought just before it went public, added direct messaging to its app late last year, allowing users to privately trade photos, and adding to a wealth of data on every user. WhatsApp lets users send SMS messages practically for free to anyone who has the mobile app. The app isn’t very popular in the United States but has a half billion strong user base in Europe, India, Latin America and Africa, where Facebook is looking to expand. WhatsApp is expected to soon hit a billion users, making it a ripe source for digging into — not necessarily what people are saying — but what those millions of texts reveal about their habits and desires.

Facebook’s purchase of WhatsApp for $19 billion shows not only how serious the company is about private chats, but how much they’re worth.

The tipping point in the privacy debate

The reality is that it has become nearly impossible to keep your personal data from Internet companies. Google, for example, already collects millions of pieces of user data, rivaled only by Facebook, which houses a complete network of friends, coworkers and family and their musings through statuses, link shares and picture uploads.

“We’re entering a social experiment where so many companies know so much about us and we’re in the dark,” Chaintreau said. People feel a familiarity with companies like Facebook that they use every day. “It’s almost as if they’re your friend.” But without being more transparent about what they’re doing with consumers’ data, that could change.

It’s a tradeoff: “If you want this convenient way to connect with 7 billion people, you have to give us your data,” Durvasula said. And people will generally go along with it: “A lot of people will give up a little bit of their privacy for the convenience, which can sometimes be helpful like Amazon’s ‘People who bought this also bought that’ feature,” Rutledge said. So the debate around privacy won’t be whether or not companies should be collecting such personal information, but what data customers let them collect.

Regardless of whether users see it as a big deal, Facebook’s private snooping may just push the privacy debate to the tipping point. Some people may say, “I don’t care that Facebook knows I like Chiquita bananas and Mercedes Benzes,” Durvasula said, or respond with more alarm, as Rutledge pointed out: “‘Oh my gosh they’re listening to my conversations with my husband!’”

“Most private conversations are about what you see in public anyway, you just feel they’re more appropriate for a limited audience,” Rutledge went on. But the bottom line is that having personal information in cyberspace slowly erodes true privacy, in part because companies like Facebook turn around and make money off it, said Durvasula, who advocates for not using Facebook.

[read full post here]

How Washington and its Allies Use Social Media to Topple Governments & Manipulate Public Opinion


On April 2nd the Associated Press released a report exposing how the U.S. government recently attempted to topple the Cuban government yet again. This time the plot hinged on the creation of a communications network called “ZunZuneo” which was essentially a primitive version of Twitter. The plan, which was cooked up by the US Agency for International Development (USAID) and the U.S. State Department, was to build up a large following of users and then push them towards revolt. The network was built using shell companies and financed through a foreign bank to hide their connection to Washington. The Obama administration defended the program saying that it “had disclosed the initiative to Congress”.  As shocking and absurd as these revelations may be to the general public, the truth of the matter is that this is just the tip of the iceberg. The U.S. government and its allies have been using the internet as a covert weapon for some time now. Much of the evidence of these activities got mainstream coverage, but the corporate media is very careful not to refer to that evidence in the context of current events. So let’s connect a few dots here….

Twitter Use Linked To Infidelity And Divorce, Study Finds

Phys | April 7th 2014

Twitter and other social networking services have revolutionized the way people create and maintain relationships. However, new research shows that Twitter use could actually be damaging to users’ romantic relationships. Russell Clayton, a doctoral student in the University of Missouri School of Journalism, found that active Twitter users are far more likely to experience Twitter-related conflict with their romantic partners. Clayton’s results showed that Twitter-related conflict then leads to negative relationship outcomes, including emotional and physical cheating, breakup and divorce.

In his study, Clayton surveyed 581 Twitter users of all ages. Clayton asked participants questions about their Twitter use, such as how often they log in to Twitter, tweet, scroll the Twitter news feed, send direct messages to others, and reply to followers. Clayton also asked how much, if any, conflict arose between participants and their current or former partners as a result of Twitter use. For example, Clayton asked: “How often do you have an argument with your current or former partner because of too much Twitter use?” Clayton found that the more often a respondent reported being active on Twitter, the more likely they were to experience Twitter-related conflict with their partner, which then significantly predicted negative outcomes such as cheating, breakup and divorce.

“The aim of this study was to examine whether the findings of Clayton’s recent study, which concluded that Facebook use predicted Facebook-related conflict, which then led to breakup and divorce, were consistent with another social networking platform: Twitter.”

In his previous research on Facebook, Clayton found that Facebook-related conflict and negative relationship outcomes were greater among couples in newer relationships of 36 months or less. In his new research regarding Twitter, Clayton found these outcomes occurred regardless of duration of relationship.

“I found it interesting that active Twitter users experienced Twitter-related conflict and negative relationship outcomes regardless of length of romantic relationship,” Clayton said. “Couples who reported being in relatively new relationships experienced the same amount of conflict as those in longer relationships.”

[read full post here]