Facebook has been in hot water over its data monetization model almost from the firm’s beginning. The Cambridge Analytica, election meddling and fake news scandals have turned up the heat.
Facebook’s problems aren’t limited to the public and government backlash that spans several countries; the firm faces potentially devastating legal threats too. On the surface, it appears to be a clear-cut issue: Social media and other tech companies must be reined in.
Certainly, the EU thinks so, as evidenced by its new General Data Protection Regulation. However, despite the horrendous damage done to date, the outlines of the social media problem aren’t quite clear, and neither is the fix.
When Data Is Your Chief Asset
Chief among the worries arising from the long line of recent scandals is election-fixing, or at least election meddling, in several democracies. Few citizens of those countries would consider it a good thing for a foreign power to use social media to sway their elections.
Several countries, including the U.S., France and Germany, have determined that Russia-backed election meddling is a continuing threat, and that social media is at the heart of its preferred tactics.
One would think that the need to curb or end attempts by a nation state or other outside entity, such as UK-based Cambridge Analytica, to unduly manipulate election outcomes would be indisputable. Certainly, Facebook sees the writing on the wall.
CEO Mark Zuckerberg has announced several measures to address heightened anxiety over the company’s role. Facebook publicly apologized for the Cambridge Analytica data-sharing scandal and promised to notify users who were among the 87 million people whose data was “improperly shared” with that firm.
Facebook also promised to increase transparency and improve vetting of its political advertising and news providers.
Is that enough?
What’s at Stake
“Facebook and other technology firms are thus far proposing to fix the problem via self-regulation only — by setting up rules that they themselves would promise to follow, rather than being held accountable by some sort of legislative authority that would involve users having some sort of legal recourse,” said Jessica Baldwin-Philippi, assistant professor of communications and media studies at Fordham University.
“The problem with this is that, as we’ve seen, there is little accountability,” she told the E-Commerce Times.
In fact, Facebook did not act on the issues of election meddling and fake news until there was a massive public outcry, even though it was aware of the problems much earlier. The same is true of the illicit data sharing with third parties such as Cambridge Analytica.
Data monetization is Facebook’s business model. Facebook and some other tech firms exist solely to gather and sell everyone’s data, exposing users’ lives in increasingly granular detail.
Facebook works hard to pull in more intimate details about your life than what you voluntarily post on social media or release as exhaust while searching the Web. Among the company’s most troubling recent data mining efforts: its child predator survey, and a secret effort to gather patient data from hospitals and other medical groups to add to what it knows about users.
Indeed, Facebook appears to respect no boundaries in its search to own an increasingly large hoard of personal data.
Facebook’s Usage Agreement “is 70 pages long,” noted Ronald Jones, a cybersecurity faculty member at Harrisburg University of Science and Technology.
The privacy and usage agreement for Masquerade, a Facebook-owned app, specifies that it collects, mines and sells Facebook content, such as images of faces, he also pointed out.
“The Facebook agreements indemnify Facebook actions in selling/delivering/providing user related information to Cambridge Analytica, so their actions were legal. No US laws appear to be violated,” Jones told the E-Commerce Times. “Are tougher regulations needed for social networking? What about the first amendment? Also, who decides what is or is not acceptable for the social networking space?”
Freedom of speech means that it may be very difficult to curb the speech spewed by hostile nation states, or to stem the tide of fake news proliferating on the network, he added — and he isn’t the only one who thinks so.
“What is harmful content? Harmful in what way? To whom? And why? And what is fake news?” asked Richard Santalesa, founder of the SmartEdgeLaw Group.
“News has been faked, or slanted, since the first stylus was put to a clay tablet,” he told the E-Commerce Times. “The Constitution and First Amendment don’t contain a right not to be offended, and there’s no such thing as a hate speech exemption to speech that’s otherwise protected by the First Amendment.”
Thus, in the minds of many American patriots, regulating tech firms is a tough and perhaps unforgivable thing to do. Yet the traditional American claim that market forces will police bad behavior doesn’t hold up either.
The Lowly Position of the User
Take, for example, Facebook’s effort to gather patient data: The market had no knowledge of it until investigative reporters exposed it. Given that traditional news media outlets have been derided as fake news, while actual fake news has been held up as truth by others, how is the market to learn of such misdeeds or know whether a response is needed?
“What every person must understand is Facebook is not about people other than as its currency,” remarked Janice Taylor, CEO of Mazu.
“You, me, our children are tokens — data points that reinforce the money printing machine,” she told the E-Commerce Times.
“If we go away, Facebook loses its entire business,” Taylor continued. “Are Mark and Sheryl [Sandberg] really going to shut down the money printing machine? They may grease it, disguise it better, lie some more — but at the core root of Facebook/Instagram is [the desire] to print money for themselves and their shareholders.”
Even if Facebook has seen the light and truly sets out to self-regulate to an appreciable degree, there is nothing to hold it on that course over time.
“EU-style rules about data privacy would be a fine step,” suggested Fordham’s Baldwin-Philippi, “but again, Facebook could always change that policy in the future — as it has many times before. Relying on technology firms to regulate themselves strips users of recourse if and when something goes wrong.”
Actual laws spelling out data ownership could go a long way in solving this problem for users — but that might mean the end of Facebook and other social media companies, since their business model centers on their ownership of users’ personal information.
“In the U.S., the people do not own their personal information, while in the EU the people have undisputed ownership of the personal data,” explained Harrisburg University’s Jones.
While Americans, and American democracy, presumably would be safer with protections in place, many may not want that protection.
“People think that if I am not on Facebook I can’t build my business,” noted Mazu’s Taylor.
“What about my family memories? My calendar of events?” they might worry.
“We as people need to understand that Facebook was never about you or I or connecting people — it was about money and control,” Taylor emphasized. “Why do we think they care more about us now that they are getting caught? Does a drug dealer suddenly care about all the drug users once he is arrested? What if the drug dealer just makes better cocaine. Should we trust him then?”
Facebook has been scrambling to win back the public’s trust since the Russia and Cambridge Analytica scandals broke. However, it’s not clear what exactly hostile nation states have been up to on social media. For example, has Russia merely been making an opportunistic play on Facebook and capitalizing on users’ gullibility? Or have we, the American public, been targeted as the virtual victims in a cyberwar?
There are casualties and victims of election meddling. For example, the election of an anti-immigrant candidate in a conflict zone could result in refugees being turned away and left to die. A war hawk might spin up new wars or scale up existing ones, resulting in casualties in all affected countries.
Conversely, meddling to get someone elected who would lift trade sanctions or otherwise favor the interfering country could reduce the number of people adversely affected by shortages. All of this is to say that yes, elections have consequences. They affect people in the real world, and often all around the world.
That being the case, might Russia’s meddling with the U.S. presidential and other elections constitute acts of war? Have we become pawns in a cyberwar, or even casualties of a sort?
“Russian meddling in U.S. elections is a serious problem, and their fake news on Facebook may be illegal if it is intended to sway elections,” said Kentaro Toyama, W. K. Kellogg Associate Professor at the University of Michigan School of Information.
“However, I would hesitate to call it ‘cyberwar.’ If political messaging to influence another country’s population is cyberwar, then America’s ‘Voice of America’ radio programming overseas would be cyberwar,” he told the E-Commerce Times.
It appears that the U.S. government agrees that cyberattacks and social media manipulations do not in themselves constitute a state of war. The Department of Defense Law of War Manual defines the proper labeling of various aggressions and delivers guidance to the U.S. Armed Forces on such matters. This extensive tome addresses digital attacks with physical impacts, such as an attack on a dam or a power grid, but it doesn’t consider theft of personal data or defacing websites as an act of war.
Others agree there must be a physical element with the cyberattack to qualify as cyberwar. Some argue that Russia’s attacks on voting machines could be that qualifying element.
“To fully understand the risk to voting, one needs to understand the full lifecycle or cyber key terrain of elections,” said Laura Lee, EVP for rapid prototyping at Circadence.
“In the case of nation state election voting, key terrain includes the vendor who manufactures the system, the voting registration database and software system, the end-point voting machine, and the back-end infrastructure that tallies and hopefully audits the system.”
However, attacks on voting machines may not qualify as physical attacks either.
“A cyberwar is determined when a nation-state carries out an offensive and aggressive attack on another nation; however, it is typically used to ensure that traditional weapons are more successful by taking out any defensive or intelligence capabilities,” explained Joseph Carson, chief security scientist at Thycotic.
“Cyber typically is used to weaken the target before carrying out other attacks,” he told the E-Commerce Times.
“We are currently in a cyberoffensive, and cyberweapons are being used; however, we are not quite yet in a full all out cyberwar,” Carson said.
“Cyber is typically only one element of war,” he pointed out. When cyberweapons are “combined with traditional weapons, then we can confidently say yes — we are in a cyberwar.”
As with most forms of nation-state sponsored aggression, pinning the act on the right culprit is challenging; if you can’t answer the whodunit question, it’s hard to assign blame.
“Misdirection is often used in cyberattacks to create conflict or wrong conclusions, so the victim of the cybercrime is continuously looking in the wrong direction and wasting resources chasing after another victim,” Carson explained.
“Attribution is actually one of the most difficult parts of cyberattacks. Without concrete and transparent evidence, we can only go on trust that the government has conducted effective digital forensics, and without doubt attribute those cyberattacks back to Russian government,” he added.
Given the well-documented mistrust between the Trump administration and the U.S. Intelligence Community (IC), it is unclear whether the administration would accept or act upon such digital forensics. Certainly, it has been loath to do so thus far.
However, the IC is unlikely to base attribution on digital forensics alone. It has other methods of deducing who is behind what action. Thus, its attributions are likely to be more certain than those of a private sector security research firm that does not benefit from government resources.
Unfortunately, the IC doesn’t like to reveal its methods in court as part of discovery, which is why it is rare for formal criminal charges to be leveled against nation-state sponsored culprits. Special Counsel Robert Mueller’s recent indictment of 13 Russians is one exception that proves the rule, but it is not the only one. From time to time, the U.S. government has placed nation-state sponsored cyberattackers on the FBI’s Most Wanted List.
In other words, nation-state sponsored cyberattacks and social media manipulations have been treated as crimes rather than as acts of war.
“Russia’s delivery of fake news is opportunistic. It applies Russian genius for propaganda to Western democracies’ open, data-enhanced Internet delivery channels,” said the University of Michigan’s Toyama.
Several countries have attributed recent election meddling to Russia, and have identified social media as one of Russian President Vladimir Putin’s biggest propaganda tools.
“Technology’s primary impact is to amplify underlying human forces. Facebook amplified the effect of Russian meddling, whose underlying causes are political,” said Toyama.
“There would be no Russian fake news scandal unless Russia were willing to fund the creation, dissemination, and targeted advertising of fake news abroad,” he said.
While several countries have agreed that social media was at the crux of recent Russian attacks on elections, their strategies for dealing with it have differed. The European Union has launched voter education programs and adopted the General Data Protection Regulation (GDPR) to help protect elections there. However, the U.S. has made no comparable effort, and it has no national data privacy protections at all.
Facebook has been working to appease the American public in the hope of avoiding stiff regulations. It recently suspended hundreds of apps in an effort to prevent another Cambridge Analytica-style abuse of user data.
However, those steps appear to have had little effect on restoring user trust.
Trust in Facebook has dropped by 66 percent since the Cambridge Analytica scandal, and the downward trend has continued, a recent Ponemon Institute survey found.
It appears that the U.S. may need to follow the EU’s lead on voter education and regulations if elections are to be protected from malevolent foreign influences.
“Regulations at least as strict as those that apply to broadcast media with respect to elections should apply to the Internet and social media,” argued Toyama. “To begin with, there must be ‘know your customer’ rules for Internet advertising platforms — it’s not enough to accept ads from anyone with the money to pay for them.”