Ford has filed a patent for what the car manufacturer calls “in-vehicle advertising.” The idea is to eavesdrop on what’s being said in the vehicle, correlate that with location and other data, and serve “bespoke” ads.
The patent reveals systems and methods that would achieve the goal of targeting ads to car occupants, using their conversations.
Cars have long since become yet another common place where people’s privacy is being slowly eroded – but some observers are now wondering if the latest idea out of Ford may be “crossing the line.”
Whether Ford’s patent, which takes intrusive advertising practices to another level, would face any backlash from buyers of its vehicles remains to be seen; but even some advertising professionals are worried things may be getting out of hand at this point.
What makes this scenario significantly different from users acquiescing to having their personal data hoovered up by large online platforms – and enduring “targeted” (though often irrelevant) ads in order to use those platforms “for free” – is that car buyers have already paid for their vehicle.
But now they are expected to subject themselves to a new level of surveillance, and keep paying – and with some very dear currency. Namely, extensive information about their communications, locations, direction – and intentions.
The patent indicates where the automotive industry would like to be heading – tapping into the vast amounts of money swirling around the murky ad industry while disregarding their customers’ basic interests.
But Ford made sure to point out that applying for a patent doesn’t mean the idea will be implemented, nor, according to a spokesperson, should this one be viewed as “an indication of our business or product plans.” (Shouldn’t it, though?)
Another point Ford tries to make is that this is also about building an intellectual property portfolio. But that just reaffirms suspicions that the car industry may indeed be moving in the radically dystopian direction outlined in the patent.
Just in case it does become a feature in Ford cars, here is the “vehicle information” the system would draw on: location, speed, drive mode, and user data such as route prediction and destination.
And the cherry on top: “Determining user preferences for advertisements from any one or more of audio signals within the vehicle and/or historical user data, selecting a number of the advertisements to present to the user during the trip, and providing the advertisements to the user during the trip through a human-machine interface (HMI) of the vehicle.”
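To make the patent language more concrete, here is a toy sketch in Python of the kind of pipeline it describes: infer ad preferences from in-cabin speech and trip data, decide how many ads to show, and push them to the car’s interface. Everything in it – the keyword table, the one-ad-per-15-minutes rule, the show_on_hmi helper – is hypothetical and not anything Ford has disclosed.

# Toy illustration only; not Ford's system. All names and rules are made up.
KEYWORD_ADS = {
    "coffee": "Coffee shop 2 miles ahead",
    "hungry": "Drive-through open until 11pm",
    "tires": "Tire service near your destination",
}

def pick_ads(transcript_words, trip_minutes):
    # "User preferences" inferred from words overheard in the cabin.
    prefs = [ad for word, ad in KEYWORD_ADS.items() if word in transcript_words]
    # "Selecting a number of the advertisements to present during the trip."
    n = max(1, trip_minutes // 15)
    return prefs[:n]

def show_on_hmi(ad):
    # Stand-in for the vehicle's human-machine interface (screen or audio).
    print(f"[HMI] {ad}")

for ad in pick_ads({"i", "am", "hungry", "and", "want", "coffee"}, trip_minutes=30):
    show_on_hmi(ad)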
Whitney Webb says she does not trust Elon Musk as he is a military and intelligence contractor for the US government and built the satellite spy network. Jimmy Dore pointed out that he “bends the knee for Israel.” Webb explained that Musk bought Twitter, not to promote free speech, but to collect user data. His goal has been to verify all humans and link government-issued ID to Twitter accounts.
She said that Peter Thiel’s Palantir decides who goes on the domestic terror watchlist for the CIA based on individuals’ internet activity. In the second video, Webb linked all of the heads of the major social media platforms with Jeffrey Epstein.
Last week, Reddit user Brazedowl received a curious notification on his phone: Google was telling him that a smoke detector in his home had gone off. Brazedowl, a teacher from North Carolina who goes by Drew in real life, knew about the smoke alarm — he was at home himself and had just fried some sausages in his kitchen. But up until that moment, he had no idea that his smart speaker was able to detect such events. “Google just made my dumb smoke detectors smart,” he wrote on Reddit. “Pretty rad.”
A Google spokesperson told Protocol that the feature was accidentally enabled for some users through a recent software update and has since been rolled back. But in light of Monday’s news that Google invested $450 million — acquiring a 6.6% stake — in home security provider ADT, it may be a sign of things to come for Google, as it hints at the company’s secret home security superpower: millions of smart speakers already in people’s homes.
Once the deal closes, ADT’s more than 20,000 installers will also sell Google-made smart displays, security cameras and other hardware, and ADT will more closely integrate Google technology into its own home security offerings. “The goal is to give customers fewer false alarms, more ways to receive alarm events, and better detection of potential incidents inside and around the home,” Google Nest VP and GM Rishi Chandra said in a blog post.
Brazedowl wasn’t the only Google smart speaker user receiving a possible preview of this kind of incident detection in recent days. Other Reddit users reported getting security alerts after breaking glassware, as well as some false alarms triggered by sounds like popped bubble wrap and high-frequency noises that could be confused with a smoke alarm.
When Reddit user Brazedowl fried some sausages last week, accidentally setting off a smoke alarm, his Google Home smart speaker sent alerts to his phone. (Screenshot: Reddit)

Google announced support for the detection of “critical sounds” for paying subscribers of its Nest Aware home security subscription service in May. “Your Nest speakers and displays will notify you if a critical sound is detected, like a smoke alarm or glass breaking, by sending an alert to the Home app,” the company wrote in a blog post. “From there, you can hear an audio clip or listen live within the Home app to confirm the alarm.”
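For a sense of how this kind of “critical sound” detection can work, here is a rough Python sketch that runs a short audio clip through YAMNet, an openly published Google audio-event classifier whose label set includes classes such as smoke alarms and breaking glass. This is only an illustration of the general approach – it is not how Nest Aware is built – and the clip file name and alerting rule are made up.

# pip install tensorflow tensorflow-hub soundfile
# Illustration only: classify environmental sounds in a clip with YAMNet.
import csv
import numpy as np
import soundfile as sf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/yamnet/1")

# YAMNet ships a CSV mapping its 521 output classes to human-readable names.
with open(model.class_map_path().numpy()) as f:
    class_names = [row["display_name"] for row in csv.DictReader(f)]

# YAMNet expects mono float32 audio sampled at 16 kHz.
waveform, sample_rate = sf.read("living_room_clip.wav", dtype="float32")
scores, _, _ = model(waveform)
mean_scores = scores.numpy().mean(axis=0)

# Print the five most likely sound classes for the clip.
for i in np.argsort(mean_scores)[::-1][:5]:
    print(f"{class_names[i]}: {mean_scores[i]:.2f}")

# A product might raise an alert when a class like "Smoke detector, smoke alarm"
# stays above a confidence threshold for several consecutive frames.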
“A recent software update enabled these alerts on some of our speakers that didn’t have a subscription, but we’ve since rolled that back,” a Google spokesperson told Protocol last week. The spokesperson declined to comment on whether Google had any plans to bring the feature to users without subscriptions in the future. Google did announce Monday that ADT customers would get access to Nest Aware over time.
On the one hand, there is some potential for a privacy backlash. Google has long told users that its speakers only actively monitor ambient audio for utterances of the “Hey Google” wake phrase. Any use of far-field microphones for other purposes, especially for users who didn’t sign up for advanced monitoring, could result in some consumers rejecting the device category altogether.
On the other hand, the feature does demonstrate how powerful Google’s smart speakers can be in the context of home security. Google sold around 30 million smart speakers and displays in 2019 alone, market research company Strategy Analytics estimated earlier this year. All of these devices feature powerful far-field microphones capable of detecting not only voice commands but also environmental noises.
Some of this functionality is less obvious than the detection of blaring smoke alarms. Google’s smart displays can sense when someone walks up to them by emitting ultrasonic sounds and monitoring their reflections. Right now, this is being used to change the size of fonts on the display, but the same technology could conceivably also be used to detect possible intruders and other movements inside a home.
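To illustrate the general principle of ultrasonic presence sensing – and only the principle; this is not Google’s implementation, and the tone frequency, threshold and helper names are arbitrary – a rough Python sketch might play a near-ultrasonic tone, record the microphone at the same time, and watch for changes in the reflected energy.

# pip install sounddevice numpy
# Rough sketch of ultrasonic presence sensing; illustration only.
import numpy as np
import sounddevice as sd

FS = 48_000        # sample rate in Hz
TONE_HZ = 20_000   # near-ultrasonic tone, above most adults' hearing
DURATION = 0.5     # seconds per measurement window

t = np.arange(int(FS * DURATION)) / FS
tone = (0.2 * np.sin(2 * np.pi * TONE_HZ * t)).astype(np.float32)

def reflected_energy():
    # Play the tone and record simultaneously, then measure how much
    # energy comes back in a narrow band around the tone frequency.
    recording = sd.playrec(tone, samplerate=FS, channels=1)
    sd.wait()
    spectrum = np.abs(np.fft.rfft(recording[:, 0]))
    freqs = np.fft.rfftfreq(len(recording), 1 / FS)
    band = (freqs > TONE_HZ - 200) & (freqs < TONE_HZ + 200)
    return float(spectrum[band].sum())

baseline = reflected_energy()   # empty-room reference
while True:
    level = reflected_energy()
    # A body near the device changes how the tone reflects, shifting the
    # received energy away from the baseline.
    if abs(level - baseline) > 0.3 * baseline:
        print("movement detected near the device")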
Google isn’t the first company to rely on smart speakers for monitoring; rival Amazon also uses smart speakers to detect fire alarms and other sounds as part of its Ring Alarm security system.
But in a way, it is a very Google-ish approach to home security: The search giant has long made its own cameras and smoke detectors, with mixed success, and largely failed to make a mark when it introduced its very own home security system in 2017.
With its ADT partnership, Google now signals that it is happy to rely on others for the more mundane aspects of home security, including the huge workforce needed to install and troubleshoot window sensors and the like. Instead, Google is bringing to the table what it does best: advanced technology, including millions of cheap speakers with far-field microphones, ready and able to become smart home security sensors.
French Doctors Speak Out Against New Contact Tracing
Pam Barker | Director of TLB Europe Reloaded Project
A somewhat emotional video from a young practicing physician on Youtube, republished below, alerted us to this topic of medical snooping by the French government in order to allegedly stop / control / monitor the chain of infection (and us). Overall, this practice may become the next reason why we shouldn’t be visiting our family doctor or the local hospital starting from tomorrow, May 11, when our two-month(!) lockdown officially ends.
An MSM report of the French situation may be found here, but below are two actual doctors giving their feedback, one on video and one anonymously in writing. Readers may also be interested in this US article by Daisy Luther via Zerohedge on the rollout of contact tracing titled “Contact Tracer” And “Disease Investigator” Jobs Spring Up Across The Country.
So from tomorrow, the French family doctor (and the hospital doctor) will be the one to initiate the process: identify a patient supposedly with covid, ask for the names and contact numbers of the people the patient has been in contact with (both within and outside the immediate family), enter the information given into a centralized database, and do (unreliable) testing on the patient. At that point the non-medical staff of the French health insurance system will take over and send teams of people to test those contacts, hoping to find patient zero along the way. The initiating doctors themselves will get 55 euros instead of the regular fee of around 25 euros, plus 2 extra euros for each contact name (4 euros if a phone number is included).
The infection has likely been in France since at least October/November; confirmed cases were predicted to be going down around the time lockdowns were enforced in France (March 16) and the UK (March 23); Public Health England downgraded the severity of the disease on March 19. So is this all a case of a system and government justifying themselves to the public when, originally, they did absolutely nothing, telling us via the media that it was a Chinese problem? Likely the infection has been doing the rounds here for a while although it remains to be seen what kind of spike in cases will happen post-lockdown. Some government heads are expected to roll following the resumption of ‘normal’ life, so they must be anxious to be seen to be doing something. As well as finding a reason to implement Big Data surveillance systems on us.
The anonymous doctor below raises concerns about doctors being motivated to participate by a fee incentive; the reliability of the diagnostic tests (presumably the PCR test); the confidentiality of a patient’s medical data in the hands of administrative, i.e. non-medical, staff; the use and security of patient data entered into the system once it’s all over; and retaliation against patients by people named as contacts who may be subjected to confinement. And, overall, the ethical problem of doctors turning in their patients to a bigger, data-driven system outside the normal bounds of the doctor-patient relationship, as well as turning in to the government the names of people who are not even their patients.
The doctor in the very short video below raises some additional points:
Covid is a notifiable disease so reporting these cases in and of itself isn’t a problem. (ER: It is in the UK, too, but should it be? Should covid, with a death rate comparable to ‘flu and mild or no symptoms in the majority of the population, be put in the class of diseases like cholera and TB?)
Yet a government bill is in the pipeline to create a new system of health data in the context of this crisis, without our input. Patient health information cannot be kept for more than a year, but it is likely to become a permanent system.
The national platform of patient data will be kept on Microsoft servers; the data will be drawn from hospitals, pharmacies, and patient files.
Will this result in health insurance costs going up for people? Will banks refuse loans to people? Will employers not re-hire people because their health data is accessible?
The young guy’s overall point below is – I’m a doctor, not a cop.
******
Philippe Jandrok’s Blog, 7 May 2020
Message from a Doctor Who Wishes to Remain Anonymous
ATTENTION! … TO BE WIDELY DISTRIBUTED! NEW DRAMATIC DIRECTION IN THE ONGOING MADNESS!… The total compromise of the Sécu (the French social security system) in this so-called state of emergency, totally falsified and allowing all the most Orwellian excesses!
What I am reporting here is taken from a communication from the CGT union of social security funds following a meeting with the national director of the fund, Mr. Nicolas Revel, dated May 5, 2020.
It concerns the plan to mobilize the fund’s administrative employees (and not the fund’s medical personnel, who are trained in, and bound to protect, medical confidentiality!) to supposedly limit the spread of the virus after lockdown ends.
It consists of the creation of a “brigade” (sic!) in the form of a telephone platform of 6,500 people at the national level, which they cynically call the “Guardian Angel Brigade” (BAD – really, what a sense of humour!), supposed to carry out large-scale epidemic detection of the famous “contact cases” identified by family doctors when they declare their Covid patients.
And, in keeping with the increasingly invasive anglicisms, it is called “contact tracing”!
These agents will be employed 7 days a week, from 8am to 7pm, their collective agreement having been frozen, and with compulsory overtime that is nonetheless not eligible for the overtime scheme in question.
I remind us all once again (we can never hammer it home enough) that the covid tests have no validity, with a positive predictive value between 30 and 50%, which means that one can be a carrier of the virus and test negative, just as one can be declared positive without ever having contracted the virus!! It is, however, on this basis that the policing will be carried out, with an insane stigmatization of anyone and everyone!
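(To see how a positive predictive value in that range can arise – the numbers here are purely illustrative and are not taken from the doctor’s letter: suppose a test with 70% sensitivity and 95% specificity is used in a population where 5% of those tested are actually infected. Out of 1,000 people tested, 50 are infected and 35 of them test positive, while roughly 48 of the 950 uninfected also test positive. Only 35 of the roughly 83 positive results are genuine, a positive predictive value of about 42%.)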
But in high places, it causes no remorse!
The “contact tracing,” in fact, is done the old-fashioned way: the family doctor diagnoses an infected patient. He tests him with a virological test, takes care of him and organizes his confinement as well as that of his close entourage. 3,000 to 5,000 cases per day are expected starting May 11th, according to Santé Publique France.
The doctor registers his patient in Ameli Pro, with the patient’s consent, within 24 hours. (ER: Ameli.fr is the website portal for all health insurers in France.)
He receives 55 €, plus 2 € for each individual entered in Ameli Pro with whom the patient has been in contact outside the rules of physical distancing and protection (4 € if a phone number is included).
(ER: From another report, the 55 euros includes the normal 25 euro family doctor consultation fee.)
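(To make the arithmetic concrete – purely as an illustration, with made-up numbers: a doctor who declares one covid patient and enters six contacts, two without and four with phone numbers, would receive 55 € + 2 × 2 € + 4 × 4 € = 75 €, roughly three times the ordinary consultation fee.)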
Mr Revel considers that the professional secrecy to which the employees of the Sécu are bound is sufficient to guarantee data protection. No details are given on what will happen to the data collected on Ameli Pro after the crisis is over.
On the other hand, it is confirmed that its twin, the SITEP tool (operated by DGS/AP-HP/Santé Publique France), will – together with the results of serological tests carried out in laboratories – make it possible to carry out epidemiological studies under cover of anonymity. As the CNIL has not yet given its opinion on the nature of the files created, that opinion will come too late, after the battle. Too bad for health data, as for labour law – it’s an emergency, after all!
Behind Ameli Pro, the agents will take over from the doctor to contact the “contact cases” by telephone. Their mission will be to convince everyone to get masks from the pharmacy, take a laboratory test and go into isolation while waiting for the results, backed up by a work stoppage (sick leave) if needed.
In addition to the tracing work and the contact with the “contact cases”, these employees will also issue the work stoppages.
Unanticipated risks to “patients zero”:
Patients (“patient zero”) who are the source of the trace will have the right to have their identity withheld from the individuals who will be identified and quarantined – but only if they ask their doctor to check the “does not wish to be identified” box in Ameli Pro. There is a risk there.
While being named by someone they know (ER: an ‘infected’ person?) may make it easier to quarantine a third party, it could also lead to retaliatory measures. There are environments where “snitching” is a serious thing. You have to be aware of this reality when you’re doing population tracing.
I’ll stop here. It’s edifying enough for anyone who still has their common sense. Not to mention that if medical ethics still had any meaning that was not misused, all doctors would have to resist and oppose the implementation of such a nightmare. But hey, most of them have seen their incomes drastically reduced during this epidemic! Yes, they have! That much is well attested! And so… a big increase in income after a famine can be tempting!!…
Are you one of over two billion people that use Facebook, the world’s largest social media site?
Facebook has become so deeply ingrained in people’s lives that it has now become the norm to give it access to personal data without much thought, as if this is but a small price to pay for Facebook’s “free” service. But nothing could be further from the truth.
These traceable and sellable data now give Facebook the power to manipulate what we do, how we feel, what we buy and what we believe. The consequences of giving Facebook this much power are only now becoming apparent, with mounting lawsuits over its security breaches and lousy privacy settings.
Even CrossFit, the well-established branded fitness regimen, has decided to stop supporting Facebook and its associated services, putting all its activities on Facebook and Instagram on hold starting May 22, 2019. The decision came in the wake of Facebook’s deletion of the Banting7DayMealPlan user group, which was done without warning or explanation. The group has more than 1.65 million members who post testimonials about the effectiveness of a low-carb, high-fat diet.
Although the group was later reinstated, Facebook’s action still shows how it acts in the interest of the food and beverage industry. You see, big advertisers on Facebook, like Coca-Cola, don’t want you to have access to this information, and Facebook is more than happy to ban anyone challenging the industrial food system. By doing this, it potentially contributes to the global chronic disease crisis.
Would you continue trusting a company that thinks nothing of violating your right to privacy?
1. Facebook’s Primary ‘Product’ Is You
If you think Facebook’s product is the very platform that users interact with, you’re wrong. You are actually Facebook’s primary product. The site makes money off you by meticulously tracking your hobbies, habits and preferences through your “likes,” posts, comments, private messages, friends list, login locations and more. It sells these data, along with your personal information, to whoever wants access to them, potentially facilitating everything from targeted advertising to targeted fraud — this is its entire profit model.
Did you know that it can even access your computer or smartphone’s microphone without your knowledge? So if you’re suddenly receiving ads for products or services that you just spoke out loud about, don’t be surprised — chances are one or more apps linked to your microphone have been eavesdropping on you. These privacy intrusions can continue even after you’ve closed your Facebook account.
Companies can also collect information about the websites you’re visiting or the keywords you’re searching for outside of Facebook’s platform without your permission, and then sell these data to Facebook so it knows which ads to show you. This makes Facebook the most infamous advertising tool ever created, and to increase revenue, it has to continue spying on you.
During Facebook’s early days, its founder, Mark Zuckerberg, assured users in an interview that no information would be sold or shared with anyone the user had not specifically given permission to. However, the site’s blatant disregard for its users’ privacy proves otherwise. In fact, Facebook has been repeatedly caught mishandling user data and lying about its data harvesting, resulting in multiple legal problems.
The origin of Facebook is also far from altruistic, even though it’s said to have been created “to make the world more open and connected” and to “give people the power to build community.” A forerunner to Facebook was a site called FaceMash, which was created to rate photos of women — photos that were obtained and used without permission. Some of the women were even compared to farm animals! This speaks volumes about Zuckerberg’s disrespect for privacy. Facebook was basically founded on a misogynistic hate group, and it should therefore ban itself.
2. Facebook Faces Investigation for Its Lax Security and Privacy Practices
Facebook is currently facing a number of lawsuits regarding its controversial data-sharing practices and poor security measures. Back in 2010, the U.S. Federal Trade Commission (FTC) revealed that Facebook was sharing user data with third-party software developers without the users’ consent, expressing concerns about the potential misuse of personal information, since Facebook did not track how third parties used it.
While Facebook agreed by consent order to “identify risk to personal privacy” and eliminate those risks, it did not actually address its security lapses. Had it done so, it might have been able to prevent the Cambridge Analytica scandal, the main focus of the FTC’s first criminal probe. That issue involves Facebook’s deal with a British political consulting firm, which was given access to data on around 87 million users that was then used to influence public opinion in the U.S. presidential election.
Another criminal investigation into Facebook’s data-sharing practices is underway. This time, it revolves around Facebook’s partnerships with tech companies and device makers, which allowed them to override users’ privacy settings and gave them broad access to user information.
Amid federal criminal investigations, Zuckerberg announced the company’s latest plan to encrypt messages, so only the sender and the receiver will supposedly be able to decipher what they say. This is ironic, considering it was recently discovered that Facebook stored millions of user passwords in readable plaintext format in its internal platform, potentially compromising the security of millions of its users.
Zuckerberg has repeatedly demonstrated a complete lack of integrity when it comes to fulfilling his promises of privacy. In fact, in a 2010 talk given at the Crunchies awards, he stated that “privacy is no longer a social norm,” implying that using social media automatically strips you of the right to privacy – which is presumably why Facebook does not respect it.
3. Facebook Is a Monopoly
Facebook’s plan to integrate Instagram, Messenger and WhatsApp would turn it into a global super-monopoly. This merger has been criticized by tech experts, as it robs users of their ability to choose between messaging services, leaving them virtually no choice but to submit to Facebook’s invasive privacy settings. This also gives Facebook unprecedented data mining capabilities.
Germany’s antitrust regulator, the Bundeskartellamt, is the first to prohibit Facebook’s unrestricted data mining, and would ban Facebook’s services in Germany if it integrates the three messaging platforms. If other countries follow suit, the merger would fall through, as it probably should.
One of the outspoken proponents of breaking up monopolies like Facebook, Google and Amazon is U.S. presidential candidate Sen. Elizabeth Warren, D-Mass. Her campaign to break up Facebook was censored by the site, which took down three of her ads with a message saying the ads went “against Facebook’s advertising policies.”
After Warren took to Twitter to comment how the censorship simply proves why her proposal was necessary, Facebook then reinstated her ads with the lame excuse that they were only removed because they included Facebook’s logo, which violates the site’s advertising policy.
I’ve Decided — I Am Leaving Facebook
At present, I have nearly 1.8 million Facebook followers, and I am grateful for the support. But a while back I expressed my concern that perhaps I was doing more harm than good by being a part of Facebook, as I could be contributing to its invasive data mining – an idea that never sat well with me.
For those reasons, I decided that leaving the platform and going back to depending on email is the responsible way forward. If you haven’t subscribed to my newsletter yet, I urge you, your family and your friends to sign up now. I polled my audience and they agreed with my decision to leave.
Kids born in 2019 will be the most tracked humans in history. It’s predicted that by the time they turn eighteen, 70,000 posts about them will be in the internet ether. How and what you post about your child is a personal choice, but trusting that tech companies aren’t building dossiers on our children, starting with that first birth announcement, is a modern-day digital civil right we need to demand. As a mother myself, I want my children’s privacy to be a priority for tech makers.
I used to feel pretty lonely in that endeavor but over the last 12 months, I’ve noticed a trend: more and more people are talking about privacy. They’re calling out the companies that don’t take people’s online privacy seriously enough. They’re sharing articles detailing cover-ups and breaches. They’ve told me they want more privacy online and yet, feel trapped by the Terms of Service of the big platforms they need to use.
I think of this frustration as ‘digital wokeness’. And it’s the one good thing that came out of the Cambridge Analytica scandal. Though we’ve heard the reporting numerous times, let’s recall that from one personality quiz taken by 270,000 people, 87 million Facebook accounts were accessed. Tens of millions of people (maybe you) did not knowingly give permission for their information to be shared or manipulated by political operatives with questionable ethics.
We still don’t know exactly how this data collection and subsequent microtargeting of political content influenced our democratic process. But Cambridge Analytica is just one example. Every day we hear about another undisclosed data breach. Private information is being collected, sometimes sold, and given away without our knowledge or consent. CEOs sit before Congress saying they will “do better” while stories continue to break about negligence and wrongdoing.
Just what exactly is happening?
Breaches are just a symptom of the problem. The fundamentals of the relationship between customers and these companies are broken. I recently took the helm of the podcast IRL: Online Life is Real Life and spoke to Shoshana Zuboff, author of The Age of Surveillance Capitalism who explained further how most tech companies have built their businesses on the data they collect by tracking their users’ behavior. “We all need to better grasp what the trade offs really are, because once you learn how to modify human behavior at scale, we’re talking about a kind of power now invested in these private companies,” she told me. I know. The situation is messed up and it makes you want to put your head in the sand and give up on digital privacy.
Please don’t do that. Fixing our online privacy problem requires both individual and collective action. Support organizations pressuring Congress and Silicon Valley to claw back our digital civil rights, and take some simple steps right now to protect your family and send a message to tech companies.
Yes, doing these things is annoying and tedious but it does matter:
Be more choosy about your technology. There’s no need to go “off the grid,” but choosing products and companies that respect you and your data – like the Firefox browser and DuckDuckGo search engine – sends an important message to big companies that largely prioritize their shareholders over their customers. These smaller, user-focused apps and services have put ethics at the heart of their businesses and deserve to be downloaded.
Become a privacy settings ninja. Most sites and apps have privacy settings you can access, but they tuck them away several tabs deep. In a user-centric world, the default settings would take your privacy preferences into account and make them easier to update. Right now, as you’ve likely experienced, finding and adjusting your privacy settings is just hard enough that most of us give up or get distracted midway through trying to figure out what to click where. Gird yourself and press on! Try a data detox and reset your privacy options, step-by-step.
Educate yourself on how your data is accessed. Easier said than done, I know. That’s why I created a five-part bootcamp. The Privacy Paradox Challenge (from my Note to Self days) is a week of mini-podcasts and personal challenges that can help you get insight into how vast the issue is and how to get your privacy game on point.
On a recent episode of IRL, I spoke to Ellen Silver, VP of Operations at Facebook regarding the ever louder conversation about Facebook’s ethics. She assured me that Facebook is working to be more transparent. A few weeks later her boss, Mark Zuckerberg, made his 2019 New Year’s Resolution to “host a series of public discussions about the future of technology in society.” But we’ve heard promises from Facebook and other tech companies before. Let’s make sure they talk about privacy. Let’s continue asking all of the tech companies harder questions. And let’s start using our spending power to support companies that take our data as seriously as we do. Those are the next steps in this growing conversation about privacy. And that is indeed progress.
Manoush Zomorodi is co-founder of Stable Genius Productions, a media company with a mission to help people navigate personal and global change. In addition to hosting Firefox’s IRL podcast, Manoush hosts Zig Zag, a podcast about changing the course of capitalism, journalism, and women’s lives. Investigating how technology is transforming humanity is Manoush’s passion and expertise. In 2017, she wrote a book, “Bored and Brilliant: How Spacing Out Can Unlock Your Most Creative Self” and gave a TED Talk about surviving information overload and the “Attention Economy.” She was named one of Fast Company’s 100 Most Creative People in Business in 2018.
There is a growing consciousness about the desire to keep one’s messages private. Some are concerned about hackers, or worry about foreign or domestic government surveillance, but most people just agree with the general principle that what you say in your chat conversations ought to stay between you and the people you chat with.
It’s not a pleasant idea to think that your messages could be archived for perpetuity on a large company’s server or analyzed by some algorithm. The quest for privacy has birthed a whole generation of apps that promise to give you exactly that. Services like Telegram and Signal have turned the phrase “end-to-end encryption” into a popular discussion. We’re here to help you figure out what this is all about and which apps to try.
A little background on encryption
Before we look at some specific apps, here’s a very brief explainer. Essentially, end-to-end encryption means that only the sender and the recipient can read the message. The message is encrypted on your phone, sent to the recipient, and then decrypted. This keeps prying eyes – the telecom providers, government agencies, and even the company that hosts the service itself – from being able to read your messages. It also means the service couldn’t hand over your messages even if it were subpoenaed by a government agency. And if a hacker broke into the messaging service’s servers, they couldn’t get at your conversations.
The desire for end-to-end (E2E) encryption isn’t just about those who don’t want the NSA to spy on them. In practice, it’s about a basic sense that messages should be private. With that in mind, be aware that just because something has the word “encrypted” attached doesn’t mean it is end-to-end encrypted. Some services only encrypt the message in transit: your conversations are stored encrypted on the messaging service’s servers, but since the service encrypted them, it can decrypt them.
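To make the distinction concrete, here is a minimal Python sketch of the end-to-end idea using the PyNaCl library’s public-key box. It is a toy illustration only – real messengers such as Signal layer much more on top of this (key verification, ratcheting, forward secrecy) – and the message text is of course made up.

# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a key pair; only the public halves are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"meet at noon")

# A server relaying `ciphertext` sees only random-looking bytes;
# it holds no key that can decrypt them.

# Bob decrypts with his private key and Alice's public key.
bob_box = Box(bob_key, alice_key.public_key)
assert bob_box.decrypt(ciphertext) == b"meet at noon"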
The services we’re looking at here all feature end-to-end encryption.
One of the most popular apps in this space is Telegram. It’s been a pretty hot app for a couple of years, which is like 20 years in app time.
The most painstaking part is you need to invite all of your contacts into your new, secret chat world through the app’s navigation menu. It’s the biggest problem with using over-the-top services, as it doesn’t have the ubiquity of SMS messaging.
Once you’ve done this, you can message people individually or create group channels for talking with an unlimited number of other users. The upside here is you can escape the limitations of MMS messaging that usually caps you at a particular number of people. Your group can even be public, giving you a mini social network without all the trolls that plague the likes of Facebook and Twitter.
The interface is a little barren, but Telegram makes the list for its robust privacy and offering native apps for iOS, Mac, Windows, the web, and of course Android.
Signal’s claim to fame is that it’s the preferred messaging application of Edward Snowden. It’s among the easiest to set up, as it automatically authenticates your number and can even be used as your default SMS app.
As with Telegram, you can create a group for private banter with an unlimited number of other users. Signal also makes phone calls, which I found to be very clear when testing it out in a couple of different cases.
Signal isn’t optimized for tablets, but the company says that’s on the product roadmap. The design is no-frills, with color variation for different contacts to help keep you from sending a chat to the wrong contact.
Another good option is Wire. It offers some fun messaging tricks, like the ability to doodle, share your location, send images, or record a video. The app also includes a chat bot, Anna, which offers somewhat useful answers to various questions about how to use the app.
You can optionally create an account with your phone number, which makes setup and account deletion easy. Wire is great for one-on-one chats if you would prefer conversations with someone be off the record. But it doesn’t have the same type of social or group features found with some of the other offerings here.
You also can’t forget about the uber-popular WhatsApp. Like the others on this list, it promises end-to-end encryption so your messages stay private. The biggest advantage is that the service, which is owned by Facebook, has over a billion users. There’s a very good chance you won’t have to convince all your friends and family to download the app.
That shouldn’t be discounted, as one of the pains of moving to a messaging service is convincing everybody to jump aboard. However, the Facebook connection could make some wary, especially since the social network recently announced it’d be using some account information, including phone numbers, from WhatsApp. If your goal is a high threshold of privacy, then it’s worth keeping an eye on.
If you want to see messages disappear before your eyes, then Dust (formerly Cyber Dust) is the way to go. The brainchild of Dallas Mavericks owner Mark Cuban, the messages can disappear in 24 hours or as soon as they’re read, based on your preferences.
The company spells out its encryption policy, and includes a couple other features to ease your mind like chats that don’t show usernames, so even if someone took a screenshot it couldn’t necessarily be attributed to you.
The best app for you is going to depend upon your needs. Secure messaging is a huge and growing area of consumer interest, but it’s worth the effort if staying secure is what you’re after.
Derek Walter is a freelance technology writer based in Northern California. He is the author of Learning MIT App Inventor, a hands-on guide to building your own Android apps.
If ever there was a red flag story about Amazon’s Alexa then this is it.
If you watch the “Alexa for Medical Care Advice” video posted below, you will hear Alexa asking Peggy to “tell me about the symptoms or problems that are troubling you the most.”
Divulging your health issues to a private corporation is extremely troubling as you will see.
Let’s start with the obvious concerns and talk about something you will not see in the video.
Like Peggy telling Alexa that it is none of Amazon’s business what her health concerns are, and that Alexa should stop listening to everything she says.
But many Americans do not have an issue with Alexa listening to their everyday conversations and have no problem asking Alexa health questions. Because, ‘they have nothing to hide’ — and therein lies the problem.
I challenge anyone to walk up to a stranger while recording the conversation and ask them about their health issues and see what happens. And if you really want to see what happens ask them about their kids’ health issues, etc. Would anyone like to guess what their response will be?
So if a stranger refuses to discuss their personal health issues with someone they do not know, why on earth would they trust Amazon?
Earlier this month, Amazon officially introduced “Alexa Healthcare Skills,” which transmit and receive personal healthcare information.
But Alexa Healthcare does much more than just transmit and receive healthcare information.
Alexa can now call pharmacies, spy on kids’ recovery and track your blood sugar.
Express Scripts (a leading Pharmacy Services Organization): Members can check the status of a home delivery prescription and can request Alexa notifications when their prescription orders are shipped.
Cigna Health Today (by Cigna, the global health service company): Eligible employees with one of Cigna’s large national accounts can now manage their health improvement goals and increase opportunities for earning personalized wellness incentives.
My Children’s Enhanced Recovery After Surgery (ERAS) (by Boston Children’s Hospital, a leading children’s hospital): Parents and caregivers of children in the ERAS program at Boston Children’s Hospital can provide their care teams updates on recovery progress and receive information regarding their post-op appointments.
Swedish Health Connect (by Providence St. Joseph Health, a healthcare system with 51 hospitals across 7 states and 829 clinics): Customers can find an urgent care center near them and schedule a same-day appointment.
Atrium Health (a healthcare system with more than 40 hospitals and 900 care locations throughout North and South Carolina and Georgia): Customers in North and South Carolina can find an urgent care location near them and schedule a same-day appointment.
Livongo (a leading consumer digital health company that creates new and different experiences for people with chronic conditions): Members can query their last blood sugar reading, blood sugar measurement trends, and receive insights and Health Nudges that are personalized to them.
A few reasons to be concerned about Amazon Healthcare:
1.) Amazon is a for-profit corporation that makes its money by putting listening devices inside people’s homes.
Bloomberg revealed that a global team of Amazon workers is listening to people’s conversations.
Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices.
An article at Medium warns: Amazon listens to everything.
Imagine your horror as you open the attachments and begin listening to the recordings: A discussion of what to have for dinner, two children arguing over a toy, a woman talking to her partner as she gets into the shower.
2.) Besides the obvious privacy concerns of putting Alexa in your home, Alexa can be easily hacked and turned into an eavesdropping device.
When the attack [succeeds], we can control Amazon Echo for eavesdropping and send the voice data through network to the attacker.
3.) Amazon’s Healthcare partners act as though listening to people’s conversations is an act of benevolence.
“We believe voice technology, like Alexa, can make it easy for people to stay on the right path by tracking the status of their mail order prescription,” said Mark Bini, Vice President of Innovation and Member Experience, Express Scripts.
Mark Bini got one thing right: helping “people stay on the right path” will mean an increase in corporate profits as they data mine everything said by you and your family.
Cigna’s claim that divulging your personal health issues to Alexa allows customers to receive “personalized wellness incentives for meeting their health goals” is just another way of saying corporate spying.
“Personalized wellness incentives” is corporate jargon for sending you advertising or increasing a person’s health insurance premiums if they do not meet their health goals.
Amazon did not become the most valuable company in the world by helping people. The only reason why Amazon and its partners care about your healthcare is so they can profit from it.