Posted by: David Harley | February 13, 2024

Disclaimer

You’ll probably see ads under and possibly incorporated into articles on this blog.

I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…

If lots of people suddenly start viewing this blog, I’ll find some money for this one. (I already pay for dharley.com and whealalice.com!) But I’m not expecting a sudden rush of visitors after all these years.

David Harley

Posted by: David Harley | March 1, 2024

One Time Passcode scams

Yesterday’s useful advice from a TV commentator on matters of IT security can be boiled down to this: if someone sends you a one-time passcode and tells you not to share it with anyone, then it’s a good idea not to share it with anyone.

This sounds pretty obvious, but the background to that advice is worth recapping. (I started to write this yesterday, but got sidetracked!)

Having set himself up to receive a scam call regarding an unusual payment from his account, the commentator (I forget his name, but he’s one of the regulars on Scam Interceptors) was told by a scammer that she was going to send him a verification code so that she would know she was sending the refund to the right person/account. What she’d actually done was send a request to his provider to reset his password. The provider had sent a passcode to his email address, with a stern admonition not to share it. If he had shared it, it would have enabled the scammer to take over his account by diving in, changing his password, and taking whatever action suited her.

In this case, the provider was Amazon, but responding to requests to reset a forgotten password with a one-time passcode (OTP) is a very common layer of defence, and this is all too common a way in which scammers may try to circumvent it.
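
To spell out the mechanics, here's a minimal sketch of a generic reset flow in Python – my own illustration, not Amazon's actual implementation. Note that anyone can trigger the reset; it's supplying the emailed code that proves (or fakes) ownership:

```python
# Minimal sketch of a generic OTP password-reset flow (illustrative only).
import secrets

pending_resets = {}  # account -> passcode sent to the owner's email

def request_password_reset(account: str) -> None:
    """Anyone - including a scammer - can trigger this step."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending_resets[account] = code
    print(f"[email to {account}] Your passcode is {code}. Do NOT share it.")

def confirm_reset(account: str, code: str, new_password: str) -> bool:
    """Whoever supplies the code gets to set the new password."""
    if pending_resets.pop(account, None) == code:
        print(f"Password for {account} changed.")
        return True
    return False

request_password_reset("victim")   # the scammer triggers this remotely
# If the victim reads the code out over the phone, the scammer simply
# calls confirm_reset("victim", that_code, "scammers-choice") and wins.
```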

The real message here is: don’t trust someone who rings you up out of the blue. If they’re genuine, they won’t object to your ringing them back on a number you know is correct – not, of course, a number that they obligingly give you in the course of the unsolicited call!

Posted by: David Harley | February 15, 2024

Watching the Furby Fly (an article resurrected)

[You’ll probably see advertisements inserted by WordPress into this article. I don’t choose them or approve them – in fact, I don’t normally see them – but they’re the price I pay for not being able to afford (at present) to pay for all my blogs.]

In January 2024, Snopes published an article, ’90s Throwback: When Furbys Caused National Security Fears, noting that they had been unable to access some of the documentation referred to in the article, so they categorized it as ‘research in progress’. I have no idea whether there’s a connection in the timing, but in February 2024 Bruce Schneier reported that NSA documents had been released following a Freedom of Information Act request: Documents about the NSA’s Banning of Furby Toys in the 1990s. This prompted me to dig out an article I wrote for Kevin Townsend’s IT Security UK blog site around 2012. That was a site where researchers were encouraged to post content independently of establishment vendors, and some highly respected researchers posted excellent content there. Unfortunately, the site was subject to repeated attacks and is no longer available, though Kevin himself is still writing quality content for other sites.

I’m not saying that the following is excellent comment (or indeed that I was highly respected), but I still rather like it. It’s very lightly edited. 

Somehow, the Furby, a furry toy vaguely resembling a Mogwai (the cuddly pre-Gremlin version in Joe Dante’s films, rather than the demons of Chinese tradition[1]) has always invited a certain amount of paranoia, fuelled by (or perhaps fuelling) the interest of the hacking community.

As well as a fairly dumb and long-gone discussion on the newsgroup alt.comp.virus about its potential as a virus vector, the details of which now escape me, it was the subject of a ban of sorts on airlines. More precisely, the Federal Aviation Administration recommended that ‘Furbys should not be on when the plane is below 10,000 feet’, and many airlines went as far as requiring passengers ‘to remove the batteries from their Furby dolls so that the electronic gizmos don’t interfere with navigational systems during takeoff and landing’. This was a result of the device’s being classified in the same group as electronic devices such as laptops, cellphones, electronic games, and personal music devices.

‘Personal stereos’ at that point probably meant portable cassette and CD players rather than iPods and other mobile devices, whose modern versions certainly qualify as full-blown computers, with communication capabilities that were still seen as somewhat futuristic around the turn of the century. So perhaps it’s not surprising that airlines continue to extend bans and restrictions to more or less anything that could be described as electronic. Better safe than splattered, I suppose, however unlikely it is that any dire consequences might ensue. I certainly know people who have found that their phone had switched itself back on during a flight without any impact (so to speak) on their safe travel and arrival. Yes, one of them would be me… No statistics seem to be available on how many successful pocket calls have been made from 30,000 feet, however.

In 2002 I wrote in a paper for EICAR:

Furbys were recently banned in ‘spy centres’ because they’re believed to be a possible source of information leakage. Apparently security chiefs believed that they learned phrases spoken around them and that they might therefore repeat secret information, making them a security risk. My daughter and I have spent many happy hours trying to persuade her furby to say “My hovercraft is full of eels”, preferably in a Ukrainian accent, but have so far failed miserably. Neither the accompanying instruction manual nor http://www.furby.com seem to be aware of this splendid ability, but perhaps it’s undocumented, like the opcode which is supposed to enable a malicious hacker to burn out a Pentium motherboard.

This particular ban seems to have been based on the widely held belief that Furbys learn to speak English rather than their ‘native’ Furbish (yes, I know…) in much the same way that humans are assumed to learn: by repeating what is said to them. Which may or may not be what Tiger Electronics initially wanted its young customers to believe: in any case, the product description for the Furby Boom still tells them to ‘Talk to your Furby and interact with it to teach it English and shape its personality’.

However, when the story broke, its executives went out of their way to point out that Furbys had no recording mechanism. As for the learning process, it appears that the ‘learning mechanism’ and repetition of speech were based on reinforcement of pre-programmed phrases, not learning through mimicry. Apparently, petting the toy when it spoke encouraged it to repeat the phrase more often, but the only thing it was learning was the listening preferences of its owner. It is apparently designed to introduce more pre-programmed English phrases over time in order to reinforce the false impression that it is actually learning English. In any case, it appears that the NSA rescinded its ban. I’m not sure if it carried out any investigation into the reading ability and gullibility levels of its own executives, or into whether NSA-employed Furby owners were offered alternative stress alleviation strategies.
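
Out of curiosity, the reported mechanism is easy to caricature in code. Here's my own toy model – nothing to do with Tiger Electronics' actual firmware, and the phrases are invented – showing how reinforcement plus scheduled unlocking can masquerade as learning:

```python
# Toy model of the reported mechanism: no recording, no mimicry - just
# pre-programmed phrases whose selection weights grow when the toy is
# petted, plus new phrases unlocked on a schedule. (Illustrative only.)
import random

phrases = {"dah a-loh u-tye": 1.0, "me hungry": 1.0}  # invented 'Furbish'
locked_english = ["big fun!", "me love you"]          # scripted, not learned

def speak() -> str:
    """Pick a phrase, favouring those the owner has reinforced."""
    choices = list(phrases)
    weights = [phrases[p] for p in choices]
    return random.choices(choices, weights=weights, k=1)[0]

def pet(last_phrase: str) -> None:
    """Petting reinforces whatever was just said: the toy 'learns'
    only its owner's listening preferences."""
    phrases[last_phrase] *= 1.5

def age_one_day() -> None:
    """New English phrases appear on schedule, giving the false
    impression of learned language."""
    if locked_english:
        phrases[locked_english.pop(0)] = 1.0

phrase = speak()
pet(phrase)      # the reinforced phrase becomes more likely
age_one_day()    # an English phrase is unlocked, mimicry-free
print(speak())
```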

So what about the ‘hacking’ aspect? Mostly, this is concerned with hacking in its old-fashioned, non-pejorative/non-malicious sense, in particular with manipulating the toy’s audio and sensory inputs for circuit bending, specifically (in this case) to generate audio effects. However, an article from December 2013 by Michael Coppola – Reverse Engineering a Furby – demonstrates a wider interest, specifically in the inter-device protocol used by recent models, and pointed to earlier research on the events it understands.

Coppola invokes the dreaded #badBIOS because the Furby uses an audio protocol that encodes data into bursts of high-pitched frequencies to communicate with an iOS mobile app (or with other Furbys). That brings to mind Dragos Ruiu’s claims – not universally accepted – of malware that (among other things) communicates between infected devices using ultrahigh speaker frequencies. Even so, I’m not seeing a malware-friendly supertool here, though the articles concerned are actually fascinating, in a nerdish sort of way. However, that didn’t stop Coppola’s research being cited as having discovered ‘vulnerabilities in the way the toy communicates with other Furby toys and its mobile app’ in an article sensationally entitled Valasek: Today’s Furby Bug is Tomorrow’s SCADA Vulnerability.
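
The general idea of such an audio channel is simple enough to sketch, though what follows is just an illustrative frequency-shift-keying toy with made-up carrier frequencies, not the actual protocol Coppola documents:

```python
# Illustrative audio encoding: one high-pitched tone burst per bit.
# Frequencies and timing are invented, not the real Furby protocol.
import numpy as np

SAMPLE_RATE = 44100
BIT_MS = 20                        # length of each bit's burst
FREQ_0, FREQ_1 = 17000.0, 18500.0  # near-ultrasonic carriers (assumed)

def encode(data: bytes) -> np.ndarray:
    """Map each bit to a short sine burst at one of two frequencies."""
    samples_per_bit = SAMPLE_RATE * BIT_MS // 1000
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    bursts = [
        np.sin(2 * np.pi * (FREQ_1 if (byte >> (7 - i)) & 1 else FREQ_0) * t)
        for byte in data
        for i in range(8)
    ]
    return np.concatenate(bursts)

signal = encode(b"hi")
print(f"{len(signal)} samples = {len(signal) / SAMPLE_RATE:.2f}s of audio")
```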

I wasn’t at the Security of Things event where Valasek talked about Coppola’s work, of course, but what he actually said turns out to be a little less sensational.

‘…low-impact research cannot be dismissed either. Not every IOT vulnerability is going to be high impact. You have to judge how technology that might be vulnerable today will be used in the future.’

Nor was I at the events in 2014 where Coppola apparently talked about a ‘delicious 0-day’, but I presume that it was interesting yet, as Valasek puts it, low impact. A lot of effort involving various highly corrosive acids and an electron microscope doesn’t seem to have uncovered all of Furby’s furry little secrets. Moving from what may be known to the next big thing in SCADA hype may be premature, even if it does result in another Establishment panic attack at some point.

My daughter moved on from Furby and Tamagotchi quite a few years ago, but if I found one of my grandchildren with one, I don’t think I’d be ripping it out of his or her hands and looking for the nearest junkyard with a car crusher just yet. And while I’m not about to underplay the risks to national infrastructure [originally a link to a presentation to Infosec on behalf of ESET which has vanished from ESET’s servers], it’s all too easy for speculation to spill over into fantasy. [A link to a blog for ESET which is still there: re-reading it definitely gave me pleasure, as cheap sarcasm often does.]

David Harley

[1] The mythological basis of the Dante films is quite interesting in itself: the cuddly Mogwai share a name with demons that have a great deal in common behaviourally with the vengeful spirits of Chinese tradition. Even their methods of reproduction and mutation bear some resemblance. The name Gremlin seems to have originated in RAF slang of the 1920s (or possibly earlier), used to describe creatures deemed responsible for ‘inexplicable’ mechanical failures, the term passing into wider currency through a book by Roald Dahl.

Posted by: David Harley | February 14, 2024

Appreciated tweet from Virus Bulletin

[You’ll probably see advertisements inserted by WordPress into this article. I don’t choose them or approve them – in fact, I don’t normally see them – but they’re the price I pay for not being able to afford (at present) to pay for all my blogs.]

“IT security researcher and VB regular @DavidHarleyBlog has written and published ‘Facebook: Sins & Insensitivities’, a book providing an overview of Facebook-related security issues.”

(The book includes a little content originally published in Virus Bulletin, reproduced and updated with kind permission from VB, which holds the copyright for that article.)


Posted by: David Harley | February 14, 2024

New book: ‘Facebook: Sins & Insensitivities’

[Disclaimer: you’ll probably see ads under and possibly incorporated into articles on this blog. I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…]

I’m amused to see that Amazon has excised the word ‘Facebook’ from the ordering details of the latest book. I’m not sure whether that’s because of corporate mistrust of competitors, nervousness because it isn’t complimentary about Meta, or just that I’ve breached some unwritten rule of titling. But at least the title survives on the book cover.

Available as Kindle eBook and as paperback.

“Sadly, while it would be entertaining (for me, but maybe less for you) to write a more academic book tracing the historical aspects and trends in Facebookland, that will have to wait. Here, my primary aim is to provide an overview of Facebook-related issues that will be of more use to the everyday Facebook user than to academics and security mavens. However, the links to articles in the Appendix, covering issues such as the Cambridge Analytica shambles, may be useful to researchers wanting to go deeper into those issues that I haven’t covered in an in-depth article here. (Or even that I have covered, but not in depth!)”


Posted by: David Harley | February 10, 2024

Facebook fake videos

[Disclaimer: you’ll probably see ads under and possibly incorporated into articles on this blog. I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…]

I have spent a not-very-happy time this morning, besieged by Facebook group posts passed off as porn videos and trying to get rid of them. In fact, it’s unlikely that they’re either porn or videos: they’re bot postings of malicious links that are probably intended to steal credentials. It’s not just fake porn that infests Facebook groups, by the way: there are all those fake ‘sad news’ links about celebrities alleged to be dead, ill or maimed, for instance, or scams based on fake ‘special offers’, or ‘bait and switch’ posts about lost/found dogs.

Obviously, this stealing of credentials exposes the legitimate account owner to losing control of their account, but that is usually just a stepping stone to other malicious activities that may range from scam distribution to ‘denial of service’ attacks, from ‘Londoning’ to distribution of political propaganda, from clickjacking to spurious advertising.

Facebook users: bots post all sorts of material to public groups. If it isn’t relevant to the community, it’s probably dangerous. Unfortunately, that doesn’t mean that material that is relevant is safe, but that’s a discussion for another time. I don’t, of course, advise you to follow links like those mentioned above – and sadly, there will be other scam links that I haven’t seen or remembered… But do use the option to report posts to group admins: do it often enough and they may even be inspired to tighten up their group settings.

Facebook group admins: I can understand when people don’t want to make a group private, because that’s likely to hamper growth. However, you don’t have to let anyone (or anybot) post anything. Some of the facilities formerly only available in private groups have recently become available to public groups, too. In particular, turning on participant approval may add to your administrative workload, but it does make a big difference. (That’s what I do on groups I set up, though I don’t feel able to enforce it on groups where I’m a co-admin but which I don’t feel are really ‘mine’.)

Don’t rely on Facebook to sort this out for you. Apart from the fact that the platform doesn’t always act in good faith, there are ways that scammers can evade Meta’s checking – for instance, by showing Meta’s detection systems an innocuous page while normal FB users see something quite different. (Other malware uses similar techniques to avoid probing by security companies and law enforcement agencies.) If Facebook tells you that a clearly offensive or malicious post doesn’t offend community standards, the likelihood is that its detection has been subverted by this or a similar deception.
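
To make the deception concrete, here's a sketch of the server-side logic. The IP prefixes are placeholders of my own invention (they're documentation-only ranges), though 'facebookexternalhit' really is the name Facebook's link-preview fetcher announces itself by:

```python
# Illustrative 'cloaking' logic: suspected reviewers get a harmless page,
# everyone else gets the scam. IP prefixes here are invented placeholders.
REVIEWER_IP_PREFIXES = ("192.0.2.", "198.51.100.")
REVIEWER_AGENT_HINTS = ("facebookexternalhit",)  # FB's link-preview fetcher

def serve_page(client_ip: str, user_agent: str) -> str:
    looks_like_reviewer = client_ip.startswith(REVIEWER_IP_PREFIXES) or any(
        hint in user_agent.lower() for hint in REVIEWER_AGENT_HINTS
    )
    if looks_like_reviewer:
        return "<html>Innocuous recipe page</html>"     # what Meta's checks see
    return "<html>Fake 'video' credential trap</html>"  # what group members see

print(serve_page("192.0.2.7", "facebookexternalhit/1.1"))  # innocuous page
print(serve_page("203.0.113.9", "Mozilla/5.0"))            # the scam page
```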

[Addendum]

The day after originally posting this, I was encouraged to find that:

  1. If I report a fake pornographic video to Admin as being sexual exploitation (as indeed it is, since it exploits fake porn to capture credentials), it actually gets reported to Meta for review. It isn’t clear whether Meta’s review systems actually look at a post when it’s been deleted and the user (normally a bot/fake profile) removed. So now I’m ‘reporting to admin’ even on groups where I am an admin, before removing the offensive post.
  2. Facebook actually advised me that it was removing the post of a video that I’d previously reported from other posts. It seems that Meta’s Machine Learning is, in fact, sometimes capable of learning. Unfortunately, so are malicious algorithms, so this won’t necessarily last indefinitely, but after a weekend dominated by unattractive renditions of the human body – AI seems to have a curious idea of how perspective and human anatomy correspond – I’m happy for this tiny victory. And no, I’m not puritanical by nature, but this stuff is not only ugly but dangerous.

[2nd addendum]

Well, this is interesting, in a security-geek-obsessive sort of way. A fake video from one of the usual sources (a bot using an autogenerated link address) that doesn’t use a blatantly pornographic image. (Subtly suggestive, maybe…) I’m not sure if this is an AI algorithm not quite getting what the scammers wanted, or the green shoots of a new approach to scam hook images. I’m now collecting URLs, though I’m not sure what I’ll do with them, if anything. I know that there are security companies working with Facebook, so I guess they’re already aware of the autogenerated URL patterns. Email filters seem to be.

Meanwhile, it seems that the admins of a group I’m a member of but don’t administer are so fed up with spending their time removing this garbage that they’re both quitting. Can’t say I blame them. (I have suggested that they turn on participant approval, and I guess we’ll see if they do.)
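
For what it’s worth, the sort of pattern matching I mean is nothing exotic. Here’s a hypothetical sketch – the heuristic and the example hostname are invented, not drawn from a real campaign – of flagging autogenerated-looking hostnames:

```python
# Hypothetical heuristic: long leftmost hostname labels that mix letters
# and digits are a crude tell for autogenerated scam domains.
import re

AUTOGEN_LABEL = re.compile(r"^(?=.*[a-z])(?=.*\d)[a-z0-9]{12,}$")

def looks_autogenerated(url: str) -> bool:
    """True if the leftmost hostname label looks machine-generated."""
    m = re.match(r"https?://([^/:]+)", url)
    if not m:
        return False
    label = m.group(1).lower().split(".")[0]
    return bool(AUTOGEN_LABEL.match(label))

print(looks_autogenerated("https://xk7q2vw9hr3z.example.com/video"))  # True
print(looks_autogenerated("https://chainmailcheck.wordpress.com/"))   # False
```
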
Posted by: David Harley | February 1, 2024

Facebook and Teen-Targeting Ads

[Disclaimer: you’ll probably see ads under and possibly incorporated into articles on this blog. I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…]

[An extract from the forthcoming book ‘Facebook: Sins & Insensitivities’]

The Tech Transparency Project claims to be ‘Holding Big Tech Accountable’ and tracks issues with Facebook, X, Google, Apple, Amazon et al.

On 30th January 2024 it published a report – Meta Approves Harmful Teen Ads with Images from its Own AI Tool – about test ads using harmful images generated by Facebook’s own AI image generator that clearly targeted the 13-17 age group but were approved almost immediately. I’ve mentioned this report elsewhere in the book, with reference to claims by Frances Haugen.

According to the report:

Meta approved ads promoting drug parties, alcoholic drinks, and eating disorders that used images generated by the company’s own AI tool and allowed them to be targeted at children as young as 13 … showing how the platform is failing to identify policy-violating ads produced by its own technology.

TTP noted that it cancelled the ads before they were due to be published, so they didn’t actually appear on Facebook.

https://www.techtransparencyproject.org/articles/meta-approves-harmful-teen-ads-with-images-from-its-own-ai-tool

Facecrooks points out that this came at a particularly embarrassing time for Meta when Zuckerberg, among other social media oligarchs, was defending social media implementation of claimed policies before Congress, with particular reference to mental health and young people.

https://facecrooks.com/Internet-Safety-Privacy/facebook-approves-pro-anorexia-and-drug-ads-made-with-its-own-ai-tool.html

Posted by: David Harley | January 4, 2024

Still feeling a bit like a security researcher…

[Disclaimer: you’ll probably see ads under and possibly incorporated into articles on this blog. I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…]

Thank you, Virus Bulletin, for linking on Twitter/X to my review of Frances Haugen’s book on exposing Facebook.

Rather nice to be described as if I were still a security researcher (well, I suppose I am a bit) and VB regular. (Sadly, I doubt if I’ll ever do another VB paper!)

[Image showing VB’s tweet]

The (Face-)Book of Mammon [book review]

David Harley

Posted by: David Harley | December 27, 2023

The (Face-)Book of Mammon [book review]

[Disclaimer: you’ll probably see ads under and possibly incorporated into articles on this blog. I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…]

I have, at best, an uneasy relationship with Facebook. To paraphrase something that I’m writing at the moment (more about that shortly):

I first subscribed to Facebook because I was working in IT security research and needed to find out more about it, so I signed up to see how it worked from a user’s point of view. However, friends and colleagues in the security industry – who may well have signed up for similar reasons – quickly found me there and invited me to befriend them, and why wouldn’t I? Then relatives and friends from outside the security industry also sent me invitations, and it would have been churlish to ignore them. Having been partially assimilated I found myself looking for people I knew, especially those I’d lost touch with and with whom I hoped to resume contact. Several years on, I followed various groups and pages aligned with my own interests and activities. So yes, I’m currently willing to accept the trade-off between the social advantages and Facebook’s unwelcome intrusions.

That doesn’t mean, of course, that I’ve resisted the urge to write about Facebook, its shortcomings, and those who take advantage of them: in fact, FB and other social media platforms have supplied me with much blogging material (and hypertension) over the years, to the point where I’ve recently felt obliged to upcycle some of that material into a book project. (If that sounds interesting, you can probably assume that if it’s ever completed, it will be announced on this blog at some point.) I’d already mentioned the whistleblower Frances Haugen in the first draft when I learned that she’d written about her experiences in a book originally called The Power of One: How I Found the Strength to Tell the Truth and Why I Blew the Whistle on Facebook (Little, Brown and Company: published in the UK in 2023 by Hodder and Stoughton as The Power of One – Blowing the Whistle on Facebook). So, naturally, I had to read it.

The first thing to say is that this book has no direct connection that I can see with the 1989 novel The Power of One by Bryce Courtenay, or the slightly later film adaptation. Frances Haugen is best known (and to many of us only known) for having disclosed the contents of 22,000 pages of internal Facebook documents to the Wall Street Journal:

https://www.wsj.com/articles/the-facebook-files-11631713039

Subsequently, she revealed her own identity in September 2021, ahead of an interview on 60 Minutes.

https://www.nbcnews.com/tech/social-media/facebook-whistleblower-reveals-identity-accuses-platform-betrayal-democracy-n1280668

Additionally, she has testified before or otherwise engaged with a number of bodies in the US, Europe and the UK. These included a sub-committee of the US Senate Commerce Committee, the Securities and Exchange Commission, the UK Parliament, and the European Parliament. I’m not always the biggest fan of Wikipedia as a source of accurate information, but there seem to be quite a few useful supporting links here:

https://en.wikipedia.org/wiki/Frances_Haugen

The next thing to say is that this is absolutely not a technical guide to defending your privacy and security from Facebook/Meta, its sponsors, or its abusers, though if you happen to believe that Facebook is an example of all being for the best in the best of all possible Metaverses, the doubts that reading this book might raise may well lead to your wanting to find ways to improve your safety on Facebook and in social media in general. Without commenting on the accuracy of individual claims, I think that’s a Good Thing. But if you aren’t already gifted with a reasonable amount of healthy scepticism, I suppose you probably won’t be reading the book, let alone my less-than-famous blog. As for accuracy: much of what Haugen says and what others have said about her makes a lot of sense to me, as a long-time Facebook watcher and commentator, but I haven’t ploughed through the Facebook Files myself and am not likely to. If I did, I wouldn’t have the resources to verify everything.

The third point to make is that while Haugen makes good points about the need for increased responsibility, transparency, and accountability in social media, this is not an exhaustive guide to ‘fixing’ Meta, let alone other platforms. Judging from her frequent interaction with governmental bodies, she is content to provide information from which they can draw conclusions to drive their future policies and legislation, not push a policy agenda of her own. As she herself writes:

‘Any plan to move forward that’s premised on me personally proposing the solution is a plan that’s doomed to fall short. The “problem” with social media is not a specific feature or a set of design choices. The larger problem is that Facebook is allowed to operate in the dark.’

Elsewhere, she writes about the European Union’s Digital Services Act that:

‘I like to think of laws like the DSA as nutrition labels. In the United States the government does not tell you what you can put in your mouth at dinnertime, but it does require that food producers provide you with accurate information about what you’re eating.’

In fact, the book is by no means focused entirely on the exposure of Facebook. While it begins with Haugen’s presence at President Biden’s first State of the Union address, earning an individual citation as ‘the Facebook whistleblower’, a very large proportion of the subsequent chapters trace the steps that led her to Facebook and beyond from ‘When I Was Young in Iowa’, through Junior High, the Franklin W. Olin College of Engineering and MIT, Google, Harvard Business School, Pinterest, and so on. We hear about her issues with coeliac disease, divorce, victimization by sexist fellow-students, and other negative issues. We don’t, perhaps, need to know about these issues in order to assess the importance of her assertions and allegations, but they’re clearly important to her, and to our understanding of what drives her. (And perhaps even in response to pushback from Facebook?)

What are those assertions and allegations? Well, in general terms, she evidently sees herself as having been ‘a voice from inside Facebook who could authoritatively connect the company’s pernicious algorithms and lies to its corporate culture … [without which] Facebook’s gaslighting and lies might still prevail.’

We’ve been told in recent years that she filed a large number of complaints against Facebook with the Securities and Exchange Commission (at least eight) ‘alleging that the company is hiding research about its shortcomings from investors and the public’, but I was unable to find a direct reference to those complaints in the book.

https://edition.cnn.com/2021/10/03/tech/facebook-whistleblower-60-minutes/index.html

In her statement to the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, however, she claimed that Facebook’s products “harm children, stoke division and weaken our democracy” and prioritize profit rather than moral responsibility.

https://edition.cnn.com/business/live-news/facebook-senate-hearing-10-05-21/index.html

In the book she touches on a great many issues of concern, including:

  • The rise and fall of the Civic Integrity team ‘spun up’ in the wake of the 2016 US election, with its subsequent defanging and dispersal.
  • The Macedonian misinformation model (1. Build a ‘news’ site; 2. Add political articles; 3. Post links back from a Facebook page; 4. ‘Watch the [Google] AdSense dollars roll in.’)
  • Reluctance to reactivate ‘Break The Glass’ measures after the 2020 election, such as requiring a group with a score of hate speech strikes above a certain limit to apply moderation. Haugen clearly links the January 6th actions and ‘Stop The Steal’ to the absence of such ‘friction-adding’ measures.
  • Recognition of and inadequate handling of ‘Adversarial Harmful Movements’.
  • Refusal to share even basic data relating to inconvenient research.
  • Cambridge Analytica data capture as facilitated by Facebook. Cambridge Analytica doesn’t get a lot of wordage in the book, but Haugen does remind us that Facebook was fined $5 billion in 2019 for misleading the public on how much data could be accessed by developer APIs.
  • The effective caps on the number of fact-checking articles commissioned from Facebook’s partners and, crucially, paid for. (Later addressed by the BBC here: https://www.bbc.co.uk/news/technology-47779782).
  • The trade-off between ‘short-term concrete costs’ and the long-term hypothetical risks of an expensive fiasco like the Cambridge Analytica disaster.

These are issues that deserve and need wider exposure and discussion, and that’s why Haugen’s book is important, even though it’s not always well-written: after all, we don’t all have access to the detailed information given to governmental bodies.

Here’s a specific issue with the quality of the writing that caused me to grind my teeth quite a lot: an inconsistency in the way jargon is addressed. Early in the book, Haugen makes the occasional attempt to clarify coding/algorithmic concepts, even such basics as importing a library. (Though I have a certain amount of empathy with the story of how she was told she needed more instruction on modern software engineering: I went through a similar episode many years ago, when I was told by my manager that my (actually functional, but not necessarily elegant) C code was impenetrable…)

Unfortunately, however, she happily includes many examples of unexplained MBAspeak. Having spent some of the last few years of my working life providing consultancy services to North American companies, I’m not unfamiliar with some of the staples of business communications, and am fully prepared to reach out and circle the wagons in pursuit of an appropriate blue-skying box to think outside. (If Dilbert hadn’t already been invented, he would have had to exist.) Still, I’m (not very) grateful to have been introduced to some new ones (that is, I had to resort to a search engine to find out what they meant in the context in which they were used).

  • ‘Hockey sticking’ describes a fairly flat line on a graph that suddenly shows a dramatic upward turn like a hockey stick handle.
  • ‘Single-player mode’ is when someone on social media posts more than they read.
  • ‘Red ocean’ is a Harvard Business School concept – it describes similarly-qualified ‘sharks’ competing in a blood-filled ocean.

But my favourite is when she complains of having been put in ‘an awkward onboarding position.’ I have a feeling I’ll be borrowing that one.

To be fair, the photo below illustrates a semi-meaningless cliché that I didn’t see in Haugen’s book, but I’m sure you know the one, and might enjoy this take on it.

Perhaps it’s unfair to make such a big deal out of this authoring blemish, but it does make me wonder for whom exactly she’s writing. Not, perhaps, a wide audience, so much as other corporate techies, executives, politicians and other policy makers, influencers and, most of all, potential whistleblowers – at any rate, people who might be concerned enough about the age of corporate-driven AI and the amoral algorithm to do their best to apply the brakes. And if she reaches such people, she deserves applause as much for that as for what she has told us about one specific and particularly problematic social media platform.

https://edition.cnn.com/2021/10/06/tech/facebook-frances-haugen-testimony/index.html

Posted by: David Harley | December 23, 2023

Group Therapy – security and privacy in Facebook groups

[Disclaimer: you’ll probably see ads under and possibly incorporated into articles on this blog. I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…]

[This article, like a number of others on this blog, has been updated and expanded for a forthcoming book. There will be more news on that in an upcoming article.]

Having found myself roped into assisting as co-administrator of a couple of Facebook groups with security/privacy issues, I thought I should, perhaps, share what little I know about defending your group against scam and spam posts and comments by tightening up group settings.

Caveat: I’ve never really wanted to spend a lot of time administering Facebook groups – in fact I’ve only created one myself that is still active, and I’ll tell you why later – and I haven’t made a lifetime study of the subject. Not even Facebook’s lifetime, let alone my own, which at present is many times longer than Facebook’s. It’s possible, therefore, that I’m not always accurate in my assumptions, and also that an assumption that was accurate when I wrote this was rendered false by changes made by Facebook the day after. But I’ll be as painstakingly accurate as I can. As usual.

Facebook tends to assume that your main ambition and purpose in life is to grow your group at all costs, and preferably devote several hours a day to that task. In fact, there are two main types of groups: private and public.

https://www.facebook.com/help/220336891328465/

Private Groups

A private group is one where only members of the group can see posted content and who the admins are. Furthermore, a private group can be hidden (secret) so that (hopefully) no one can see the group unless they’re already members, or are invited to join. This gives the administrator(s) something close to absolute control over who posts and what is posted, and is particularly appropriate for groups where sensitive information is exchanged. The more tightly controlled the group is, the harder it is for fake profiles to join.

That said, it’s a good idea to remember that Facebook sees everything (or can if it wants to), and is not always scrupulous when it comes to maintaining your privacy: even if/when that’s the company’s intention, it can make mistakes, and its policies and algorithms are generally opaque.

https://www.facebook.com/help/220336891328465/

The trade-off with a private group is that if you’re intending to grow your group, it’s harder for someone who might be interested and an appropriate potential member to happen across it and apply to join.

If you’re attracted by the privacy advantages of a private group and are considering making your public group private, bear in mind that once you’ve gone that route, you can’t revert it to a public group, because that constitutes a breach of the group members’ privacy.

https://www.facebook.com/help/286027304749263?helpref=faq_content

Formerly, this restriction only applied to groups with over 5,000 members, but now applies wholesale.

I don’t administer any private groups, so I shan’t risk any hostages to fortune by considering their privacy settings in detail. It’s worth noting, though, that while even Facebook’s own help pages sometimes contradict each other, it does seem as though there are other restrictions on large (5,000+) groups, such as how often and how many privacy changes can be made.

If this page – https://www.facebook.com/help/214260548594688/ – is still accurate, the settings you can change include enforcing membership approval by an admin or moderator for each subscription request. You can also require the requester to answer one or more questions and base your decision on whether or how the question(s) is or are answered.

Public Groups

Fortunately, since I was first press-ganged into helping administer a group, some of the privacy settings formerly unique (as far as I know) to private groups are now available to public groups. While the enforced changes caused some confusion and consternation at first, they seem to me to be an improvement, on the whole. (Gosh, am I saying something positive about Facebook???) Since public groups are, by definition, easier to find, join and share than closed or secret groups, even the most open-by-intent group needs to think about its privacy settings if it’s to avoid some of the unpleasant spam/scam material that may be posted to a group if settings allow. Such material includes, but is certainly not limited to, the following, more often than not posted from fake or cloned profiles:

  • Sympathy scams like the posts described here: https://chainmailcheck.wordpress.com/2023/05/13/abusing-communities/
  • Pornographic images, often masquerading as videos, that may lure group members to unhealthy links. These may be intended to trick you into giving away sensitive information, but they may also be intended to install malware on your device.
  • Fake news about dead or disabled celebrities, again leading to dangerous links.
  • Posts about alleged offers by retailers such as supermarkets giving away coupons or even cash.
  • Recommendations for product links that are at best irrelevant, possibly malicious.

And much more, but I’m not making a special effort to track all these: the above examples are just items that have crossed my radar recently.

When I actually created a group – at any rate, one that is still active – it was in order to replace a page that was becoming increasingly frustrating to administer due to changes introduced by Meta that were overcomplicated, bug-ridden, and based on the assumption that I was running it as a commercial enterprise and constantly needed reminding to take actions that would increase my visibility and non-existent profits (usually by paying Meta for a service I didn’t want). Fortunately, I discovered that I could maintain some visibility (in fact, a public group is required to be visible, not secret) and still get most of the control I wanted. Sorry, but if you want more information on maintaining the security and privacy of Facebook pages, you’ll have to look elsewhere. (Life’s too short: well, mine is probably going to be, and there are other things I want to write about.)

Here’s a selection of the most relevant settings.

  • Participant Approval – if this is off, anyone on Facebook can post or comment, and group members can join chats. (One of the issues I’ve seen kill a group recently was fake profiles posting porn/scam links to chats linked to the group.) If it’s on, however, members and visitors must be approved to post or comment, and only (approved) members can participate in chats.
  • You can also allow both profiles and pages to contribute, or else just profiles. Since some scams are driven by pages masquerading as profiles (only an admin can post to a page, so it’s difficult to flag a scam actually posted on the page), there’s something to be said for not allowing pages. But profiles can, of course, be fake.
  • You can ask up to three questions and invite anyone requesting approval as a member or visitor to answer them: if they don’t answer or answer inappropriately, you can decline to approve them, if Participant Approval is on.
  • You can choose whether or not to allow anonymous posts and edits. My guess is that this will be more desirable in some groups than others: sometimes it’s fair to be reluctant to be identified, but sometimes that privilege can be abused.
  • You can require an administrator or moderator to approve all posts. Clearly, this could be a lot of work in a popular group, but allows control of obviously malicious posts.
  • You can set it so that potential spam posts and comments are held for your approval as an admin.
  • You can set it so that edits to posts must be approved: this helps to address cases where an approved post is edited maliciously by changing a link from something innocuous to something harmful (see the sketch after this list).
  • You can set it so that only admins and moderators can create chats, or so that approved members can also create them.
  • You can allow or disallow the posting of events, polls and GIFs, and the tagging of events.
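
Here’s the kind of check that post-edit approval effectively gives you – a hypothetical helper of my own, not anything Facebook exposes – flagging links that appear only in the edited version of a post:

```python
# Hypothetical illustration of why edit approval matters: flag any link
# that appears in the edited post but not in the approved original.
import re

URL_RE = re.compile(r"https?://\S+")

def new_links(original: str, edited: str) -> set[str]:
    """Links present in the edit that weren't in the approved original."""
    return set(URL_RE.findall(edited)) - set(URL_RE.findall(original))

approved = "Found this lovely dog near the park! https://example.org/found-dog"
edited = "Found this lovely dog near the park! https://evil.example/steal-login"
print(new_links(approved, edited))  # {'https://evil.example/steal-login'}
```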

NB: the more relaxed your settings, the more you’ll need to set your notifications so that you get to see everything incoming and remove as necessary. Irritating if you happen to have a life outside Facebook, but there it is.

Note also that in many cases you can report posts to Facebook for review: however, if their algorithms are not up to scratch (impossible, do I hear you say?) you may find that the thing pops up again and you get a message telling you that the post or comment didn’t contravene their community standards. Sigh…

David Harley
Reluctant FB Group Administrator
