Do devs have an ethical responsibility to tell clients their idea is bad?
There’s a minefield of ethical conundrums in the software development industry today, and a lot of devs have been talking about it, which can only be a good thing. I do think, however, that the industry has been framing the conversation incorrectly. We tend to focus on issues like user privacy, data security (or the lack thereof), and automated feeds. Those are, to be fair, very important aspects of the ethical dilemmas facing software development, and they definitely need to be talked about.
But what if we took a step back and tried re-framing these ethical dilemmas? Rather than focus on the tools and features that create these ethical potholes, what if we looked at why we’re using these tools in the first place?
Let’s quickly take a look, for example, at the ever-looming (and ever-growing) ethical dilemma of automated feeds in social media. As we’re all aware by now, it was Facebook’s automated feed, and the algorithms that run it, that were responsible for creating echo chambers that led to the spread of a whole bunch of misinformation during the 2016 U.S. presidential campaign.
Troll factories would craft fake stories paired with sensational headlines guaranteed to capture attention. Facebook’s algorithms noticed the engagement and gave those stories higher priority in users’ feeds, creating a vicious cycle. Bots masquerading as real people would comment on the articles to give the impression that these fake stories were not only true, but worth talking about.
It’s human instinct to place more credence in an idea when more people seem interested in it. A virtually 100% automated process was able to profoundly affect the way Americans (and users in other democratic countries) consumed media, and ultimately influenced their votes. This misinformation campaign affected people across the political spectrum.
And, to be fair, this wasn’t limited to Facebook – every social media platform on the internet was affected – especially those with a degree of anonymity (I’m looking at you, reddit).
Regardless of political views, the idea of bots fabricating fake stories and then amplifying them in order to influence real humans is a terrifying prospect. And we’re living it, right now.
But this isn’t what this blog is about. I want to look at why the idea of an automated feed came to fruition in the first place. To look at why the digital environment almost necessitated some of the tools software devs use that create these ethical dilemmas. Not the how, but the why.
I think, ultimately, issues like this arise when the idea behind an app or platform isn’t sound. Facebook isn’t at all what it was when it started. I remember being young enough to not be allowed to create a Facebook account because I wasn’t in college. Remember that? Can you imagine Facebook saying “no” to a user signing up for their services today?
It’s all based upon another human quirk – greed. Facebook’s shareholders got greedy, and figured out a way to keep users scrolling more: use algorithms to display interesting content. Users got greedy, and demanded more content to keep them scrolling. This wasn’t just a failure of Facebook’s ethics – we fueled it. We adopted it as normal.
I’m sure some devs over at Facebook found themselves asking the question: “Is this okay? Should we actually do this?”
But I’m also sure that the devs with ethical concerns were either ousted or overruled, and Facebook and its shareholders reaped the rewards that come with automated content.
As a content creator, I relish the idea of automated content. It just makes sense. It’s more efficient. You can reach a much wider audience. As a human, I hate it. Where’s the authenticity? The realness? The human aspect of communication?
What I’m really trying to get at here is that ethically grey tools and processes seem to appear for one of three reasons:
- A platform or idea has run its course, and the user base has become uninterested or disillusioned, or has stagnated or even shrunk in size and engagement
- A platform or idea is, from its inception, shaky at best
- Lazy or uninformed coding leads to security breaches or substandard UX that must be propped up by data mining or another ethically grey tool
And this is where we, as an industry, must ask ourselves…
Do devs have an ethical responsibility to tell appreneurs their idea is bad?
For the rest of this blog, I’m going to focus on reason number two: the underlying idea of an app won’t work. Reason number one is unavoidable in the digital age (trends do come and go, after all), and reason number three is a pretty easy fix. But when devs know that the idea for an app is bad, or that it will only be sustainable by implementing tools or processes that lack a certain ethical standard, that is something we can fix.
Now, there are a couple of ways to define what constitutes a “bad” idea for an app:
- The idea’s scope is too large
- The idea lacks a monetization model that doesn’t rely on data collection or another ethically grey tool
This is where that human component comes into play again – greed. It’s difficult for a business to say “no” to a client’s money. Financially, it’s detrimental. In a service-based economy, it’s almost farcical. But when it comes down to it, we might have to.
Doctors, lawyers, notaries: all of these professions work under some sort of philosophical and morally binding oath. Doctors have a sworn duty to care for their patients, lawyers must defend their clients’ best interests, and notaries must verify that the party in question truly signed the document.
One of the most well-known tech giants in the world had a motto in a similar vein to the Hippocratic Oath for many years: Google’s famous “don’t be evil” mantra. While Google has recently removed the phrase from its code of conduct, I think it’s time we, the software and hardware engineers, designers, project managers, and CTOs that make up the tech sector, took it upon ourselves to hold new tech to that standard. “Don’t be evil” is, after all, not a high bar in my opinion.
“Don’t be evil” is also a little dramatic for some of the reasoning behind bad apps. So, for at least a little while, let’s take a step back from the realm of debating good and evil. Apps are very rarely made with malicious intent, and just because an idea is bad doesn’t mean the people behind it are. But just as it’s a doctor’s duty to tell their patient that cigarettes are bad for their health, so too should developers inform their clients about bad app practices.
When the scope of a client’s idea is too large
Scope creep is real, and it can spell the doom of many an app. Every dev shop has interacted with a client who has big dreams for their app, with plans to do it all. Businesses and entrepreneurs are still figuring out how to navigate ventures with apps, and tend to stick to the old-school idea of providing the “one-stop shop” for their customers’ needs. The people who use apps aren’t customers, however – they’re users. There’s a distinction for a reason. More often than not, when someone opens an app on their phone, they’re not going to spend money. Unless it’s an e-commerce app or a game that utilizes in-app purchases, once a user has paid for the download or the subscription, using the app in the moment is free.
Business owners are still in the mindset of “the more time customers spend in my store, the more money they spend,” but apps don’t work that way. Sure, the higher an app’s retention, the higher its rank in the app store, but if an app doesn’t use in-app purchases, the extra revenue retention brings in ultimately comes from higher user acquisition. Retention itself isn’t the source of monetization, just as the revenue from a marketing campaign doesn’t flow from the ads themselves but from the sales that come along with higher brand recognition.
While scope creep can be disastrous for the business or appreneur themselves, this can actually be quite advantageous for dev shops – the larger the scope of the app, the more billable hours.
Just because an app is capable of doing more, however, doesn’t mean it will be successful. The average smartphone user uses 9 apps a day and 30 apps a month, and devs know that users will always gravitate towards the app that solves a particular pain point better than another. Long gone is the day when business models like Sears could provide the one-stop-shop experience; before the internet, customers had to take time from their day to actually get to the services and goods provided. Now, app users can just open up another service with their phone. If they aren’t satisfied, they can delete a service and find another one – all with the same thumb.
Apps that try to do everything never do anything well.
It’s really no skin off a developer’s back to agree to build a client an app that suffers from scope creep. While it might not be too common a practice, it’s entirely possible for a dev shop to build in features that don’t help solve the intended users’ pain point. And while that can lead to a larger payout for the developer, it can drain necessary funding from the appreneur’s marketing budget, or delay launch while the client hunts for more investment to continue development.
The idea lacks a sustainable form of monetization
This is the one that can lead to some heavy ethical dilemmas. There are a lot of helpful, well-made apps that use monetization tactics that don’t have the user’s benefit in mind.
Take, for example, a study on user privacy by Gillian Cleary on Symantec’s blog: an Android app named “Brightest Flashlight LED – Super Bright Torch,” with over 10 million installs, accesses these permissions on users’ phones:
- Precise user location
- Access to users’ contacts
- Send SMS messages
- Permission to directly call phone numbers
- Permission to reroute outgoing calls
- Access to camera
- Record audio via microphone
- Read/write contents of USB storage
- Read phone status and identity
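To put that list in concrete terms, here’s a minimal Kotlin sketch of what requesting such a broad permission set looks like on Android. The constants are my own mapping of the labels above onto the standard Android permission names, not code taken from the app in question, and the function name is made up for illustration.

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Roughly the permission set described above, mapped onto standard Android
// permission constants (each would also have to be declared in AndroidManifest.xml).
private val FLASHLIGHT_APP_PERMISSIONS = arrayOf(
    Manifest.permission.ACCESS_FINE_LOCATION,   // precise user location
    Manifest.permission.READ_CONTACTS,          // access to users' contacts
    Manifest.permission.SEND_SMS,               // send SMS messages
    Manifest.permission.CALL_PHONE,             // directly call phone numbers
    Manifest.permission.PROCESS_OUTGOING_CALLS, // reroute outgoing calls
    Manifest.permission.CAMERA,                 // access to camera
    Manifest.permission.RECORD_AUDIO,           // record audio via microphone
    Manifest.permission.WRITE_EXTERNAL_STORAGE, // read/write USB storage
    Manifest.permission.READ_PHONE_STATE        // read phone status and identity
)

private const val PERMISSION_REQUEST_CODE = 42

// Prompts the user for every permission in the list that hasn't been granted yet.
// A flashlight needs none of these to turn an LED on and off.
fun requestAllTheThings(activity: Activity) {
    val notYetGranted = FLASHLIGHT_APP_PERMISSIONS.filter { permission ->
        ContextCompat.checkSelfPermission(activity, permission) !=
            PackageManager.PERMISSION_GRANTED
    }
    if (notYetGranted.isNotEmpty()) {
        ActivityCompat.requestPermissions(
            activity,
            notYetGranted.toTypedArray(),
            PERMISSION_REQUEST_CODE
        )
    }
}
```

The point of the sketch isn’t the API; it’s the asymmetry. Each of those lines takes seconds to write, and each one widens the gap between what the app does for the user and what it can do with the user’s data.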
Now, as Cleary pointed out, some of these actually make sense. Accessing call data allows the user to assign different LED flashes to different contacts, creating unique signals based on who’s calling. I fail to see, however, why a flashlight would need to know a user’s precise location – unless the app is collecting data on its users, and the company behind it is selling that data to marketers.
Big data is a big industry now, and a lot of apps are cashing in. Foursquare (admittedly, in a pretty natural direction of growth) has become a “location marketing company,” and as this article from the New York Times shows, our phones, and the apps on them, collect a lot of data on our habits, our whereabouts, and almost every detail of our daily lives.
Data is important – but if an app isn’t able to sustain itself without selling off users’ data, maybe we just shouldn’t make it. There are enough apps out there, after all.
I’ve always found it interesting how willing users have been to let private companies collect and sell their personal data (the number of devices that listen to every conversation, like Amazon’s Alexa, sold each year is frankly astounding) while simultaneously balking at the data collection projects run by government agencies like the NSA – but that is quickly changing.
Going back to our first example, Facebook – the social media powerhouse saw 26% of its app users uninstall the app from their phone following the Cambridge Analytica scandal. As all devs know, users are fickle, and are more likely to abandon an app than actually use it. If the only revenue stream an app has is personal data collection, and the user base finds out, it could spell disaster for the app.
If you’re looking for ideas on app monetization, check out our blog on the topic.