Some notes on transparency in social systems

This is the PC sitting on my desk right now:

...as you can see, it is literally transparent - you can look right into it! The components and physical connections between them, laid bare - they even put stupid lights on many of the components now, anticipating this level of transparency. Must be pretty easy to see how it works, right?

Well, no, of course not. Most of the interesting stuff is still obscured, happening at an atomic level inside what are literal black boxes. The layer at which it is transparent is irrelevant to most of the problems I might have to solve. When discussing the potential benefits of, costs of, and need for transparency in a system, it is critical to first establish the layer being discussed - otherwise, you may very well end up with a gaudy display that serves no real purpose. This is as true in social systems as it is in physical ones such as my PC.

My former colleague Meg recently wrote a blog post about transparency in online communities. She presents a compelling argument for one of the benefits of transparency:

In an online community, you need to find ways to foster and incentivize ways to encourage different types of users to interact positively and contribute to your mission. I believe that transparency, especially by making shared goals and opportunities to empathize apparent in the product itself, can be an important way to achieve this.

There are other potential benefits as well, such as:

  • improved identification of problems within the system
  • establishing trust in the correct operation / "fairness" of the system
  • enabling continuous improvement by allowing individuals to see the effects of their work in the system (kaizen)

However, there are costs to transparency as well:

  • unneeded detail can become noise, potentially reducing sensitivity to important information (information overload)
  • a transparent system may be more vulnerable to malicious manipulation of its operation (ex: intimidation or vote-buying in an election without secret ballots)
  • the overhead of transmitting sufficient information itself (how many people read the thick, expensive packets sent by their 401K plan?)

As with most things then, transparency is a tradeoff between cost and benefit. And as I noted earlier, it is entirely possible to eliminate most of the benefits by choosing to make the wrong layer(s) of the system transparent!

Lesson: make sure you understand the benefits that you hope to achieve when you set out to create a transparent system, then pick the layer(s) accordingly.

System layers and the audience for transparency

Back in the early-to-mid 1980s, fuel-injected cars with electronic control units (ECUs) started hitting the market in earnest, replacing the carbureted engines common in previous decades. Shade-tree mechanics everywhere quickly complained about the lack of transparency in these systems: while most carbureted engines were fairly easy to understand and diagnose, the first ECUs were not; they required rare and expensive proprietary diagnostic machines to analyze and tune.

But the potential advantages of the new systems were huge: they were more efficient, required less adjustment, and - in spite of being far more complex - had fewer parts that were likely to fail in normal operation! Granted, some of these advantages were more theory than reality in those early days, but eventually they would be realized... What was lacking was the correct layer of transparency.

  • To the ordinary owner / operator, the system just needed to work - that they couldn't see the signals flowing from sensor to ECU to fuel/ignition components wasn't a problem for them. In fact, they now had fewer distracting minutiae to worry about than with previous vehicles! The reduction in transparency made their lives easier!
  • To the dealer-mechanic with factory-designed diagnostic computers, the system was already transparent.
  • But to the 3rd-party / amateur mechanic, those early machines were infuriatingly opaque - a rough-running engine might be a bad sensor, a fouled injector, poor spark, even a corrupt ECU itself - but determining which was a tedious process of trial and error.

Some early vehicles attempted to address the problem by adding "flash codes" to the ECU: a specific problem would be indicated by flashing a light a predetermined number of times - say, once means the primary oxygen sensor is defective, twice means the same but for the secondary oxygen sensor. This was a good effort... But it missed the correct audience: as these early ECUs didn't store the codes, the person who would see them was the owner / operator - the person most likely to not know or care. If the problem couldn't then be reproduced in the shop (or under the shade tree), the correct audience remained in the dark.
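To make that encoding concrete, here's a minimal sketch of how such a flash-code scheme might work. The code numbers, fault names, and blink mechanism are hypothetical, for illustration only - and note what's missing: nothing is stored, so only whoever happens to be watching at that moment sees anything.

```python
# Hypothetical flash-code table - not from any real ECU.
FLASH_CODES = {
    1: "primary oxygen sensor defective",
    2: "secondary oxygen sensor defective",
    3: "coolant temperature sensor out of range",
}

def report_fault(code: int) -> None:
    """Blink the dash light `code` times. The signal is ephemeral:
    it's visible only at the moment the fault occurs."""
    for _ in range(code):
        print("*blink*")
    print(f"(a mechanic watching would read: {FLASH_CODES.get(code, 'unknown fault')})")

# The fault fires on the highway; the owner sees two blinks and shrugs.
# By the time the car reaches the shop, there's nothing left to read.
report_fault(2)
```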

It took a few years, but eventually automakers got it right: ECUs were equipped with standardized interfaces for storing fault codes and other diagnostic data, and tooling was created to read these back. Now the system was once again transparent for the correct audience. But take note: this was still a different "layer" of transparency from what had existed on all previous vehicles: instead of directly exposing the internal operation of the ECU, the new tooling presented an abstract model which allowed diagnostics to be done in a way that the previous generation could only have dreamed of! Even as the systems themselves continued to grow more complex, this abstraction allowed new or amateur mechanics to mostly ignore that complexity and focus on the model instead.
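Here's a minimal sketch of the stored-code pattern, in the same spirit as the one above. The class, its interface, and the OBD-style code value are hypothetical stand-ins, not any real ECU's API - the point is the narrow, deliberately transparent read-back layer over otherwise opaque internals.

```python
# Hypothetical ECU with a stored-fault-code layer.
class ECU:
    def __init__(self) -> None:
        self._stored_codes: list[str] = []   # internals stay opaque

    def record_fault(self, code: str) -> None:
        """Called internally when a sensor reading goes out of range."""
        self._stored_codes.append(code)

    # The one deliberately transparent layer: a standardized read-back
    # interface, aimed at the mechanic rather than the operator.
    def read_stored_codes(self) -> list[str]:
        return list(self._stored_codes)

ecu = ECU()
ecu.record_fault("P0136")  # a hypothetical OBD-style trouble code

# Days later, in the shop: the scan tool gets the whole story without
# needing to witness (or reproduce) the original failure.
print(ecu.read_stored_codes())  # ['P0136']
```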

Lesson: to identify the layer(s) in your system that must be made transparent, first identify the audience(s) who require transparency. If necessary, create new layers in your system to accommodate these audiences without inconveniencing others.

Layers in social systems

It's looking more and more like effective transparency is something that will require actual work and research and planning, not just good intentions and fluffy words. You'll need to clearly identify what you're looking to accomplish, and who you hope to benefit. But you're not afraid of getting your hands dirty, so this is no problem! You're not out of the woods yet, though - the next pitfall to watch for is specific to social systems: see, they tend to involve people...

Remember those black boxes in my computer? The inscrutable ECU? Those exist in every system, and in social systems they tend to be made of meat. You can learn an awful lot about how humans respond to various inputs, how they tend to behave in various situations, how to interpret their seemingly inscrutable output... But generally speaking, you do not get to open them up and look inside, not even when they're acting really super weird. The people in your system will defy attempts at transparency, and you just need to accept that.

So, what to do? Well, we can learn from the ECU and build an abstraction! We just gotta make sure to nail down the goals and audience properly. Most social systems in need of transparency aim to do precisely this (a sketch of the pattern follows these examples):

  • Jury trials create an abstraction whereby the inputs, procedures and outputs are known, and thus a verdict deviating too severely from the expected parameters can be identified without knowing the actual thought processes of the jurors (the deliberation itself being a black box composed of human black boxes, which reduces the opportunity for tampering). Even the process by which the jurors are selected can be exposed, further reducing the chances of unpredictable behavior.
  • Your town council is composed of representatives elected by the larger population of citizens, reducing the volume and thus variability of behaviors. Inputs, procedures, and outputs are carefully recorded for each meeting, allowing constituents to verify that the black boxes they elected are behaving within the expected parameters. Processes are even provided for removing black boxes determined to be defective!
  • Publicly-traded companies are subject to a dizzying array of operating and reporting requirements, to help ensure that investors can understand the decisions being made and, if need be, replace the people, er, "black boxes" who are running their investments into the ground.
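Here's a minimal sketch of that shared pattern, using the town-council example. Every name and the "expected parameters" check are hypothetical, for illustration only: the decision-makers stay black boxes, but each decision's inputs, procedure, and output land in an audit layer that outsiders can inspect.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    inputs: str       # what the body was asked to decide
    procedure: str    # the process that was followed
    output: str       # the decision reached

@dataclass
class CouncilAuditLog:
    records: list[DecisionRecord] = field(default_factory=list)

    def record(self, inputs: str, procedure: str, output: str) -> None:
        self.records.append(DecisionRecord(inputs, procedure, output))

    def flag_deviations(self, allowed_procedures: set[str]) -> list[DecisionRecord]:
        """Surface decisions that didn't follow an expected procedure -
        without ever opening up the human black boxes involved."""
        return [r for r in self.records if r.procedure not in allowed_procedures]

log = CouncilAuditLog()
log.record("rezone parcel 12", "public hearing + recorded vote", "approved")
log.record("award paving contract", "closed session", "approved")

# A constituent inspects the audit layer, not the councillors' heads:
print(log.flag_deviations({"public hearing + recorded vote"}))
```

The particular data structure doesn't matter; what matters is that the audit layer is a separate, deliberately simple abstraction, designed for an audience outside the black boxes it describes.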

You can design your own system under these principles as well. But be careful: because you, your intended audience, and parts of the system itself are all people... Often overlapping groups of people... You can rather easily mistake one for another. A system can very easily be transparent to the people in it, while being entirely opaque to those outside. And a system which is transparent to you, the designer, is not necessarily transparent to anyone else. So if you happen to be designing a system where you feel you're in the target audience and a key participant... It becomes very easy to believe you've achieved a useful layer of transparency when you've done nothing of the sort.

Lesson: separate yourself from both the people in your system, and the people using your system. Then separate your intended audience from the roles within the system itself. Then, and only then, proceed with your design.
