As part of my work I find myself thinking about computational design, randomness, and color. A lot.

With Components AI, we want to let you explore generative space in one way or another, so we need to be able to compute *all* possible values. In practice, though, that can be difficult.

For a better user experience it makes sense to make randomly generated values feel less drastic and instead more gradual. We want randomness to be on a slider. We also want to apply common design knowledge to constrain outputs when desired.

So, in this post, we’ll constrain ourselves to random colors and explore color generation.

Here I seek to answer some questions that Adam Morse asked me:

**Can we make color transitions less drastic?**

**How can randomness be configurable?**

Which resulted in a few other questions:

**How do random numbers and color relate?**

**Can we create a color generation algorithm that has some sort of memory?**

**How do we verify there *is* color memory?**

**Can we empower users to configure entropy during generation?**

Most of design, in one way or another, is grounded in numbers and math. Even a random color can be computed by calculating a large number and converting it to hex. In fact, for some generators in Components AI, this is exactly what we do.

Each character in a CSS hex string, like `#123abc`, represents a digit from 0 to 15 — sixteen possible values.
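To make that concrete, here's a minimal sketch in plain JavaScript showing how hex characters map to numbers via base-16 parsing:

```
// Each hex character is a base-16 digit: '0'–'9' map to 0–9, 'a'–'f' to 10–15.
const hexDigitToNumber = (ch) => parseInt(ch, 16)

hexDigitToNumber('0') // 0
hexDigitToNumber('a') // 10
hexDigitToNumber('f') // 15

// A full six-digit hex color is just one big base-16 number:
parseInt('123abc', 16) // 1194684
```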

Since a CSS hex color can have up to six characters, we can compute the entire hex color space which has nearly 17 million colors:

*16 x 16 x 16 x 16 x 16 x 16 = 16,777,216*

We can use this number to create a function that will randomly generate a number between *0* and *16,777,215* (at least as randomly as `Math.random` can be) and then convert it to hex:

```
function randomHexColor() {
  // Multiply by 16,777,216 so flooring covers the full 0–16,777,215 range,
  // then left-pad with zeros to keep six hex digits.
  return (
    '#' +
    ('000000' + Math.floor(Math.random() * 16777216).toString(16)).slice(-6)
  )
}
```

This algorithm for computing color ensures that every possible hex value can be output, but one quickly notices that the colors are all over the place and many aren’t aesthetically pleasing.

To see what we’re working with, we can render a series of generated colors and place them side by side.

Click the diagram to regenerate the colors.

Above I mentioned that the random hex color algorithm is only as random as `Math.random`. This is because *truly* random number generation is hard. JavaScript uses a Pseudorandom Number Generator (PRNG), which uses a seed to distribute results across a number space (0–1 for `Math.random`).

Hardware random number generators (HRNGs) are closer to true random because they tap into a physical process like thermal noise or even nuclear decay measurements. The physical world is where we find true randomness for seeding purposes; all other algorithms are approximations.

Randomness has implications for software systems because it’s used for cryptography and requires high entropy in order to be secure.

For randomly creating design values like color, a PRNG is plenty adequate.
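To make the seed idea concrete, here's a minimal seeded PRNG sketch in plain JavaScript. This is mulberry32, a tiny PRNG often used for demos — not what JavaScript engines use internally (V8, for instance, uses xorshift128+), and definitely not suitable for cryptography:

```
// mulberry32: a tiny seeded PRNG. The same seed always yields the
// same sequence, which is what makes pseudorandomness reproducible.
function mulberry32(seed) {
  return function () {
    let t = (seed += 0x6d2b79f5)
    t = Math.imul(t ^ (t >>> 15), t | 1)
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61)
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

const rand = mulberry32(42)
rand() // always the same first value for seed 42, somewhere in [0, 1)
```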

When it comes to computation, entropy refers to the general randomness of what’s being computed. In some ways it’s the “noise” or “chaos” factor. The hex color generator from earlier has high entropy.

We can see this in action when we compute 1,000 random colors and place them together.

It’s safe to say that we see reasonable color variety. It’s also important to note that
**color groupings are to be expected**. We, as humans, tend to perceive *uniform distributions* as more random than truly random ones.

Let’s take a look at two distributions, one is random and one is uniform:

Many of us, myself included, will feel that the uniform distribution on the right is more random. We see the clustering in the left distribution and are inclined to think that it’s not a proper representation of randomness.

However, the clustered distribution on the left is the random one.

You can click the random distribution and see new, randomly generated distributions. Nearly all will have some sort of clustering.

The uniform distribution on the right is simply a modulo operation that creates a point for every multiple of three that is odd:

```
for (let i = 0; i < steps; i++) {
  const isBlack = i % 3 === 0 && i % 2 !== 0
  // ...
}
```

We can lower entropy by adjusting our random hex color generator to only generate numbers
between *0* and *1,600*. This will result in substantially fewer total colors.

The colors are predominantly a saturated blue and black. The distribution of the colors will look more similar to the pure random distribution we saw before.

By increasing the number in our hex color generator, we adjust the entropy by increasing or decreasing the number of possible colors that can be output.

```
function randomHexColor() {
  return (
    '#' +
    ('000000' + Math.floor(Math.random() * 1600).toString(16)).slice(-6)
  )
}
```

The higher the range, the more colors we see. The more colors we see, the more entropy in our color generator.

This brings up interesting implications for our algorithms in Components AI. If we recompute a truly random color every time, it can be a bit too jarring. We’d jump from brown to purple to green. It might make you feel lost.

When we look at random colors in a series, there doesn’t seem to be any relationship between the colors (because there isn’t). When there is some sort of relationship, that’s happening by pure chance.

Well, random values have no correlation between them whatsoever, but most natural patterns have some memory of the previous state.

As you cycle through colors generated one at a time, it will feel all over the place because each new color is truly random. This allows for a bit of serendipity when generating new colors, but there is no memory of the previous color state at all.

Click the color swatch below to generate a new one.

I began to wonder about ways to leverage the previous color in order to generate a new, somewhat related color, while still being able to explore all possible values in the hex color space.

We needed to “evolve” color generation gradually without getting stuck in the corner of a particular color.

Naturally, I gravitated to mixing and blending the previous color with a new color as the first approach.

The chroma JavaScript library has a `mix` function which accepts two colors and a mix ratio. I combined the previous color with a random color, and also generated a random mix ratio.

```
color = chroma.mix(color, randomHexColor(), Math.random())
```

The resulting colors aren’t bad for a quick first attempt. There does seem to be a gradual shift or evolution. But, the colors seem to be a lot of muddy greens, purples, browns, and grays.

I wondered if it had something to do with the mix ratio, so I decided to try a few fixed ratios out.

I put the ratio on a slider with *.01* steps to see how the ratio affected
the series of colors.

Unsurprisingly, the colors were no longer muddy when the mix ratio neared 1 since the mixing only slightly affected the newly generated color. Colors were rich, but, they lost their sense of gradual change.
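Mixing at a fixed ratio is essentially interpolating between the two colors. Chroma supports several interpolation color spaces, but a plain RGB linear interpolation sketch shows why a ratio near 1 barely changes the new color — note that `lerpHex` is my own illustrative helper, not a chroma API:

```
// Linearly interpolate two hex colors channel by channel in RGB space.
// t = 0 returns the first color; t = 1 returns the second.
function lerpHex(a, b, t) {
  const ah = parseInt(a.slice(1), 16)
  const bh = parseInt(b.slice(1), 16)
  const channel = (shift) => {
    const ca = (ah >> shift) & 0xff
    const cb = (bh >> shift) & 0xff
    return Math.round(ca + (cb - ca) * t)
  }
  const rgb = (channel(16) << 16) | (channel(8) << 8) | channel(0)
  return '#' + ('000000' + rgb.toString(16)).slice(-6)
}

lerpHex('#000000', '#ffffff', 0.5) // '#808080'
```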

Chroma also has a `blend` function that supports a few different algorithms for performing the blend. So, we sampled a random algorithm and combined the previous color with a new, random color.

```
chroma.blend(
  color,
  randomHexColor(),
  sample(['overlay', 'multiply', 'darken', 'lighten'])
)
```
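Here `sample` picks a random element from an array — lodash ships one, and the hand-rolled version is a one-liner:

```
// Pick a random element from an array (à la lodash's _.sample).
const sample = (arr) => arr[Math.floor(Math.random() * arr.length)]
```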

The resulting colors appear to be richer and less muddy. There does seem to be a gradual “evolution” of new colors based on the previous. However, we can also see that blending colors often likes to result in very dark browns and near black colors.

They sort of take over. Maybe a particular blend algorithm causes the dark colors?

In order to try out the different blend algorithms individually, I put them in a select input that dictated which algorithm would be used.

Interestingly, all blend algorithms eventually resulted in black, white, or some arbitrary color. The random sampling between algorithms ensured that there was more entropy and color generation didn’t get “stuck in a corner”.

Chroma has additional color manipulation functions that I decided to experiment with, particularly manipulating hue, saturation, and lightness that make up the HSL color format.

It has been seen that color differences are caused by 2 factors: by hue and by light, and in most cases by both at the same time.

— Interaction of Color by Josef Albers

So, I created a handful of hue adjustments to randomly sample from, and then used the built in functions for saturation and lightness adjustment. The color manipulation function was selected at random.

```
const adjustments = [
  '*1.5',
  '*1.25',
  '*1.1',
  '*.75',
  '*.1',
  '/1.5',
  '/1.25',
  '/1.1',
  '/.75',
  '/.1'
]
const fns = [
  () => chroma(color).set('hsl.h', sample(adjustments)),
  () => chroma(color).saturate(),
  () => chroma(color).desaturate(),
  () => chroma(color).darken(),
  () => chroma(color).brighten()
]
const fn = sample(fns)
color = fn()
```

The results were not very good, unfortunately. There’s some gradual shifting through color, but the series gets stuck for long stretches in dark browns, whites, and grays.

To see how the HSL algorithms behaved individually, I decided to make their probability of selection configurable.

Building this interactive component made me realize that hue manipulation seemed to bring in the grays and whites, which surprised me.

Each of the above algorithms affected the gradual change in color generation in different ways. What happens if we randomly sample each of the algorithms to create a single color generator with memory?

The results aren’t too bad.

Though, what’s the average color distance in comparison to pure random?

This seems pretty compelling because the average color distance is substantially less than pure random. When comparing the color schemes, too, pure random has much more entropy.
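One simple way to quantify “color distance” is Euclidean distance in RGB space. Chroma ships perceptual metrics like `chroma.deltaE`; the sketch below is a rough, dependency-free stand-in using my own hypothetical helpers `rgbDistance` and `avgDistance`:

```
// Euclidean distance between two hex colors in RGB space —
// a crude stand-in for perceptual metrics like CIE deltaE.
function rgbDistance(a, b) {
  const ah = parseInt(a.slice(1), 16)
  const bh = parseInt(b.slice(1), 16)
  const d = (shift) => ((ah >> shift) & 0xff) - ((bh >> shift) & 0xff)
  return Math.hypot(d(16), d(8), d(0))
}

// Average the distance between each consecutive pair in a color series.
const avgDistance = (colors) =>
  colors.slice(1).reduce((sum, c, i) => sum + rgbDistance(colors[i], c), 0) /
  (colors.length - 1)

rgbDistance('#000000', '#ffffff') // ≈ 441.67, the maximum possible
```

A memory-based series should score a lower average than a purely random one.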

Though, 50 colors is a relatively small data set. What happens with 1,500 colors being generated?

Now that we’ve addressed most of the questions we set out to answer, we still need to make the entropy configurable.

Sometimes users might want pure random, sometimes something more gradual, and sometimes something in between.

Above, entropy is placed on a slider. It’s reflected as a percentage that’s used to determine whether the pure random color generator function is called or if our memory-based generator is called.

At its most basic, the code looks something like:

```
// React state: the current color and an entropy percentage (0–100).
const [color, setColor] = useState(randomHexColor())
const [entropy, setEntropy] = useState(50)

// Returns true with probability (1 - percentile).
const randomBool = (percentile) => Math.random() >= percentile

const regenColor = () => {
  // High entropy favors pure random; low entropy favors the memory-based generator.
  const newColor = randomBool(entropy / 100)
    ? regenColorWithMemory(color)
    : randomHexColor()
  setColor(newColor)
}
```

With that, the user can configure the probability of the generator function. Pretty neat!
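At the extremes, the branching is deterministic: `randomBool(0)` is always true (always the memory-based generator) and `randomBool(1)` is always false (always pure random). A quick simulation sketch, using a stand-alone copy of `randomBool` and a hypothetical `branchCounts` helper:

```
const randomBool = (percentile) => Math.random() >= percentile

// Count how often each branch fires for a given entropy percentage.
function branchCounts(entropy, trials = 10000) {
  let memory = 0
  for (let i = 0; i < trials; i++) {
    if (randomBool(entropy / 100)) memory++
  }
  return { memory, pureRandom: trials - memory }
}

branchCounts(0).memory // always 10000: entropy 0 never picks pure random
branchCounts(100).memory // always 0: entropy 100 is always pure random
```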

Over time this memory-based color generator will surely evolve, but I think it’s ready to start pushing to production for some generators on Components AI.

For next steps, this concept can be used as one of many generators for more complex components as well. Consider a button, with its colors generated at high entropy.

I also want to explore multiple color generation and handling relationship between colors in a palette.

Thanks for reading <3.

Thanks to Adam Morse for asking the tough questions that inspired me to experiment with a few solutions.

Thanks to Chris Biscardi, Brent Jackson, Cole Bemis, Taylor Bell, Florian Kissling, Prince Wilson, Rose Wiegley, Nick Bender, and my wife, Elsa, for reading drafts of this post and providing feedback.