Feb. 6, 2018, 5 min read

Razer Chroma Shenanigans

I recently got a Razer Chroma keyboard, which has an API for its lighting effects.

There are multiple ways to use it. One is the C++ SDK, with which you can write e.g. a direct integration for a game that reflects its state on the hardware while playing. The keyboard software itself, named Razer Synapse, also runs a local HTTP server that exposes a RESTful API to talk to. This is language agnostic and therefore great for prototyping ideas.

The Razer website has a video of Richard Garriott showing an integration for one of his games where the health bar of the player is reflected on the keyboard in realtime. I thought that would be a nice thing to have and gave it a try using World of Warcraft and my new keyboard.

Now, World of Warcraft does have internal scripting interfaces for Lua scripts, but the developers have taken care to make it pretty much impossible to output any realtime or near-realtime information from within the game using scripts.

All output is buffered and cannot be flushed on demand, so while logs of e.g. combat situations can be aggregated afterwards, we cannot react to them while they are happening, at least not through any official interface. However, we can of course run a script that takes screenshots of the display and analyze those images ourselves to derive meaningful data.

For example, the widget with the health and mana bars always has the same size and position (assuming a fixed desktop resolution). That is good: it means we always know where to grab it from! This is how it looks:

[Image: wow_bars]

There are two problems here, namely opacity and gradients. The bars are not a solid color, but are overlaid with multiple gradients to give them a rounded look. This leaves us with multiple shades of green and blue and makes it a bit harder to determine whether a pixel actually has one of those colors or not.

Even more problematic is the fact that the unfilled area of a bar has a transparent background, letting the surroundings shine through. If those surroundings are also very bright, green or blue, that is a problem.

Luckily it is not too bad and we should be able to handle it. Using Python we can quickly write a script that takes a screenshot of the area of interest (just use any image editor to figure out the x and y coordinates for this).

We are interested in the full width of the bars (excluding maybe the borders, where the gradients affect the colors the most). However, we only extract a single line of pixels from the middle of each bar, because the height of the bars does not carry any information for us.

[Image: wow_1pixel_bars]

from PIL import ImageGrab

# Bounding boxes measured on a 1650x1080 screen.
healthbar_bbox = [114, 61, 252, 62]
manabar_bbox = [114, 74, 252, 75]

img = ImageGrab.grab(healthbar_bbox)
img.save("healthbar.png")

img = ImageGrab.grab(manabar_bbox)
img.save("manabar.png")

While figuring out a proper image transformation that lets us detect the percentage of each bar, we ideally prepare a few example captures for "no health/mana", "some health/mana" and "full health/mana" to test against.
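
Those reference captures can then be checked automatically against the extraction function we develop below. A hypothetical little harness, with made-up file names:

from PIL import Image

# Hypothetical reference captures taken at known fill states.
expected = {
    "healthbar_empty.png": 0,
    "healthbar_half.png": 50,
    "healthbar_full.png": 100,
}
for filename, percentage in expected.items():
    result = get_percentage_from_image("health", Image.open(filename))
    assert abs(result - percentage) < 5, (filename, result)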

I played around with some threshold operations to get a proper black and white image. The result is not perfect, but good enough for my prototype.

I ended up using a simple point operation on the green channel for health and the blue channel for mana to cast everything to black and white, so we can easily build a percentage from it by just counting white pixels.

def get_percentage_from_image(what, img):
    # Health is encoded in the green channel, mana in the blue one.
    channel = "B" if what == "mana" else "G"
    img = img.getchannel(channel)
    # Threshold to pure black (0) and white (255).
    img = img.point(lambda px: 0 if px < 127 else 255)

    # getcolors() returns (count, value) pairs; since only black and
    # white are left, summing count * value measures the filled part.
    # The grabbed strip is one pixel high, so width * 255 is the maximum.
    colors = img.getcolors()
    max_color_sum = img.width * 255
    actual_color_sum = sum(count * value for count, value in colors)
    return 100.0 * actual_color_sum / max_color_sum
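
Combined with the screenshot code from before, reading a current value then boils down to:

health = get_percentage_from_image("health", ImageGrab.grab(healthbar_bbox))
mana = get_percentage_from_image("mana", ImageGrab.grab(manabar_bbox))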

Depending on the value of the bars and the background this may glitch a bit, but in general it produced relatively robust results between 0 and 100%.
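
If the glitches bother you, a cheap countermeasure (my own addition, not part of the prototype) is clamping each reading to the valid range and averaging over the last few:

from collections import deque

recent_values = deque(maxlen=4)

def smoothed(percentage):
    # Clamp to 0..100, then average the last few readings to suppress
    # single-frame glitches caused by bright backgrounds.
    recent_values.append(max(0.0, min(100.0, percentage)))
    return sum(recent_values) / len(recent_values)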

Using the Razer Chroma REST API I could then simply submit HTTP requests against the local server that Razer Synapse runs on my machine. The keyboard keys are mapped to a matrix of 6 x 22 color values (some bigger keys cover multiple of these), so I simply had to find out how many keys should be green for e.g. 33% health and submit that.
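
One detail the snippet below glosses over: before submitting effects, an application registers itself once with the Chroma server, and the response contains the session URI (the uri variable used below). Going by the public Chroma REST documentation it looks roughly like this, with placeholder metadata:

import requests

init_data = {
    "title": "WoW Bars",
    "description": "Mirror the WoW health and mana bars on the keyboard",
    "author": {"name": "me", "contact": "example.org"},
    "device_supported": ["keyboard"],
    "category": "application",
}
# The response contains the session URI all effect requests go to.
uri = requests.post(
    "http://localhost:54235/razer/chromasdk", json=init_data).json()["uri"]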

This is more or less the gist of it:

import requests

# make_row(n, color) builds one row of 22 key color values with the
# first n keys lit in the given color; see the sketch below.
data = {
    "effect": "CHROMA_CUSTOM",
    "param": [
        make_row(keys_max, dimmed),
        make_row(num_health_keys, health_color),
        make_row(num_health_keys, health_color),
        make_row(num_mana_keys, mana_color),
        make_row(num_mana_keys, mana_color),
        make_row(keys_max, dimmed),
    ]
}
requests.put(uri + "/keyboard", json=data).json()
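
make_row is a tiny helper from the full script; a minimal sketch of what it could look like, assuming 22 columns per row and integer color values as the Chroma API expects:

def make_row(num_keys, color, background=0):
    # One keyboard row has 22 color slots; light the first num_keys in
    # the given color and leave the rest as background.
    return [color if index < num_keys else background for index in range(22)]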

Done!

I would then run this twice a second to get more or less realtime feedback. This is what it looks like in action (it is basically my character falling from a large height to take damage and then spending mana to heal himself again):

https://i.imgur.com/kZJDY2q.gif
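
Glued together, the whole prototype is little more than a polling loop. A rough sketch, assuming keys_max = 22 and a hypothetical update_keyboard() helper wrapping the PUT request shown above (the real version is in the gist below):

import time

while True:
    health = get_percentage_from_image("health", ImageGrab.grab(healthbar_bbox))
    mana = get_percentage_from_image("mana", ImageGrab.grab(manabar_bbox))

    # Map the percentages to a number of keys to light up.
    num_health_keys = int(round(health / 100.0 * keys_max))
    num_mana_keys = int(round(mana / 100.0 * keys_max))

    update_keyboard(num_health_keys, num_mana_keys)
    time.sleep(0.5)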

Full script is here for anyone curious: https://gist.github.com/cb109/1d9996b0f7b4371a70a78dafc3ed4ce9

Please note that this is just a crappy prototype and nothing more. Still, it has been a fun micro project over the course of two evenings. What have I learned from it?

Even if there is no scriptable interface, we can still use image capturing and transformation to extract useful information from external applications and hook that up to drive a feedback device.