DIY LED Sensor Light Gun

winteriscoming

Now that I'm digging into the JVS control protocol, I'm trying to think about some additional applications beyond the driving setup I have and I'd love to get a decent light gun setup going at some point.

As I understand it, Sega light gun signals get transmitted to the JVS I/O as 2 analog channels for X and Y positions. In fact, I see that mounted guns are an option in the case of JVS shooters. I'd prefer a gun that's not mounted, though.

It would be great to just get a set of Sega light guns and set them up for my use, but that seems cost prohibitive. I'll keep an eye out, though. Sometimes I see full cabinets going for cheaper than the parts.

Now that I've got analog figured out from a JVS perspective, any number of solutions seem possible from a lightgun standpoint. It would just be a matter of interpreting the gun's X and Y positions and reporting the necessary bytes to the analog channels. In this type of implementation, absolutely no analog outputs need to be generated. I'd be skipping that in favor of translating the gun's position as analog byte data that the JVS protocol expects.
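To make the translation step concrete, here's a minimal sketch of the idea: no analog voltage is ever generated, the gun's position just becomes the byte each analog channel reports. The function name and the single-byte-per-axis assumption are mine for illustration, not from any real JVS library.

```python
# Hedged sketch: convert a normalized gun position (0.0-1.0) into the
# single byte per axis that a JVS analog channel report might carry.
# Names and the one-byte-per-axis format are illustrative assumptions.

def position_to_analog_byte(pos: float) -> int:
    """Clamp a normalized 0.0-1.0 position and scale it to 0x00-0xFF."""
    pos = min(max(pos, 0.0), 1.0)
    return round(pos * 255)

# Example: gun aimed at the horizontal center of the screen
x_byte = position_to_analog_byte(0.5)   # -> 128 (0x80)
y_byte = position_to_analog_byte(0.25)  # -> 64  (0x40)
```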

So the question then really becomes how to handle this. A Wii Remote provides probably the cheapest and most well documented route, but I question whether or not it would be usable at typical arcade cabinet distances. There's the AimTrak, which I bought into early on, but I quickly became disappointed with its accuracy... Maybe something similar could be developed that uses more LEDs for tracking, similar to the Sega setup?

I once came upon a forum somewhere where some members were discussing a DIY alternative to the Sega setup, but I can't seem to find it now. As I recall @twistedsymphony was involved in that discussion and might be able to share.

At any rate, many of my project ideas have come to fruition on this forum, so it's worth a discussion.

I doubt I'm ready to pursue working on this any time soon, but wanted to get a discussion started.
 
I had some discussion on retrofitting Sega IR guns for use on a PC for emulation, as well as my speculation on how you might make them work with old CRT Boards. The two threads of note are here on AP: Shooting Gallery: Sega Type-II IR on a 108" front projection setup.
and here on BYOAC: http://forum.arcadecontrols.com/index.php/topic,149019.0.html


The way Sega has the gun board just report analog data for XY position seems brilliant to me because it means the games aren't locked down to any particular technology as long as the positional data is there. So mounted guns, or IR guns, or some other tech ... the game doesn't care.


One thing to note, though, is that in addition to the analog data there is also a "screen out" "button": a digital on/off signal produced by the gun board to tell the I/O board whether the gun is in frame or out of frame. Interestingly, while playing around with various analog games recently to help with Invzim's analog mapping spreadsheet, I had hooked an analog joystick up to the I/O board, and the test screen from Death Crimson OX reported "screen out" on its own whenever I moved out of calibration range; probing the digital pins, I was unable to manually trigger it. This was the only gun game I tested, so I'm not sure if it's designed to determine screen out on its own without the "button" input, or if other games work like this too (though I seem to recall Maze of Kings registering the explicit screen out button when I was originally mapping out the gun board pins).
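For what it's worth, the behavior observed there could be implemented game-side with nothing more than a range check. This is speculation on my part, and the calibration bounds below are made-up example values:

```python
# Sketch of how a game might infer "screen out" on its own, as the
# Death Crimson OX test screen appeared to: flag the gun as off screen
# whenever the reported analog position leaves the calibrated range.
# The default calibration bounds are made-up example values.

def infer_screen_out(x, y, x_min=0x10, x_max=0xEF, y_min=0x10, y_max=0xEF):
    """Return True when the analog position falls outside calibration."""
    return not (x_min <= x <= x_max and y_min <= y <= y_max)

infer_screen_out(0x80, 0x80)  # -> False (gun is in frame)
infer_screen_out(0x00, 0x80)  # -> True  (gun is out of frame)
```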


------------
I get the impression that you're interested in developing your own gun setup rather than sourcing an overpriced "OMZ" setup. You may want to check out Gun2NAOMI which uses the IR camera from a Wii Remote.


I know I've mentioned this before in other threads, but I should note that the Wii Remote itself is TERRIBLE for gun games because it only performs biangulation (positional calculation based on 2 points) as opposed to triangulation (positional calculation based on 3 points). You have to think of gun games as tracking the 3D position, distance, and angle of the gun, NOT just calculating the 2D position of a pointer on the screen. With only 2 reference points you can only calculate 2-dimensionally, and as a result you can't determine exactly where the gun is. With the Wii they're only interested in using the pointer like a mouse, so this is fine, but for gun games, where you actually want to fire where you're looking down the sight, you need proper 3D tracking. If you spend some time fine-tuning the calibration on the Wii Remote you can get it sighted in, but this is all based on the position of the remote being your 3rd point of reference, so as soon as that moves, the rest of the calibration falls apart and your calculations are all off again.

OMZ is the company that designed and manufactured the "Sega Type II IR" Gun board and sensors.
As for how the OMZ boards work, there is some discussion/speculation on my BYOAC thread. OMZ has some technical information on their website (which is of course in Japanese), but it seems that the IR sensor is similar to an IR motion sensor used by alarm systems. This can do some very basic detection of movement from one region to another within the sensor's field of vision (almost like a 16-pixel-or-so camera). The IR emitters are chained together and seem to be designed to illuminate one at a time in sequence. Since the gun board knows the position of the currently illuminated IR emitter in relation to the screen, it can read the sensor data for each LED and calculate a really accurate 3D position for the gun.

Honestly, the IR sensor in the OMZ setup is extremely clever but WAY outdated; it was first built in 1997... think about it: Sega is still using ~20-year-old sensor technology. It technically works, but if you used a similar IR LED array with a high-speed IR camera (like what's in the Wii Remote), you could build an extremely precise gun device. The key is the software, though: getting the timing right and appropriately filtering out bad reads to make the tracking seem smooth. Processing it all fast enough is important too, because there's a lot of potential for lag here as well.

... anyway... those are my thoughts on the matter. :)
 
@twistedsymphony Thanks so much for posting that great information!

I read through it when you posted and then got distracted by my current project. I'm not ready to dig into this one, but I greatly appreciate you taking the time to post what you know.
 
@twistedsymphony, let's see if we can figure out the theory and math for this as it may ultimately improve a Gun2Naomi type of solution that some members are buying.

Assuming we go with a Wii sensor camera, I'm reading that it reports the position and size of the 4 brightest IR sources it sees.

What I'm not sure of is how you'd know which LED it was reporting, since you'd need to plug its coordinates into a formula. Let's not worry about that at the moment, because I think there would be some solutions, like addressable LEDs where you only light one at a time and get a reading that way, knowing exactly which one the camera saw, or code to compare all points and figure out which one is which.

So using the LEDs we would map out the plane. 3 points could be 2 upper corners and the lower middle, or 4 points could be each of the corners. A larger set of points could be established if doing addressable LEDs, but let's keep it simple and assume a low count like 3 for the theory of operation here. We would have a plane established and then we would also know the coordinates of the entire plane from the perspective of the sensor.

Taking perspective and angle into consideration, we would need to map out where the sight is aiming.

I think all of this will come down to some key formulas that need to be established; I expect a lot of them are common geometry formulas, and an Arduino could probably eat them for breakfast.

I'm fuzzy on my geometry... so I don't recall a lot of the common formulas.

A 3D modeling program might help here, but I'm doing some quick mock ups in a vector program. Yellow dots are the LEDs. Blue dot is the point at which the sight is aiming. Black dot is the IR camera. Ultimately the coordinates of the blue dot are what we need to figure out, given all other variables.



Example 1.png
Example 3.png
Example 2.png

Edit: Of course this method is contingent on all LEDs being within view of the camera.
 
What I'm not sure of is how you'd know which LED it was reporting
That's easy. Sega (or rather OMZ) solves this by illuminating one LED at a time, so you know which LED you're currently reading. This assumes the camera is quick enough to report 10+ LED readings per frame.

Honestly I think a solid setup would use the LED boards from the Sega Setup, a Wii Remote and then a custom board to interface the two.

The Wii Remote has Bluetooth, so if you leave it as is, you could have a wireless gun, which would be great for a home setup.

The OMZ LED boards are fairly inexpensive, and they could be easily reproduced if need be. They're also designed to be individually addressable, so you can specify which one you want illuminated when. That will give you good tracking even when you can't see all of them, since you can base your calculation on the LEDs that you CAN see.

The biggest problem, I think, would be timing the data from the Wii Remote.
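The strobe-and-identify approach described above can be sketched as a simple loop. The hardware calls here (`select_led`, `read_brightest_point`) are stand-ins for whatever real LED-board and camera interfaces end up being used, not an actual API:

```python
# Toy simulation of strobing addressable LEDs one at a time and recording
# which LED produced each camera reading. select_led() and
# read_brightest_point() stand in for real hardware I/O (assumptions).

def map_leds(num_leds, select_led, read_brightest_point):
    """Return {led_index: (x, y) or None} by strobing LEDs one at a time."""
    positions = {}
    for i in range(num_leds):
        select_led(i)                          # illuminate only LED i
        positions[i] = read_brightest_point()  # None if out of camera view
    return positions

# Fake hardware for demonstration: LED 2 is outside the camera's view.
fake_scene = {0: (100, 50), 1: (600, 55), 2: None}
current = {"led": 0}
result = map_leds(3, lambda i: current.update(led=i),
                  lambda: fake_scene[current["led"]])
# result -> {0: (100, 50), 1: (600, 55), 2: None}
```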

---------------------
As for the math with the Wii Remote, it'd be pretty simple; I'd imagine you can think of it like this:
wiigun.png


As long as the LEDs are all spaced around the monitor at equal distances and we can "see" at minimum 2 on the top row and 1 on the bottom row (or 1 on the top row and 2 on the bottom row), then we have enough information to extrapolate the position of the rest of the screen and, by definition, the analog X and Y values.
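With the equal spacing assumed above (each top LED one gap in from its edge, so the screen width is three gaps), the top-row extrapolation reduces to one subtraction and one addition. A toy sketch, using made-up camera coordinates:

```python
# Sketch of the extrapolation idea: if the two top-row LEDs sit 1/3 of
# the way in from each screen edge, the gap between them equals the gap
# from each LED to its nearer edge, so the edges follow directly.
# Coordinates are in the camera's view; the layout is an assumption.

def top_edges_from_two_leds(ax, bx):
    """Given camera x positions of the two top LEDs, return the
    (left, right) screen-edge x positions, assuming equal 1/3 spacing."""
    gap = bx - ax
    return ax - gap, bx + gap

left, right = top_edges_from_two_leds(341, 682)
# left -> 0, right -> 1023 (here the screen spans the full camera view)
```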

In the image above, the player would likely be standing too close to the screen and would probably lose tracking along the top and bottom edges (this is probably why Sega used 12 LEDs on The Lost World).
 
Since it sounds like this could be considerably better than other projects, can you estimate costs and time?
 
I would not advise waiting for this project as I can't guarantee how long before it would come together. I have some other ongoing projects I want to complete, so I'm not actively working on this one yet, just coming up with ideas.
 
Since it sounds like this could be considerably better than other projects, can you estimate costs and time?
It's an idea... you'll be waiting a few months before we determine if it's even viable.
 
For now, let's keep the concept simple with 3 points around the monitor, and we can expand out from there.

I believe this will be able to account for perspective.

In the attached drawing A, B and C represent LEDs, while D represents the calculated midpoint between A and B. E represents the calculated point at the lower left of the screen.

Sketch.jpg

So assuming the IR camera reports positional data relative to the camera's center point, let's call that x0,y0. Up and right are positive, while left and down are negative (all assumptions at this point, to be adjusted according to how the camera actually reports data). All other points are sensed and mapped according to that. So in order for the gun to register a point on the screen:
A has to be a point at x<0, y>0.
B has to be a point at x>0, y>0.
C has to be a point with x at the midpoint between A and B, y<0.

If any of those are not true, then the gun is aimed off screen.

Now to figure out the aiming point relative to the newly established plane:

Assuming the point of origin is the upper left corner, with X running 0 to FF rightward and Y running 0 to FF downward:
Reported X = Distance between line AB and aiming point scaled to 0 to FF.
Reported Y = Distance between line AE and aiming point scaled to 0 to FF.

Then those reported values need to go through a DAC to output an appropriate analog voltage.

There would be a few geometric calculations thrown in to establish points on lines, but nothing too terribly complex here as far as I can figure.
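As a sanity check, the three on-screen conditions listed above can be expressed directly in code. The tolerance on C's x position is my own assumption (the camera will never report C exactly on the midpoint):

```python
# Sketch of the three on-screen conditions: camera center is the origin
# (x0, y0), A upper-left, B upper-right, C below the midpoint of AB.
# The tolerance for C's x test is an illustrative assumption.

def gun_on_screen(a, b, c, tol=50):
    """True when the three LED points satisfy the on-screen conditions."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    mid_x = (ax + bx) / 2
    return (ax < 0 < ay and                      # A: x<0, y>0
            bx > 0 < by and                      # B: x>0, y>0
            cy < 0 and abs(cx - mid_x) <= tol)   # C: below, near midpoint

gun_on_screen((-100, 80), (120, 85), (5, -90))   # -> True
gun_on_screen((100, 80), (120, 85), (5, -90))    # -> False (A not left)
```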

Am I missing anything?
 
So, doing a little digging, I found a video of a Raspberry Pi interfacing directly with a Wii camera, with most of the data relevant for what we need:


According to the video, it's a PixArt 128x96 camera that outputs readings at 100Hz for up to 4 objects at a time (not sure if the 100Hz is a limitation of the camera or of the API being used by the Pi, but I'd assume it's the camera for now).

The data is output as a set of XY coordinates for each of the tracked points on a 1024x1024 grid, with 0,0 being top left and 1023,1023 being bottom right. Untracked points report as 1023,1023.
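Filtering that report down to the usable points is then straightforward. A sketch, assuming the (1023, 1023) sentinel described above (note a real point could in principle land on that coordinate too, which a robust version would have to handle):

```python
# Sketch of filtering the Wii camera report described above: up to four
# (x, y) points on a 1024x1024 grid, with untracked slots reported as
# (1023, 1023). Keep only the tracked points.

SENTINEL = (1023, 1023)

def tracked_points(report):
    """Drop untracked (1023, 1023) slots from a 4-slot camera report."""
    return [p for p in report if p != SENTINEL]

report = [(210, 305), (640, 298), (1023, 1023), (1023, 1023)]
tracked_points(report)  # -> [(210, 305), (640, 298)]
```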

The calculation is going to be a bit more complicated than what you have above.

The distance between A and B gives us our scale, not our position. So if we're standing closer to the screen, those dots will be further apart; if we're standing further from the screen, they'll be closer together, even if we're pointing at the same location on the screen.

What we need to do is calculate the top left corner of our screen, and then calculate where we're aiming in relation to that point. We shouldn't just place an LED on the top left corner; that'd make our calculation easier, but it would likely cause us to lose the screen whenever we pointed toward the right side.

In addition, our calculation needs to take into account the fact that the gun can be rotated slightly in relation to the screen. No one is going to be holding the gun perfectly upright, and as long as they're not tilting the gun completely sideways (>=90 deg), it should work.

Based on the 100Hz refresh, it seems to me that tracking more than 4 points would be problematic in terms of speed. So I think tracking all 4 points would be ideal for the best coverage, but I agree, let's work it out with 3 points first and add the 4th in later.

I started writing out a few potential formulas for calculating position; most of it is simple right triangle math, Pythagorean theorem and the like. I'll need to play around with this a bit more... it's doable, but the key will be finding the most efficient calculation possible.

If we do all of our calculations assuming a 1024x1024 grid to match the output of the Wii camera, that would help keep the math simple, and we'd only need to divide the final coordinates by 4 to get our final answer.

Theoretically we could avoid having to use A/D converters as well if we were to roll this into its own JVS I/O board.
 
you's 2 are making me very excited ;) lol
Same here.

I was trying to figure out a way to translate the X-Y calculations based on what the IR Test produces. I never thought to calculate point C since I only thought of this 2-dimensionally. Setting MIN/MAX for X and Y allows you to reach the ends of the play screen. I have trouble translating the 'reload' portion of the co-ordinates where off screen reload is needed.

Since this is just a discussion and nothing concrete in terms of prototypes, I'd like to add my notes (I will add more later).

SEGA JVS I/O translates XY coordinates using 2 pots with variable min/max values.

For testing purposes, I used a Wii DolphinBar to simulate the XY values from the Wiimote to PC (using C#).

It wasn't accurate but I was able to at least generate the XY values that a traditional pot would generate.

Again, still using a PC, I'm planning to use the above work to send the data over either RS232 or USB (not there yet).

My end game was to have an Arduino or other microcontroller translate XY signals from WiiMote to JVS I/O.

This basically is/was an attempt at completely replacing the Gun Sense Board for SEGA but I put this on hold because I don't have any original hardware so I can't reference improvements without knowing exactly how the original hardware works.

There was another discussion on here that mentioned that the WiiMote is not a feasible solution because of the 2-point vs 3-point problem, so no need to bother with my partial attempt. Again, this is hopefully an igniter for better things to come.

My resources:
http://idav.ucdavis.edu/~okreylos/ResDev/Wiimote/
http://johnnylee.net/projects/wii/
https://wiimotelib.codeplex.com/
 
The distance between A and B gives us our scale not our position.
Reported X = Distance between line AB and aiming point scaled to 0 to FF.
I'm not saying the distance between A and B; I'm saying the distance from LINE AB, at a right angle, to our aiming point. At least, it would be a right angle if we were looking dead on.

Regarding perspective/rotation, I believe the theory I posted still holds up since everything would be calculated in relation to the points seen. The angle from one point to another shouldn't matter. The distance from one point to another only matters in so far as it will inform our scaling.

The data is output as a set of XY coordinates for each of the tracked points 1024x1024 grid with 0x0 being top left and 1023x1023 being bottom right. Untracked points report as 1023x1023.
Ok, so center would be x511.5/y511.5 and that would always be our aiming point relative to the other points?

if we do all of our calculations assuming a 1024x1024 grid to match the output of the wii camera, that would help keep the math simple and we'd only need to divide the final coordinates by 4 to get our final answer.
I think we're still going to have to do scaling to account for all possible angles/rotation.

Theoretically we could avoid having to use A/D converters as well if we were to role this into it's own JVS I/O board
This is a good idea, and something I'm comfortable with. :P


So here's how I see things with the new information:
Get coordinates for A, B, C.
Calculate coordinates for D as the midpoint between A and B.

Aiming point is always x511.5,y511.5 relative to the other points.

Get distance from A to B to establish X scaling:
Distance X = Square Root((Bx-Ax)^2 + (By-Ay)^2)

Get distance between D and C to establish Y scaling:
Distance Y = Square Root((Cx-Dx)^2 + (Cy-Dy)^2)

All of that should work regardless of the angle of view relative to the screen. However, due to perspective, we aren't necessarily dealing with true right angles. Just as no one will be holding their gun perfectly upright/level, no one will have a spot-on perspective where the grid we're establishing has all parallel horizontal lines in relation to the camera's view.

Technically, 3 points are not enough to establish perspective; I believe you need at least 4.

But let's ignore that for the moment and assume that everything can be interpolated well enough assuming right angles and 3 points. It may be accurate enough without having to go overboard into crazy calculations.

Essentially, though, we've got the camera with its own grid viewing the screen (skewed as it may be by perspective), which has another grid. In the end, I think we're kind of transposing one grid onto the other.

Assuming right angles and spot on perspective, we can interpolate all other information like the aiming point in relation to the game screen's grid that has been established, but I question whether or not it would be accurate enough to work well. Maybe it would.
 
So this is what I've come up with for calculations so far... there are two dimensions in the middle of the calculation that I'm having a difficult time wrapping my head around, but other than that, here is some semi-pseudo code as well as an example photo to understand what the calculations represent.

This is based on the assumption that there are 2 sensors along the top, placed about 1/3rd of the way in from each side; these are LEDs a and b respectively.
Also, for ease of calculation, we assume that there is 1 sensor along the bottom placed directly below the top left sensor, about 1/3rd of the way in from the left side; this is LED c.
This greatly simplifies the calculations as opposed to placing it in the center bottom, and it also makes it easier to adjust these calculations if/when we add a 4th LED, as we'll just need to rearrange some variables instead of rewriting half the calculations.


//input points
ax //point a X pos
ay //point a Y pos
bx //point b X pos
by //point b Y pos
cx //point c X pos
cy //point c Y pos


//calculated points
tlx //top left of screen X pos
tly //top left of screen Y pos
trx //top right of screen X pos
try //top right of screen Y pos
blx //bottom left of screen X pos
bly //bottom left of screen Y pos

wii_gun.jpg


//if we were placing the LEDs 1/5th of the way from either side the calculations would look like those below
//calculate top left and right points
hrise = by-ay
hrun = bx-ax
tlx = ax-(hrun/2)
trx = bx+(hrun/2)
if (hrise == 0){
tly = ay
try = ay
} else {
tly = ay-(hrise/2)
try = by+(hrise/2)
}


// by placing them a 3rd of the way in instead there's an equal distance from the left edge as there is between the two LEDs which greatly simplifies the calculation
//calculate top left and right points
hrise = by-ay
hrun = bx-ax
tlx = ax-hrun
trx = bx+hrun
tly = ay-hrise
try = by+hrise


//calculate the xskew angle
xskew = 90-arctan(hrise/hrun)


//calculate the x scale
if(hrise == 0){
xscale = hrun*3
} else {
xscale = sqrt(((hrise*3)^2) + ((hrun*3)^2))
}


//calculate the bottom left point
vrise = cy-ay
vrun = ax-cx
blx = tlx-vrun
bly = tly+vrise


//calculate the y scale
if(vrun == 0){
yscale = vrise
} else {
yscale = sqrt((vrise^2) + (vrun^2))
}


//calculate the yskew angle
yskew = arctan(vrun/vrise)


skew = xskew+yskew


xdist = ?
ydist = ?


if(xdist<0 or ydist<0){
//off screen
} else {
finalxval = (xdist/xscale)*255
finalyval = (ydist/yscale)*255
}


So basically, I need to figure out how to calculate xdist and ydist, and this should work.

I know it's possible, since my CAD program can figure it out given the information we have; I just don't know how it's calculating it :D

It looks like finding the xdist and ydist requires some parallelogram formulas as opposed to simple right triangle formulas, so I'll need to do some research. Getting the angle and the diagonal of the parallelogram should be straightforward, and I'd imagine there's a formula that can be reworked to determine the side lengths from that.
 
GOT IT!... I was overthinking it; I just needed to use the law of sines. Here are the necessary calculations.

We'll need to test this out for various scenarios; weird things can happen when 0s end up in denominators or you take the square root of a negative, so we'll need to run a bunch of test cases and add some logic to avoid problematic edge cases. Some of this can likely be reduced down a bit too; I was overly pedantic just so I could follow it... but this should work.

C:
//input vars
ax //point a X pos 0-1023
ay //point a Y pos 0-1023
bx //point b X pos 0-1023
by //point b Y pos 0-1023
cx //point c X pos 0-1023
cy //point c Y pos 0-1023

//output vars
analogx // the x position of the gun 0-255
analogy // the y position of the gun 0-255

//calculate top left point of the "screen"
hrise = by-ay
hrun = bx-ax
tlx = ax-hrun
tly = ay-hrise

//calculate the x scale
if(hrun <=0){
    //off screen
    analogx = 0
    analogy = 0
    exit
}

if(hrise == 0){
    xscale = hrun*3
} else {
    xscale = sqrt(((hrise*3)^2) + ((hrun*3)^2))
}

//calculate the xskew angle
xskew = 90-arctan(hrise/hrun)

//calculate the y scale
vrise = cy-ay
vrun = ax-cx
if(vrise <= 0){
    //off screen
    analogx = 0
    analogy = 0
    exit
}

if(vrun == 0){
    yscale = vrise
} else {
    yscale = sqrt((vrise^2) + (vrun^2))
}

//calculate the yskew angle
yskew = arctan(vrun/vrise)

//calculate shot location in relation to top left point
srun = 512-tlx
srise = 512-tly

//calculate length of diagonal from the top left point to the shot location
d = sqrt((srise^2) + (srun^2))

//calculate the 3 angles of the resultant triangle
d0 = arctan(srun/srise)
da = yskew+d0
db = xskew-d0
dc = 180-da-db

//use law of sines to calc xdist and ydist
// d/sin(dc) = ydist/sin(db) = xdist/sin(da)
ydist = (d*sin(db))/sin(dc)
xdist = (d*sin(da))/sin(dc)

if(xdist<0 or ydist<0 or xdist > xscale or ydist > yscale){
    //off screen
    analogx = 0
    analogy = 0
} else {
    analogx = (xdist/xscale)*255
    analogy = (ydist/yscale)*255
}
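For anyone who wants to experiment, here is a rough, runnable Python translation of the pseudocode above (same variable names, angles in degrees, aim point fixed at the camera center 512,512). It deliberately keeps the same known gaps, e.g. there is no guard yet for srise = 0, per the edge-case caveats already mentioned:

```python
import math

def gun_to_analog(ax, ay, bx, by, cx, cy):
    """Three LED camera points (0-1023 grid) -> (analogx, analogy) in 0-255.
    Returns (0, 0) when the aim point falls off screen."""
    # extrapolate the top-left point of the screen from the two top LEDs
    hrise, hrun = by - ay, bx - ax
    if hrun <= 0:
        return 0, 0
    tlx, tly = ax - hrun, ay - hrise

    # x scale: full screen width is 3x the gap between the top LEDs
    xscale = hrun * 3 if hrise == 0 else math.hypot(hrise * 3, hrun * 3)
    xskew = 90 - math.degrees(math.atan(hrise / hrun))

    vrise, vrun = cy - ay, ax - cx
    if vrise <= 0:
        return 0, 0
    yscale = vrise if vrun == 0 else math.hypot(vrise, vrun)
    yskew = math.degrees(math.atan(vrun / vrise))

    # shot location (camera center 512,512) relative to the top-left point
    srun, srise = 512 - tlx, 512 - tly
    d = math.hypot(srise, srun)                 # diagonal to shot location
    d0 = math.degrees(math.atan(srun / srise))  # NOTE: srise == 0 unguarded

    # angles of the resultant triangle, then law of sines for the distances
    da, db = yskew + d0, xskew - d0
    dc = 180 - da - db
    ydist = d * math.sin(math.radians(db)) / math.sin(math.radians(dc))
    xdist = d * math.sin(math.radians(da)) / math.sin(math.radians(dc))

    if xdist < 0 or ydist < 0 or xdist > xscale or ydist > yscale:
        return 0, 0
    return round(xdist / xscale * 255), round(ydist / yscale * 255)
```

A quick head-on check: with the top LEDs at (400, 300) and (600, 300) and the bottom LED at (400, 600), aiming at the camera center lands roughly a third of the way across and two thirds of the way down the extrapolated screen.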
 
Thinking about this a bit more, the easiest way to add a 4th LED and allow it to work with any 3 visible would be to ignore the bottom right LED if all 4 are available and, if only 3 are available, calculate the location of the missing point, then run through the rest of the calculations as normal. That way, if you're so close to the screen that you lose a point, it will still work.
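The reconstruction step suggested here falls out of basic parallelogram geometry (opposite corners share the same midpoint). A sketch, assuming the four LEDs form an approximate parallelogram in the camera's view:

```python
# Sketch of rebuilding a missing 4th corner from the other three, since
# opposite corners of a parallelogram share the same midpoint.
# Assumes the four LEDs form a roughly parallelogram-shaped arrangement.

def reconstruct_corner(p, q, r):
    """Given three corners where q and r are adjacent to the missing
    corner and p is opposite it, return the missing corner."""
    return (q[0] + r[0] - p[0], q[1] + r[1] - p[1])

# Top-left, top-right, and bottom-left visible; bottom-right rebuilt:
reconstruct_corner((100, 100), (500, 110), (90, 400))  # -> (490, 410)
```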


We also might need some logic before all of this to determine which point is which; I'm not sure how the Wii camera decides what order to output the points.
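One simple candidate for that logic, assuming the 3-LED layout (two on top, one below) and a gun that isn't rolled too far, is to sort by height first and then left to right:

```python
# One possible way to label unordered camera points, since the camera's
# output order isn't guaranteed: with two LEDs on the top row and one
# below, the lowest point is C, and the remaining two are A and B
# ordered left to right. Assumes the gun isn't rolled near 90 degrees.

def label_points(points):
    """Return (a, b, c) from three unordered (x, y) camera points,
    with y increasing downward as in the camera's coordinate grid."""
    pts = sorted(points, key=lambda p: p[1])  # sort top to bottom
    c = pts[2]                                # lowest point on screen
    a, b = sorted(pts[:2])                    # leftmost of top pair is A
    return a, b, c

label_points([(610, 95), (350, 480), (205, 102)])
# -> ((205, 102), (610, 95), (350, 480))
```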
 
So I woke up from a dead sleep this morning and realized that all of my calculations were for an orthographic projection of the screen as opposed to a perspective projection of the screen.

What does this mean? Maybe nothing. Calculating the perspective projection will be a bit more mathematically intensive, and it might not even make a noticeable difference in the final tracking. The biggest differences would come if someone stood far off to the side, but chances are the gun would lose track of the LEDs before the tracking was noticeably off.

I have some ideas of how the perspective calculations might work. I'll work out the details later this week when I have some free time. I had to order a Wii Remote since I don't currently have one, so maybe I can build a test rig and see if there are any tangible differences in accuracy/efficiency.
 
That's what I was referring to above with perspective. Maybe I'm not using the correct terms, but in art, "perspective" means that parallel horizontal lines on the same plane will converge at a vanishing point relative to the point of view and will not be parallel in the resulting image.

I was thinking about it, and given a constant shape of the screen, it may be possible to calculate perspective with only 3 points, but I think you'd ideally have 4: 2 along the top and 2 along the bottom to establish your perspective.
 
Yes, typical art "perspective" with a vanishing point is what you'd call a perspective projection in engineering (as opposed to isometric/orthographic views, which keep parallel lines parallel).

You can still calculate it with 3 points; it's just way easier for our human brains to think about it using 4 points, since you can mentally draw four lines to make a square...

The calculations are likely going to be more intensive for a true perspective projection, but I'm not convinced that the difference in the end result is going to be measurable, since we're talking about a "low resolution" 256x256 coordinate output with the "viewing angle" only a few degrees off perpendicular at most. I'd suspect the resultant difference between the calculation based on the orthographic projection and the perspective projection may not be more than 1 or 2 points in the end coordinates, and after jitter in the camera, rounding in the calculations, noise in the analog output, and reduction due to calibration, you may not see any differences at all. Ultimately the orthographic calculation above may end up being better anyway, since it will be faster to compute.
 
I agree. I think it may end up being good enough with the simpler calculations, and at least better than the current Wii remote implementation.

The great thing about this, though, is that you don't have to stay married to one method. You can get it working one way and experiment with others.

I'd say this is ripe for an Arduino-based solution, though if your end goal is to use a complete Wii Remote, I'm not sure how to approach that. If you remove the camera and interface with it directly, it would be easier, I would think.

I personally am not concerned about a tethered gun as it would be more arcade-like in my opinion, but if you're going for a different setup, wireless guns could be cool.

Also, something to consider: if you're worried about a 2nd player gun taking too long for a single processor to manage, the work could pretty easily be split up over a few dedicated devices that communicate back to a central controller. For example, each gun could have a dedicated Arduino that passes the necessary info to a dedicated JVS I/O Arduino, maybe over TTL serial or some other communication protocol.

I'm kind of excited about this solution for emulation purposes, too. It would be pretty easy to utilize this solution for a PC and get a better setup than anything currently available. I've got analog working over HID standard for my MEGA JVS and translating the movement to mouse coordinates wouldn't be too much of a stretch.
 