The routine that updates the channel registers and drives the channels was moved into the main routine and done in software. For 8-bit resolution of the phase, it would need to run 255 times in each of the 120 half cycles per second. If done via an interrupt timer, it would interrupt execution 30,600 times a second! Realizing this, and knowing I had only 5 MHz to work with, shifted my thinking: this routine WAS the main routine, and the other routines were interrupting it. Not the other way around.

The number-one job of my code was to receive a value and hold a light's dim level based on that value. The hardware zero-cross interrupt and the phase offset were required to time this, but once running, the dim routine could be handled in software. A complete execution of the “main” dim code would happen once per zero cross and would not restart until the next cross. Since the zero-cross and phase-offset interrupts would only fire once per dim-code execution, I wasn't TOO concerned about timing my code to account for those interruptions, but I knew the UART code would be more chaotic.

Because incoming UART transmissions would interrupt my main software PWM code at random, I had to account for this and keep the timing on track. I originally had the UART set up for 9600 baud, which, translated to bytes (9600 / 8, ignoring start and stop bits), would interrupt my code at most 1200 times a second. Since each half cycle lasts 1/120th of a second, that amounts to only 10 bytes per run of my main routine.

I had the PIC on a 20 MHz crystal, but the PIC divides that clock by four internally, so I only ended up with 5 MHz of execution speed. My main routine had to execute 120 times a second within that budget. Dividing the 5,000,000 cycles by the 120 half cycles left me 41,666.666~ cycles to play with. To be able to draw to 255 timed positions in the phase, I had to execute this code… 255 times in that space. So 41,666.666 / 255 equals… 163.3987 clocks. Let's just say 163.  ;-)
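The clock-budget arithmetic above can be sketched in a few lines of Python (the constants are the ones from the text; this is just the back-of-envelope math, not firmware code):

```python
# Clock budget for the software PWM loop (values from the text above).
CRYSTAL_HZ = 20_000_000
INSTR_HZ = CRYSTAL_HZ // 4        # PIC runs one instruction cycle per 4 clock ticks -> 5 MHz
HALF_CYCLES_PER_SEC = 120         # 60 Hz mains, two zero crossings per cycle
PHASE_STEPS = 255                 # 8-bit phase resolution

cycles_per_half = INSTR_HZ / HALF_CYCLES_PER_SEC   # ~41,666.67 cycles per half cycle
cycles_per_step = cycles_per_half / PHASE_STEPS    # ~163.4 cycles per phase step

print(cycles_per_half, cycles_per_step)
```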

As mentioned previously, the UART would only interrupt the chip 1200 times a second, which amounts to once per 4166.666~ cycles. Thus, I would only need to worry about offsetting the delay it caused once per the 163-cycle window I had to work with. I later upped the baud rate to 19200 to allow 20 bytes per half cycle, and since 11 bytes are required to change the dim level (as described in the README) and the dim level can be changed once per half cycle, I was able to achieve a dim refresh rate of 120 Hz.
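The UART throughput numbers work out the same way (again using the text's simplification of 8 bits per byte, ignoring start and stop bits):

```python
# UART throughput vs. the dimmer's half-cycle budget (numbers from the text).
INSTR_HZ = 5_000_000          # effective PIC instruction rate
HALF_CYCLES_PER_SEC = 120     # mains half cycles per second

def bytes_per_half_cycle(baud, bits_per_byte=8):
    """Bytes arriving during one mains half cycle."""
    return baud / bits_per_byte / HALF_CYCLES_PER_SEC

def cycles_between_bytes(baud, bits_per_byte=8):
    """Instruction cycles between consecutive byte arrivals."""
    return INSTR_HZ / (baud / bits_per_byte)

print(bytes_per_half_cycle(9600))    # 10 bytes per half cycle at 9600 baud
print(cycles_between_bytes(9600))    # ~4166.67 cycles between bytes
print(bytes_per_half_cycle(19200))   # 20 bytes -> room for an 11-byte dim command
```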

To optimize everything, I used the friendly opcode sheet below as I coded:


I timed the routines accordingly, leaving only the dimming “TIMER_LOOP_ROUTINE” and the routine for processing the UART bytes running in software.

;MAIN Loop
START
    call    TIMER_LOOP_ROUTINE  ; 80 clocks (typical) or 96 clocks on the last loop (including call)
    call    UART_ROUTINE        ; routine for RS-232, 82 clocks (including call)
    goto    START               ; + 2 clocks (= 164)

I timed the routines to run at consistent speeds. The timer loop runs at 80 clocks per draw, plus an additional 16 clocks on the last draw cycle. Those extra clocks are used to load the new dim values to be used after the next zero cross.

Padding delays are added throughout the routine wherever necessary to ensure it always runs at a predictable speed. The same is done with the UART routine, but differently: the UART code takes 82 clocks including the call, whether or not a byte was received.

In actuality, the UART routine itself only uses 49 clocks, but if a byte was received, the receive interrupt consumes another 31. I factor this into the timing.
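That balancing trick can be shown with a toy model. The cycle counts are the ones quoted above; the padding logic is illustrative, not the actual firmware:

```python
# Toy model of the cycle balancing in the UART routine: the idle path is
# padded so both paths cost the same 82 clocks (counts from the text).
CALL_OVERHEAD = 2     # the call instruction itself
UART_BODY = 49        # clocks the routine's own code uses
RX_INTERRUPT = 31     # extra clocks consumed when a byte arrives

def uart_cost(byte_received: bool) -> int:
    cycles = CALL_OVERHEAD + UART_BODY
    if byte_received:
        cycles += RX_INTERRUPT   # interrupt fires, consuming 31 clocks
    else:
        cycles += RX_INTERRUPT   # ...so the idle path burns 31 clocks of padding
    return cycles

assert uart_cost(True) == uart_cost(False) == 82
```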

With this code in place, I began work on the client code and whipped up a Python script that would randomly send dim values to the box:
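The original script isn't reproduced here, but a minimal sketch of such a client might look like this. The serial port name and the 11-byte packet layout are placeholders, not the real command format from the README:

```python
import random
import time

def build_packet(channel: int, level: int) -> bytes:
    """HYPOTHETICAL 11-byte layout; the real command is defined in the README."""
    return bytes([channel, level]).ljust(11, b"\x00")

def main(port: str = "/dev/ttyUSB0", baud: int = 19200) -> None:
    import serial  # pyserial; imported here so the helper works without it
    with serial.Serial(port, baud) as link:
        while True:
            # Send a random level to a random channel twice a second.
            link.write(build_packet(random.randrange(8), random.randrange(256)))
            time.sleep(0.5)

if __name__ == "__main__":
    main()
```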



Next up, Music Sync, and Home Automation. :D