Apologies for the inconsistent blogging schedule – I’ve been totally swamped with work (as usual), but some things take more time than expected as I keep thinking of new things to try. This post is a follow-up to my previous post where I built my first working single-stage unregulated square-wave power inverter from junk-box bits. Realising it’s not real science unless you can repeat it, I decided to try building another, but this time trying to go all-out with the bells and whistles I had mentioned before, using slightly different junk-box bits and actually trying out a downlight transformer, as a bit of a tribute to where I started from. I also take the time to do some efficiency measurements on the inverters to see how they fare and try to experimentally verify just how a randomly-chosen switch-mode power supply behaves under different line voltages and input waveforms.
Version 2 – Downlight Transformer with All the Extras!
Buoyed by the unexpected success of my crude inverter design, I decided I would try making another, but this time, I would have all the trimmings.
This time, I would stuff in an extra MOSFET for reverse polarity protection, another MOSFET to allow for easy power on/off control (similar to the remote control terminal on some inverters), one less capacitor (but better distribution) and an additional diode for the Digispark so that its capacitor doesn’t drain back into the switching stage when power is removed. The switching stage would be made of slightly different MOSFETs I had left over from my tests – slightly less efficient units this time – while the two BSS138 MOSFETs were replaced with my counterfeit IRF3205 MOSFETs, which I would otherwise not have a good use for. It is overkill, but at least on the test boards, they could all be integrated into the one stack.
Instead of using Dupont wire, this time I would just use ribbon cable wire and solder everything in directly. After all, I know it should work, right? This time, I also managed to avoid the hot glue mess altogether for a slightly neater look.
For additional energy-frugality, I decided to depopulate the LEDs from the Digispark entirely, but otherwise, everything is very similar to the other one, except for one key change …
… this time I would be using an actual 12V MR16 halogen downlight transformer.
Conceptually, this would be a good thing because it has a 12V secondary, which should make for a better match than the toroid with its 16V secondary. This one has a 240V primary as well, which should mean plenty of voltage, and the 50VA rating is nothing to sniff at.
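As a quick sanity check of why the 12V secondary looked like the better match, the ideal turns-ratio arithmetic works out as below (nameplate figures only – a real transformer will sag well below this under load):

```python
# Rough (ideal-transformer) estimate of the no-load output when driving
# the secondary of a mains transformer in reverse with a square wave.
# All figures are the nameplate ratings mentioned in the text; real
# output will be lower due to winding resistance and leakage.

def reverse_drive_output(primary_v, secondary_v, drive_v):
    """Ideal output voltage when drive_v is applied to the secondary."""
    turns_ratio = primary_v / secondary_v
    return drive_v * turns_ratio

# Downlight transformer: 240V primary, 12V secondary, driven from a 12V rail
print(reverse_drive_output(240, 12, 12.0))   # 240.0 V in the ideal case

# Toroid from the previous build: 240V primary, 16V secondary
print(reverse_drive_output(240, 16, 12.0))   # 180.0 V in the ideal case
```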
In reality, the laminated E-I construction and design for downlight application seems to have made it less than ideal.
The first thing I did was start tuning the dead time and watching what happens when the unit is under no load. Here is where things started getting a little interesting. It seems that the voltage on the coil rapidly reverses as soon as drive is removed and even shoots above the rail voltage. I thought that by tuning the dead time a bit longer, I could take advantage of this natural “rebound” tendency and have the gate driven just as the voltage on the coil was starting to collapse, resulting in that slightly mangled square wave in orange.
The last chart had the current trace (in purple) from the Holdpeak HP-605C in DC mode, which is low-pass filtered. In the AC mode, the current trace shows that even though the gate is on, the current flow is nearly nothing until just near the end of the cycle, where it rises and peaks quite sharply.
Under load, things were slightly more in line with expectations – the output had a strange “stair-step” shape but at least the current waveform was less peaky. Just what is going on?
Understanding the Problem and Improving Efficiency?
As it turns out, this would become yet another practical lesson in transformers. The toroid is famous for inrush current issues, but the laminated E-I transformers are not. That is a big hint. In fact, this laminated E-I looks quite similar (superficially) to a magnetic ballast for a fluorescent tube.
These hints indicated to me that the design of this downlight transformer has a high winding inductance compared to the previous toroidal transformer. The high inductance resists current change, thus reducing inrush current on initial switch-on, but also means that it has quite a back-EMF “kick” when current flow is interrupted.
In our case, the back-EMF generated by the field collapsing comes both rapidly and strongly, creating a voltage even higher than the supply rails. Because it is higher than the supply rails, it is only natural that when connected in the reverse (for the next half-cycle), little current would flow into the transformer.
Because of this, my approach to having a long dead time is actually completely the wrong thing to do.
This is because the coil may be at a higher voltage, which means it would want to force current back into the supply rails, where it would be (partially) absorbed by the capacitors. Because the MOSFETs have a body diode, the current flows through those if the gate is not enabled, stressing the body diode and dropping 1V (or so) across it. This creates heat and reduces efficiency. As a result, shortening the dead time to “meet” the natural rebound of the coil seems to provide the best compromise.
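To put a rough number on that body-diode loss, a back-of-envelope sketch – the 1V forward drop is from above, but the current and conduction fraction are purely illustrative assumptions, not measurements:

```python
# Back-of-envelope estimate of the power burned in a MOSFET body diode
# while it carries the back-EMF current during the dead time.
# v_forward is from the text; i_avg and dead_time_fraction are
# illustrative assumptions only.

def body_diode_loss(v_forward, i_avg, dead_time_fraction):
    """Average power dissipated in the body diode over a full cycle."""
    return v_forward * i_avg * dead_time_fraction

# e.g. 1V drop, 2A average diode current, conducting 10% of the cycle
print(body_diode_loss(1.0, 2.0, 0.10))  # 0.2 W per device
```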
Another concern was the fact that the secondary was actually rated at 11.5V. The reason for this is to run the lamp at slightly reduced power for longer lamp life. Just because it’s a downlight transformer for use with 12V lamps doesn’t mean it has to have a 12V output. Likewise, the fact that it has a limited short-circuit current and is always expected to run with an operational load suggests design compromises were made in the transformer, so unloaded efficiency may be poor regardless.
One concern would be that if fed with higher voltages (e.g. a fully charged lead-acid at 14.4V) in an unloaded condition, the transformer core could saturate and that would result in high current flow and wasted energy as heat. But there is one variable I can play with that could help this – line frequency.
I came up with the idea as I’ve often heard the saying that transformers wound for 60Hz tend to saturate when running on 50Hz and may overheat. I figured the opposite should also be the case – transformers rated for 50Hz running at 60Hz should have greater margin from saturation. As most modern loads are not all that sensitive to line frequency (and can often operate on both 50/60Hz), I decided to change the inverter’s code to make a 60Hz waveform instead.
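Ignoring waveform shape, Faraday’s law gives peak core flux proportional to V/f, which lets me sanity-check the margin. The voltages below are nameplate and battery figures from the text; the scaling itself is the standard ideal-transformer result:

```python
# Peak core flux in a transformer scales as B ~ V / f (Faraday's law),
# so at the same drive voltage a 50Hz-wound transformer run at 60Hz
# sees only 50/60 ~ 83% of its design flux -- extra saturation margin.

def relative_flux(v, f, v_design=12.0, f_design=50.0):
    """Peak flux relative to the design point (ideal scaling only)."""
    return (v / v_design) * (f_design / f)

print(round(relative_flux(12.0, 60.0), 3))  # 0.833 -> ~17% below design flux
print(round(relative_flux(14.4, 50.0), 3))  # 1.2   -> 20% over, risking saturation
print(round(relative_flux(14.4, 60.0), 3))  # 1.0   -> back to the design flux
```

So running at 60Hz roughly cancels out the extra flux from a fully charged lead-acid battery – at least in this idealised picture.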
Originally, the long-dead-time resulted in output that looked like this …
… moving to 60Hz at a short dead time creates a nicer-looking output waveform without the step caused by “cutting out” the body diode loss. Even though the current profile doesn’t look all that different, I can definitely tell the difference, as the 50Hz long-dead-time version was idling at about 13W unloaded, but the 60Hz short-dead-time version idles at about 6.5W unloaded. Not brilliant, but much better. It is still somewhat unfriendly to the power supply: the back-EMF being higher than the supply voltage causes a kind of “resonance” effect where current is drawn in short spikes towards the end of the cycle but is “pushed back” at the beginning of the next, like an L-C tank circuit. Because the capacitance I have is not enough to match the transformer’s L, the power supply has to absorb some of this current. A poor power factor in action?
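For concreteness, the two drive regimes differ only in the half-cycle period and the dead time subtracted from it. The dead-time figures below are illustrative assumptions (I haven’t quoted my exact timer values); only the 50Hz/60Hz periods come from the text:

```python
# Drive timing for the two regimes: each half-cycle is (1/f)/2 long,
# with the gate held off for the dead time at the start of it.
# Dead-time values here are illustrative assumptions only.

def half_cycle_timing(freq_hz, dead_time_ms):
    half_ms = 1000.0 / freq_hz / 2.0
    on_ms = half_ms - dead_time_ms
    duty = on_ms / half_ms
    return half_ms, on_ms, duty

print(half_cycle_timing(50.0, 2.0))  # (10.0, 8.0, 0.8) -> long dead time
print(half_cycle_timing(60.0, 0.5))  # half-cycle ~8.33ms, ~94% duty
```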
The Magic Smoke Escapes!
An electrical engineer can easily be humbled by a whiff of the “magic smoke”. It usually happens just as we think we know what’s going on … I haven’t smelt it in a while but this project definitely put me back in my place!
The problem was at the reverse polarity protection MOSFET, which didn’t seem to last long in-circuit. The first time it happened was just as the unit was being powered on, as I switched the power supply channel on. The power supply immediately ran into the current limiter – the MOSFET had failed as a dead short across all legs. Because I had the current limiter set low, I didn’t liberate any smoke, but I thought to myself this might just be a fluke.
After all, my “abuse” of a MOSFET for reverse polarity protection in this way is quite stressful. For one, its gate is very exposed, vulnerable to electrostatic discharge damage. I could have added a series resistor and back-to-back Zener diode arrangement to clamp the voltage, or a capacitor to reduce the chances of the voltage flying up, but I did not, as I have a relatively limited “junk box” to pick from. So I made a guess that perhaps this MOSFET was just ESD-damaged and slapped another one on there.
Replacing the MOSFET did the trick and for the next series of experiments, about 20 power cycles, everything was just fine. But it wasn’t just fine … the problem came back with a vengeance!
The second time, I was not so lucky. The magic smoke emanated as a steady stream, although to be fair, part of this was probably the burning (thin) gate drive trace, as again, this MOSFET went short across all three legs, trying to short the 12V line to ground. With the current limiter set to 10A, that thin trace and wire was trying to short out the supply, turning into a nice resistor in the process.
I decided it wasn’t worth the effort to fix this now and removed the reverse polarity MOSFET board from the stack. It had definitely saved me once during my earlier experiments when I had reverse-connected the unit by accident, but as it currently stands, it is clear the MOSFET wouldn’t last in this position without at least some changes to the circuit, requiring parts that aren’t in my junk box.
I suspect the reason for the troubles may be several things – perhaps I somehow violated dv/dt on the MOSFET during power-up, or more likely, the transient of the capacitors charging on start-up may have sent too much current through the body diode as the rail started to come up, destroying the MOSFET entirely. Or perhaps it might still be ESD, or a combination of the above. But I suppose I know now not to assume that this should “simply work” … because it did only for a short while.
Inverter Efficiency Shootout
This brings me to the point of efficiency. When I built the inverter with the toroid, the idle current was so low that I suspected it had excellent efficiency, but I did not test it. I suspected the single stage conversion with no regulation would be fairly efficient, in part because I had excellent low-loss MOSFETs to do the switching and the microcontroller power consumption was reasonable. But I should really prove it, right? Otherwise, I’d just be better off using my generic commercial (regulated) inverters. I also know that the downlight transformer inverter, despite having slightly worse MOSFETs, is no contender for an efficiency award in part because the transformer itself is quite lossy so idle power is astronomical. How would it compare with the others?
This is where I was glad that I had some SCPI automation capability, as this experiment involved using the Tektronix PA1000 Power Analyzer and the Rohde & Schwarz HMP4040 Power Supply. To this, I added the Yamabishi variable transformer I managed to grab at a recent ham-fest to allow me to vary the load on the inverter, which was a 150W halogen flood-light.
The connections are as in the above “back-of-the-envelope” sketch. Essentially, the HMP4040 powers the inverter and logs the power consumption. The AC voltage and current are monitored by the PA1000, which computes power. The power consumed by the variac and load is measured as the output power, and the ratio is used to compute efficiency. Unfortunately, as I don’t have an automated variac, I have to slowly hand-sweep the load up and down – I decided to sweep it across the full range or up to 80W (whichever comes first, as I’m mostly interested in relatively light loads) and then back down to zero to account for sample-timing-induced errors. The points would be plotted scatter-wise to give a visual impression of the efficiency across the load conditions.
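The post-processing behind the scatter plot reduces to a simple ratio per sample pair. A minimal sketch – the sample values below are made-up illustrative numbers, not my measurements:

```python
# Pair each input-power sample from the DC supply with the corresponding
# output-power sample from the power analyser, then compute efficiency
# for each point of the hand-swept load scatter plot.
# The sweep data below is made-up for illustration.

def efficiency_points(samples):
    """samples: list of (p_in_watts, p_out_watts) pairs -> (load, eff%)."""
    return [(p_out, 100.0 * p_out / p_in) for p_in, p_out in samples if p_in > 0]

sweep = [(8.0, 1.5), (22.0, 18.5), (50.0, 44.0), (90.0, 78.0)]
for load, eff in efficiency_points(sweep):
    print(f"{load:5.1f} W out -> {eff:4.1f}% efficient")
```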
Inverters tested include the crude inverter based on a toroidal transformer, the crude inverter based on a downlight transformer (as above) in both 50Hz long-dead-time and 60Hz short-dead-time regimes, a generic 150W modified sine wave inverter, an Aldi 300W modified sine wave inverter from many years ago, the repaired generic 500W modified sine wave inverter and the HIP-300 pure sine wave inverter.
Unsurprisingly, the home-built inverters had lower output voltage and thus could not reach the full 80W intended test maximum. However, it is clear that the home-built toroidal-transformer-based inverter does have a lead over all tested inverters below 22W due to its low quiescent current draw. Its efficiency peaked at about 90%. The efficiency degrades as load increases, as transformer losses are likely to increase as it approaches its rated loading.
By contrast, the commercially available inverters generally had efficiency improve with increasing load, as the overhead of quiescent current is shared across more loads, and because they are probably designed with larger loads in mind. Their efficiencies similarly peak in the 85-90% region at a load of 80W. Their efficiencies at lower loadings (e.g. 15W for a single light globe) vary significantly between about 50 to 75%.
The crude downlight inverter in this post fared poorly compared to the toroid but was still not as bad as the HIP-300 and Aldi 300W inverters at light loads (below about 25W) in the 60Hz short-dead-time configuration. That configuration reduces the quiescent current resulting in the uplift over the 50Hz long-dead-time configuration at the low-load end, but the higher duty cycle also means better voltage maintenance at higher loads resulting in the ability to serve a larger load (even though it is technically overloaded above 50W). So perhaps if the load is moderate, the downlight transformer isn’t completely a lost cause.
Another way of expressing the same information is in a power-loss graph which shows how much power is being “lost” between the input and output. Interestingly, the losses for the single-stage inverters go up significantly as load increases, curving upwards almost exponentially. The trends for the other inverters (which have much more headroom) are relatively flat, which is to be expected. At one stage during the test, it seems my downlight transformer inverter was burning up 50W – perhaps mostly in the transformer itself, which would (eventually) likely cause overheating and opening of the thermal fuse. Such a crude circuit as mine doesn’t protect against overheating or overloading, so be careful!
As the data is collected anyway, I thought it would also be good to plot the voltage regulation as a function of load. The toroid operates at a lower voltage primarily because of the dead-time optimisation. The downlight version has a different dead-time configuration – but even the short one sees poor voltage regulation overall, with significant voltage droop under load. Without any means to regulate the voltage with the single stage, that is a compromise that will have to be accepted, but can serve to reduce power consumed in some loads (e.g. incandescent bulbs) to keep everything happy. Of course, the two-stage commercial inverters had a relatively steady voltage with only minor droop by comparison.
Does Mains Waveform and Voltage Matter (to a switch-mode power supply, within limits)?
This leads me to the next logical question (or two) which I’ve had for a while. If my inverter is efficient, but unregulated, and produces lower voltages than ordinary mains, does this matter to the load? Similarly, if my waveform is square, will this affect my load?
Such a question is practically impossible to answer in the general case, as the variations of loads out there are immense. However, knowing that the vast majority of devices have “universal” switch-mode power supplies (100-240V, 50/60Hz) helps narrow this down somewhat. The answer could be vital in understanding whether my “efficient” toroid may be a waste of time – if it is more efficient at generating AC but causes my load to operate less efficiently, I could easily lose all of my gains!
Conventional wisdom would dictate that the waveform is not of major importance to a switch-mode power supply. On the whole, one of the first things they do is rectify incoming AC into DC to charge a main capacitor and then “re-chop” this into AC to feed their own transformers at whatever frequency they please. Any differences are likely to be slight, in part due to non-linear losses that may occur because of how the waveform varies over time.
The situation with voltage, however, is perhaps a little less clear-cut and could very much depend on the design and components used in the power supply itself. This is because changing the voltage is likely to increase some losses while decreasing others and the net effect is hard to predict.
For example, we know that in order to serve a load, a switch-mode supply would draw more current at a low voltage, or less current when given a higher voltage. Because of this flexibility in accommodating input voltages, the output remains steady. However, in that situation, at a lower voltage, the increased current flow necessary to service the output likely increases the ohmic losses that are experienced through switching elements (e.g. MOSFETs), transformer windings, inductors, board traces and wires. This could, in turn, generate more heat that would increase those losses even further. Higher voltages would, conversely, offer the potential to reduce these losses.
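A rough sketch of that scaling: for a fixed output power the input current goes as 1/V, so I²R losses in the input-side path go as 1/V². The 2Ω series resistance here is a made-up illustrative value, and the model ignores everything except ohmic loss:

```python
# For a constant-power load, input current scales as 1/V, so ohmic
# (I^2 * R) losses in the input-side path scale as 1/V^2.
# r_series is an illustrative assumption, not a measured value.

def ohmic_loss(p_out_w, v_in, r_series=2.0):
    i = p_out_w / v_in        # idealised input current for a given load
    return i * i * r_series   # power lost in the series resistance

print(round(ohmic_loss(34.0, 110.0), 3))  # 0.191 W at 110V
print(round(ohmic_loss(34.0, 230.0), 3))  # 0.044 W at 230V -- ~4.4x lower
```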
However, the opposite case is true for certain aspects, for example, the bootstrap circuit for a switch-mode controller. This provides the basic power necessary to run the integrated circuit and is often just a simple linear regulator or even a high-value resistor and shunt regulator. Such simple systems generally consume more power as the voltage increases and will also dissipate more heat, but are vital to the operation of the power supply.
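A toy model shows the opposite trend for such a resistor-fed shunt-regulator supply – the dissipation in the dropper resistor grows roughly with the square of the headroom above the shunt voltage. All component values below are made-up assumptions for illustration:

```python
# Loss in a simple dropper-resistor + shunt-regulator housekeeping
# supply: the resistor burns (V_bus - V_shunt)^2 / R, which grows as
# the rectified bus voltage rises. Values are illustrative assumptions.

def dropper_loss(v_bus, v_shunt=12.0, r_dropper=47e3):
    i = (v_bus - v_shunt) / r_dropper  # current through the dropper resistor
    return (v_bus - v_shunt) * i       # power burned in the resistor

print(round(dropper_loss(150.0), 3))  # ~0.4 W at a lowish rectified bus
print(round(dropper_loss(325.0), 3))  # ~2.1 W near a 230VAC rectified peak
```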
To see how this would play out in reality using a single device as an example, I devised the following experiment:
This time, I would be measuring the efficiency of an Orico QSE-4U 4-port desktop USB charger. This is scripted similarly to the previous experiment, but now also involves the B&K Precision Model 8600 DC Electronic Load to place a load on the three identical USB ports on the charger (the quick-charge port was left unused). Power from the inverter would go through the variac first, to allow its voltage to be adjusted, before being measured by the PA1000 to determine power input. Power output was measured by the BK8600 as it stepped through currents from 0-8A in 5mA steps. The ratio was used to determine efficiency of the charger.
This arrangement was used with the generic 150W modified sine wave inverter and the HIP-300 300W pure sine wave inverter to provide the two waveforms. Test voltages were set at 60-120V in 10V steps, 140-180V in 20V steps, 220-240V in 10V steps and 260V (or as close as possible). As the square wave was sent into the variac and the load had a terrible power factor, a very prominent buzzing was endured throughout the tests. It is also noted that the poor power factor creates a high crest factor, so much so that the current measurement was done on the 20A shunt in the PA1000 to ensure accuracy. As a result, efficiency measurements for loads below 0.5W were not graphed due to measurement-error-related uncertainty.
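The crest-factor issue can be illustrated numerically by comparing a sine wave against a crude narrow rectangular pulse as a stand-in for the charger’s spiky input current. The 10% pulse width below is an assumption for illustration, not a measurement:

```python
# Crest factor = peak / RMS. A sine wave sits at sqrt(2) ~ 1.414,
# while the narrow charging spikes drawn by a rectifier-capacitor
# front end are much higher -- hence the wider 20A shunt range.

import math

def crest_factor(samples):
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

n = 1000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
# 10%-wide unit rectangular pulse each half-cycle (crude stand-in)
pulse = [1.0 if (k % (n // 2)) < n // 20 else 0.0 for k in range(n)]

print(round(crest_factor(sine), 3))   # ~1.414
print(round(crest_factor(pulse), 3))  # ~3.162 for a 10% duty pulse
```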
The pure sine wave inverter had a lower output voltage, thus it was not possible to push up the output voltage through to 260V. Regardless, in both cases, it seems efficiencies were fairly similar (visually speaking), including the tendency to “ripple” along. Slight dips in the pure sine wave result are due to the inverter’s propensity to make step jumps in output (seemingly for thermal compensation).
Instead, I extracted the peak efficiency at each voltage step, and the average efficiency across all load steps, and plotted those.
This clearly illustrates the negligible efficiency difference as a result of waveform – perhaps this is small enough to be experimental error, but it seems that pure sine wave is ever-so-slightly less efficient and I suspect this could be because of the crest factor causing greater losses in non-linear components even though the RMS value is the same.
It also illustrates that the QSE-4U charger seems to peak in efficiency around 110V and suffers slightly as the voltage is increased. The effect of ohmic losses can be seen below about 90V, when the unit is unable to deliver the 40W test load (5V/8A), although technically the unit is only rated for 6.8A/34W output.
In this respect, it seems that having my crude inverters produce a square-wave output of a lower voltage is not likely to reduce switch-mode operating efficiency too much, as long as the voltage remains high enough to run the load. In this one case, reducing the operating voltage seemingly increased the efficiency, but it is impossible to draw a generalisation from testing one device alone.
A key part of science is repeating experiments and learning something new. I probably struck it lucky when I built the first model using a toroidal transformer, as it proved to be exceedingly efficient once tested. Building this second one using a laminated E-I core downlight transformer reminded me that not every transformer is as efficient and uncomplicated to drive – the seemingly higher winding inductance made for some interesting back-EMF effects. Regardless, it was possible to get it working, making it my first 60Hz inverter – only of “average” efficiency at best, but still made out of junk parts. Having a magic smoke event was definitely humbling, and while I didn’t work out the exact root cause, I did have some suspicions. Using up some old counterfeit MOSFETs was also a nice touch.
But perhaps the more interesting part is the fact that the project led me down the path of measuring inverter efficiency which is something I had long wanted to do but only recently acquired the equipment to do so. Similarly, investigating how different voltages and waveforms affect a switch-mode power supply is another thing I’ve been meaning to find out. A sample of one doesn’t give us much data to draw generalisations from, however, in the case of this randomly-selected power supply, running at lower voltages actually improved efficiency by a measurable amount, so the lower voltage outputs from my first crude inverter may actually result in a double benefit!