Are Compact Fluorescent Lightbulbs Really Cheaper Over Time? - IEEE Spectrum


You buy a compact fluorescent lamp. The packaging says it will last for 6000 hours—about five years, if used for three hours a day. A year later, it burns out.

Last year, IEEE Spectrum reported that some Europeans opposed legislation to phase out incandescent lighting. Rather than replace their lights with compact fluorescents, consumers started hoarding traditional bulbs.

From the comments on that article, it seems that some IEEE Spectrum readers aren't completely sold on CFLs either. We received questions about why the lights don't always meet their long-lifetime claims, what can cause them to fail, and ultimately, how dead bulbs affect the advertised savings of switching from incandescent.

Tests of compact fluorescent lamps' lifetime vary among countries. The majority of CFLs sold in the United States adhere to the U.S. Department of Energy and Environmental Protection Agency's Energy Star approval program, according to the U.S. National Electrical Manufacturers Association. For these bulbs, IEEE Spectrum found some answers.

How is a compact fluorescent lamp's lifetime calculated in the first place?

"With any given lamp that rolls off a production line, whatever the technology, they're not all going to have the same exact lifetime," says Alex Baker, lighting program manager for the Energy Star program. In an initial test to determine an average lifetime, he says, manufacturers leave a large sample of lamps lit. The defined average "rated life" is the time it takes for half of the lamps to go out. Baker says that this average life definition is an old lighting industry standard that applies to incandescent and compact fluorescent lamps alike.
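The "rated life" definition Baker describes is simply the median failure time of a test sample. A minimal sketch, using a hypothetical six-lamp sample (real Energy Star testing uses much larger samples and standardized procedures):

```python
import statistics

def rated_life(burnout_hours):
    """Rated life: the time at which half of a test sample has burned out,
    i.e., the median of the observed lifetimes."""
    return statistics.median(burnout_hours)

# Hypothetical burnout times, in hours, for a six-lamp sample:
sample = [4800, 5500, 5900, 6100, 6800, 7400]
print(rated_life(sample))  # midpoint of 5900 and 6100 -> 6000.0
```

With an even-sized sample, the median falls between the two middle lifetimes, which is why half the lamps can still be burning at exactly the rated life.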

In reality, the odds may be somewhat better than 50 percent that your 6000-hour-rated bulb will still be burning bright at 6000 hours. "Currently, qualified CFLs in the market may have longer lifetimes than manufacturers are claiming," says Jen Stutsman, of the Department of Energy's public affairs office. "More often than not, more than 50 percent of the lamps of a sample set are burning during the final hour of the manufacturer's chosen rated lifetime," she says, noting that manufacturers often opt to end lifetime evaluations early, to save on testing costs.

Although manufacturers usually conduct this initial rated life test in-house, the Energy Star program requires other lifetime evaluations conducted by accredited third-party laboratories. Jeremy Snyder directed one of those testing facilities, the Program for the Evaluation and Analysis of Residential Lighting (PEARL) in Troy, N.Y., which evaluated Energy Star–qualified bulbs until late 2010, when the Energy Star program started conducting these tests itself. Snyder works at the Rensselaer Polytechnic Institute's Lighting Research Center, which conducts a variety of tests on lighting products, including CFLs and LEDs. Some Energy Star lifetime tests, he says, require 10 sample lamps for each product—five pointing toward the ceiling and five toward the floor. One "interim life test" entails leaving the lamps lit for 40 percent of their rated life. Three strikes, or burnt-out lamps, and the product risks losing its qualification.

Besides waiting for bulbs to burn out, testers also measure the light output of lamps over time, to ensure that the CFLs do not appreciably dim with use. Using a hollow "integrating sphere," which has a white interior to reflect light in all directions, Lighting Research Center staff can take precise measurements of a lamp's total light output in lumens. The Energy Star program requires that 10 tested lights maintain an average of 90 percent of their initial lumen output for 1000 hours of life, and 80 percent of their initial lumen output at 40 percent of their rated life.
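The lumen-maintenance requirement can be expressed as a small pass/fail check. A sketch with illustrative numbers for a hypothetical 10-lamp sample (the function name and measurements are ours, not Energy Star's):

```python
def passes_lumen_maintenance(initial_lumens, at_1000_h, at_40_pct_life):
    """Check the two Energy Star lumen-maintenance thresholds described above:
    the sample's *average* output must hold 90% of initial after 1000 hours,
    and 80% of initial at 40% of rated life."""
    avg = lambda xs: sum(xs) / len(xs)
    return (avg(at_1000_h) >= 0.90 * avg(initial_lumens)
            and avg(at_40_pct_life) >= 0.80 * avg(initial_lumens))

# A hypothetical 800-lumen lamp sample that dims to 745 lm and then 660 lm:
initial = [800] * 10
print(passes_lumen_maintenance(initial, [745] * 10, [660] * 10))  # True
```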

Is there any way to accelerate these lifetime tests?

"There are techniques for accelerated testing of incandescent lamps, but there's no accepted accelerated testing for other types," says Michael L. Grather, the primary lighting performance engineer at Luminaire Testing Laboratory and Underwriters' Laboratories in Allentown, Penn. For incandescent bulbs, one common method is to run more electric current through the filament than the lamp would experience in normal use. But Grather says a similar test for CFLs wouldn't give consumers an accurate prediction of the bulb's life: "You're not fairly indicating what's going to happen as a function of time. You're just stressing different components—the electronics but not the entire lamp."

Perhaps the closest such evaluation for CFLs is the Energy Star "rapid cycle test." For this evaluation, testers divide the total rated life of the lamp, measured in hours, by two and switch the compact fluorescent on for five minutes and off for five minutes that number of times. For example, a CFL with a 6000-hour rated life must undergo 3000 such rapid cycles. At least five out of a sample of six lamps must survive for the product to keep its Energy Star approval.
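The rapid-cycle arithmetic is straightforward to express directly. A minimal sketch (function names are ours):

```python
def required_cycles(rated_life_hours):
    # One 5-minute-on/5-minute-off cycle for every two hours of rated life.
    return rated_life_hours // 2

def passes_rapid_cycle(survivors, sample_size=6):
    # At least five of the six sample lamps must survive the cycling.
    return survivors >= sample_size - 1

print(required_cycles(6000))  # a 6000-hour-rated CFL undergoes 3000 cycles
```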

In real scenarios, what causes CFLs to fall short of their rated life?

As anyone who frequently replaces CFLs in closets or hallways has likely discovered, rapid cycling can prematurely kill a CFL. Repeatedly starting the lamp shortens its life, Snyder explains, because high voltage at start-up sends the lamp's mercury ions hurtling toward the starting electrode, which can destroy the electrode's coating over time. Snyder suggests consumers keep this in mind when deciding where to use a compact fluorescent. The Lighting Research Center has published a worksheet [PDF] for consumers to better understand how frequent switching reduces a lamp's lifetime. The sheet provides a series of multipliers so that consumers can better predict a bulb's longevity. The multipliers range from 1.5 (for bulbs left on for at least 12 hours) to 0.4 (for bulbs turned off after 15 minutes). Despite any lifetime reduction, Snyder says consumers should still turn off lights not needed for more than a few minutes.
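The worksheet's multipliers can be applied as a simple scaling of rated life. A sketch using only the two endpoint multipliers quoted above; the LRC worksheet lists values for intermediate on-times, which we deliberately omit rather than guess:

```python
def adjusted_life(rated_hours, hours_on_per_switch):
    """Scale a CFL's rated life by the switching-frequency multiplier.

    Only the two endpoints quoted from the LRC worksheet are encoded here:
    1.5 for lamps left on at least 12 hours, 0.4 for lamps switched off
    after 15 minutes. Consult the worksheet for anything in between.
    """
    if hours_on_per_switch >= 12:
        factor = 1.5
    elif hours_on_per_switch <= 0.25:
        factor = 0.4
    else:
        raise ValueError("see the LRC worksheet for intermediate on-times")
    return rated_hours * factor

print(adjusted_life(6000, 0.25))  # 15-minute cycling: 6000 h -> 2400.0 h
```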

Another CFL slayer is temperature. "Incandescents thrive on heat," Baker says. "The hotter they get, the more light you get out of them. But a CFL is very temperature sensitive." He notes that "recessed cans"—insulated lighting fixtures—prove a particularly nasty compact fluorescent death trap, especially when attached to dimmers, which can also shorten the electronic ballast's life. He says consumers often install CFLs meant for table or floor lamps inside these fixtures, instead of lamps specially designed for higher temperatures, as indicated on their packages. Among other things, these high temperatures can destroy the lamps' electrolytic capacitors—the main reason, he says, that CFLs fail when overheated.

How do shorter-than-expected lifetimes affect the payback equation?

Predicting the savings of switching from an incandescent means accounting for both the cost of the lamp and its energy savings over time. Although the initial price of a compact fluorescent (which can range [PDF] from US $0.50 in a multipack to over $9) is usually higher than that of an incandescent (usually less than a U.S. dollar), a CFL can use a fraction of the energy an incandescent requires. Over its lifetime, the compact fluorescent should make up for its higher initial cost in savings—if it lives long enough. It should also offset the estimated 4 milligrams of mercury it contains. You might think of mercury vapor as the CFL's equivalent of an incandescent's filament: the electrodes in the CFL excite this vapor, which in turn radiates and excites the lamp's phosphor coating, giving off light. Coal-burning power plants also release mercury into the air, an amount the Energy Star program estimates [PDF] at around 0.012 milligrams per kilowatt-hour, so a CFL that saves enough energy should offset this environmental cost, too.
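Those two mercury figures imply a concrete break-even: dividing the lamp's 4 milligrams by the 0.012 milligrams emitted per kilowatt-hour of coal generation gives the energy savings needed before the bulb's own mercury is offset. A sketch of that arithmetic (it assumes every saved kilowatt-hour displaces coal generation, which varies by grid):

```python
# Figures quoted above: mercury in one CFL, and mercury emitted by
# coal-fired generation per kilowatt-hour (Energy Star estimate).
CFL_MERCURY_MG = 4.0
COAL_MERCURY_MG_PER_KWH = 0.012

# Kilowatt-hours of avoided generation that offset the lamp's mercury:
kwh_to_offset = CFL_MERCURY_MG / COAL_MERCURY_MG_PER_KWH
print(round(kwh_to_offset))  # ~333 kWh
```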

Exactly how long a CFL must live to make up for its higher costs depends on the price of the lamp, the price of electric power, and how much energy the compact fluorescent requires to produce the same amount of light as its incandescent counterpart. Many manufacturers claim that consumers can take an incandescent wattage and divide it by four, and sometimes five, to find an equivalent CFL in terms of light output, says Russ Leslie, associate director at the Lighting Research Center. But he believes that's "a little bit too greedy." Instead, he recommends dividing by three. "You'll still save a lot of energy, but you're more likely to be happy with the light output," he says.
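Leslie's rule of thumb reduces to one division; a sketch (the function name and default are ours, with the divisor the only real content):

```python
def cfl_equivalent_watts(incandescent_watts, divisor=3):
    """Estimate the CFL wattage that gives comparable light output.

    Manufacturers often divide by 4 or even 5; Leslie's more conservative
    recommendation, used as the default here, is to divide by 3.
    """
    return incandescent_watts / divisor

print(cfl_equivalent_watts(60))  # a 20.0 W CFL to replace a 60 W incandescent
```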

To estimate your particular savings, the Energy Star program has published a spreadsheet where you can enter the price you're paying for electricity, the average number of hours your household uses the lamp each day, the price you paid for the bulb, and its wattage. The sheet also includes the assumptions used to calculate the comparison between compact fluorescent and incandescent bulbs. Playing with the default assumptions given in the sheet, we reduced the CFL's lifetime by 60 percent to account for frequent switching, doubled the initial price to make up for dead bulbs, deleted the assumed labor costs for changing bulbs, and increased the CFL's wattage to give us a bit more light. The compact fluorescent won. We invite you to try the same, with your own lighting and energy costs, and let us know your results.

Heard of graph neural networks? Particle physicists have

Dan Garisto is a freelance science journalist who covers physics and other physical sciences. His work has appeared in Scientific American, Physics, Symmetry, Undark, and other outlets.

A view of the underground ALICE detector used to study heavy-ion physics at the Large Hadron Collider (LHC).

Particle physicists have long been early adopters—if not inventors—of tech from email to the internet. It’s not surprising, then, that as early as 1997, researchers were training computer models to tag particles in the messy jets created during collisions. Since then these models have chugged along, growing steadily more competent—though not to everyone’s delight.

“I felt very threatened by machine learning,” says Jesse Thaler, a theoretical particle physicist at the Massachusetts Institute of Technology. Initially, he says he felt like it jeopardized his human expertise classifying particle jets. But Thaler has since come to embrace it, applying machine learning (ML) to a variety of problems across particle physics. “Machine learning is a collaborator,” he says.

Over the past decade, in tandem with the broader deep learning revolution, particle physicists have trained algorithms to solve previously intractable problems and tackle completely new challenges.


For starters, particle physics data is very different from the typical data used in machine learning. Though convolutional neural nets (CNNs) have proven extremely effective at classifying images of everyday objects, from trees to cats to food, they’re less suited for particle collisions. The problem, according to Javier Duarte, a particle physicist at the University of California, San Diego, is that collision data, such as that from the Large Hadron Collider, isn’t naturally an image.

Flashy depictions of collisions at the LHC can misleadingly fill up the entire detector. In reality, only a few out of millions of inputs are registering a signal, like a white screen with a few black pixels. This sparsely populated data makes for a poor image, but it can work well in a different, newer framework—graph neural networks (GNNs).
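The image-versus-graph distinction can be made concrete with a toy example: treat each recorded hit as a node and connect it only to its nearest neighbors, so the millions of empty detector cells never appear at all. This is purely an illustrative sketch in plain Python, not any experiment's actual pipeline, and the hit coordinates are invented:

```python
import math

def hits_to_graph(hits, k=2):
    """Build a k-nearest-neighbor graph from sparse detector hits.

    Each hit (an (x, y) coordinate) becomes a node; undirected edges link
    each node to its k closest neighbors. Unlike a pixel grid, the empty
    regions of the detector cost nothing to represent.
    """
    edges = set()
    for i, (x1, y1) in enumerate(hits):
        dists = sorted(
            (math.hypot(x1 - x2, y1 - y2), j)
            for j, (x2, y2) in enumerate(hits) if j != i
        )
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))  # store edges undirected
    return edges

# Four hits on an otherwise empty detector plane, forming two clusters:
hits = [(0.0, 0.0), (1.0, 0.1), (5.0, 5.0), (5.5, 4.8)]
print(hits_to_graph(hits, k=1))  # two edges, one inside each cluster
```

A graph neural network would then pass messages along exactly these edges, which is why the sparsity that ruins the image representation is harmless here.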

Other challenges from particle physics require innovation. “We’re not just importing hammers to hit our nails,” says Daniel Whiteson, a particle physicist at the University of California, Irvine. “We have new weird kinds of nails that require the invention of new hammers.” One weird nail is the sheer amount of data produced at the LHC—about one petabyte per second. Of this enormous amount, only a small bit of high quality data is saved. To create a better trigger system, which saves as much good data as possible while getting rid of low quality data, researchers want to train a sharp-eyed algorithm to sort better than one that’s hard-coded.

But to be effective, such an algorithm would need to be incredibly speedy, executing in microseconds, Duarte says. To address these problems, particle physicists are pushing the limits of machine techniques like pruning and quantization, to make their algorithms even faster. Even with an efficient trigger, the LHC must store 600 petabytes over the next few years of data collection (equivalent to about 660,000 movies at 4K resolution or the data equivalent of 30 Libraries of Congress), so researchers are investigating strategies to compress the data.
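Pruning and quantization can be illustrated in miniature, with a short list standing in for a network's weights. A sketch of the general techniques, not the LHC teams' actual tooling:

```python
def prune(weights, fraction):
    """Magnitude pruning: zero out the smallest-magnitude fraction of
    weights, so the network does less arithmetic at inference time."""
    k = int(len(weights) * fraction)  # how many weights to drop
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= cutoff else w for w in weights]

def quantize(weights, levels=127):
    """Uniform 8-bit quantization: store small integers plus one scale
    factor, trading a little precision for speed and memory."""
    m = max(abs(w) for w in weights) or 1.0
    q = [round(w / m * levels) for w in weights]
    return q, m / levels  # dequantize with: [v * step for v in q]

w = [0.9, -0.05, 0.4, 0.01]
print(prune(w, 0.5))   # [0.9, 0.0, 0.4, 0.0]
print(quantize(w)[0])  # [127, -7, 56, 1]
```

Both tricks shrink the model that the trigger hardware must evaluate, which is what makes microsecond-scale inference plausible.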


Machine learning is also allowing particle physicists to think about the data they use differently. Instead of focusing on a single event—say, a Higgs boson decaying to two photons—they are learning to consider the dozens of other events that happen during a collision. Although there’s no causal relationship between any two events, researchers like Thaler are now embracing a more holistic view of the data, not just the piecemeal point-of-view that comes from analyzing events interaction by interaction.

More dramatically, machine learning has also forced physicists to reassess basic concepts. “I was imprecise in my own thinking about what a symmetry was,” Thaler says. “Forcing myself to teach a computer what a symmetry was, helped me understand what a symmetry actually is.” Symmetries require a reference frame—in other words, is the image of a distorted sphere in a mirror actually symmetrical? There’s no way of knowing without knowing if the mirror itself is distorted.

These are still early days for machine learning in particle physics, and researchers are effectively treating the technique like a proverbial kitchen sink. “It may not be the right fit for every single problem in particle physics,” admits Duarte.

As some particle physicists delve deeper into ML, an uncomfortable question rears its head: Are they doing physics, or computer science? Stigma against coding—sometimes not considered to be “real physics”—already exists; similar concerns swirl around ML. One worry is that ML will obscure the physics, turning analysis into a black box of automated processes opaque to human understanding.

“Our goal is not to plug in the machine, the experiment to the network and have it publish our papers so we’re out of the loop,” Whiteson says. He and colleagues are working to have the algorithms provide feedback in language humans can understand—but algorithms may not be the only ones with responsibilities to communicate.

“On the one hand, we’d like to have a machine learn to think more like a physicist, [but] we also just need to learn how to think a little bit more like a machine,” Thaler says. “We need to learn to speak each other’s language.”

IEEE also mourns the loss of the designer of Motorola’s MC68000 processor

Amanda Davis is a freelance writer and creative services manager at Measurabl, an ESG software and professional services provider based in San Diego.

Furui: Speech processing pioneer; Life Fellow, 77; died 31 July

Furui was a leading speech processing researcher who played an important role in improving communication between humans and machines. Michael N. Geselowitz, senior director of the IEEE History Center, describes him as a “pillar in the speech processing community.”

Furui was best known for investigating human perception of transient sounds in the 1980s as a researcher at Nippon Telegraph and Telephone, in Tokyo. His findings led to a better understanding of human hearing and greatly improved the accuracy of speech recognition, speaker identification, and verification systems.

He began his career in 1970 as a researcher at NTT’s Musashino Electrical Communication Labs, also in Tokyo. From 1979 to 1982 he was a senior researcher at NTT Basic Research Labs and was promoted in 1982 to senior staff engineer of the company’s personnel and international affairs. Seven years later he was named director of the Speech and Acoustic Lab at NTT’s Human Informatics Labs. In 1991 he became a research fellow, and he was director of the NTT Furui Research Lab until 1997.

After joining the Tokyo Institute of Technology as a professor of computer science in 1997, he became dean of its graduate school of information science and engineering in 2007. He was named director of the university’s library in 2009 and became director of the Contents Utilization Center in 2011, when he was named professor emeritus.

He left Tokyo in 2013 to become president of the Toyota Technological Institute at Chicago and served in that position until 2019. He then was named chair of its board of trustees and held that position for three years.

He authored or coauthored more than 1,000 papers and books on speech recognition, artificial intelligence, and natural language processing. Twenty-six editions of his book Digital Speech Processing, Synthesis, and Recognition were published between 1985 and 2001.

Furui was a member of several IEEE committees and served as general co-chair of this year’s IEEE International Conference on Acoustics, Speech, and Signal Processing, held in May in Singapore.

From 2001 to 2005, he served as president of the International Speech Communication Association.

Among the awards he received was a 2016 Bunka Korosha (Person of Cultural Merit) Award, one of the highest honors bestowed by the Japanese government. He received a 2013 Okawa Prize for “pioneering contributions and leadership in the field of computer-based speech recognition and understanding.”

He won a 2012 Broadcast Cultural Award from NHK, the Japan Broadcasting Corp., for outstanding contributions to the theory and practice of automatic speech recognition technology, which is now used in NHK’s closed-captioning systems, as well as speaker recognition and multimedia search technology.

He received the 2010 IEEE James L. Flanagan Speech and Audio Processing Award for “contributions to and leadership in the field of speech and speaker recognition toward natural communication between humans and machines.”

Furui was a Fellow of the Acoustical Society of America and the IEICE.

He earned bachelor’s, master’s, and doctoral degrees in mathematical engineering and instrumentation physics from the University of Tokyo in 1968, 1970, and 1978, respectively.

Boehm: Systems and software engineer; IEEE Life Fellow, 87; died 20 August

Boehm was chief scientist, principal investigator, and chair of the research council at the Systems Engineering Research Center (SERC) at the University of Southern California, in Los Angeles. The SERC is an arm of the U.S. Department of Defense that leverages the research and expertise of faculty, staff, and student researchers from more than 20 collaborating universities.

Boehm began his career in 1955 as a computer programmer and systems analyst at General Dynamics, an aerospace manufacturer in Reston, Va. He left in 1959 to join the Rand Corp., a nonprofit in Santa Monica, Calif., that provides research and analysis to the U.S. military. He joined as an analyst and was promoted to head of the Information Sciences Department. He left in 1973 to serve as chief scientist of the defense systems group at TRW (now part of Northrop Grumman), an automotive and aerospace company, in Euclid, Ohio. From 1989 to 1992, he served as director of the Defense Advanced Research Projects Agency’s Information Science and Technology Office.

He left TRW in 1992 to become a professor of software engineering at USC, where he served as founding director of the Center for Systems and Software Engineering. Beginning in 2012, he was chief scientist, principal investigator, and chairman of the research council at the SERC. He was instrumental in the creation of SERC Talks, a webinar series featuring systems engineering experts.

In his 1981 book, Software Engineering Economics, he documented the constructive cost model (COCOMO), which became a widely used tool for estimating software development effort and cost.

He helped write the Systems Engineering Body of Knowledge, a continuously updated reference for the industry. He served as an assistant editor and contributed content on the core systems engineering approaches and approaches to systems life cycles.

He served on the boards of several scientific journals including the IEEE Transactions on Software Engineering, Computer, and IEEE Software.

Boehm was chair of the IEEE Computer Society’s technical committee on software engineering and the AIAA technical committee on computer systems. He also served on the Computer Society’s governing board.

He was a Fellow of the Association for Computing Machinery, the AIAA, and the International Council on Systems Engineering. He was also a member of the U.S. National Academy of Engineering.

Boehm’s honors included the 2000 IEEE Harlan D. Mills Award.

He earned a bachelor’s degree from Harvard in 1957 and a master’s and Ph.D. from the University of California, Los Angeles, in 1961 and 1964—all in mathematics.

Tredennick: Computer engineer and entrepreneur; Life Fellow, 75; died 26 July

Tredennick founded several startups and was instrumental in the development of the Motorola MC68000 and IBM Micro/370 microprocessors.

He received bachelor’s and master’s degrees in electrical engineering from Texas Tech University, in Lubbock, in 1968 and 1970.

In the early 1970s, he served as a second lieutenant in the U.S. Air Force and was a pilot in a C-130 squadron. He returned to school in 1972 but continued to serve as a pilot in the Air Force Reserve.

He earned a Ph.D. in electrical engineering in 1976 from the University of Texas at Austin.

In 1977 Tredennick joined Motorola in Chicago as a senior design engineer in the integrated circuits division. He worked on the logic design and microcode for the MC68000, which became the CPU for Apple’s Macintosh computer and other workstations.

Two years later he joined the research staff at IBM’s Watson Research Center, in Yorktown Heights, N.Y., where he designed the Micro/370.

He left IBM in 1987 and founded NexGen, a semiconductor company in Milpitas, Calif. NexGen was acquired in 1996 by Advanced Micro Devices.

Also in the mid-1980s, Tredennick transferred from the Air Force Reserve to the Navy Reserve, where he worked as an aerospace engineering duty officer.

He founded Tredennick Inc., based in Los Gatos, Calif., in 1989. The company consulted on microprocessors and programmable logic projects.

From 1993 to 1995 he was chief scientist at Altera (now part of Intel), a semiconductor manufacturing company in Los Gatos.

In 1997 he was promoted to captain and served as commanding officer of a Naval Air Systems Command unit.

He served as president of Jonetix Corp. from 2014 to 2019, when he became chief executive of the Los Gatos–based Internet security company. At the same time, he also advised Silicon Catalyst, an incubator and accelerator in Santa Clara, Calif., focused on semiconductor solutions.

Tredennick was a member of the Army Science Board, which advises the secretary of the Army and the chief of staff of science and technology. He served on the board twice, from 1994 to 2000 and again from 2006 to 2010.

For the past 22 years, he was editor of and a contributor to the Gilder Technology Report, a publication for investors about startups in the telecommunications, semiconductor, and computer industries. He wrote Microprocessor Logic Design, a widely used textbook, as well as numerous articles for professional and trade magazines. He also served as a contributing editor of Microprocessor Report, and he was a member of the editorial advisory boards of IEEE Spectrum, Embedded Developer’s Journal, and Microprocessors and Microsystems.

This year he was named an IEEE Computer Society Distinguished Visitor. He gave presentations on IoT security; optimal monitoring and conditioning for the electric grid; Internet security; and the future of Silicon Valley.

He was a member of the honor societies IEEE Eta Kappa Nu (IEEE-HKN), Sigma Xi, and Tau Beta Pi.

Burrus: LED researcher; IEEE Life Fellow, 94; died 16 May

Burrus conducted pioneering research on small-area high-radiance semiconductor LEDs. He was a research physicist at Bell Labs in Holmdel, N.J., from 1955 to 1996. He stayed on to join Lucent Bell Laboratories (now Nokia Bell Labs), a Bell Labs spinoff, in Murray Hill, N.J. He worked there until he retired in 2002.

He received a Richardson Medal from the Optical Society of America (now Optica) in 1982. He was recognized for the “development of ingenious laboratory techniques to fabricate microscopic devices such as millimeter-wave diodes, infrared semiconductor lasers, light-emitting diodes and detectors, and single-crystal fiber and film lasers.”

Burrus was a Fellow of the American Association for the Advancement of Science, the American Physical Society, and Optica. He belonged to honor societies including Phi Beta Kappa, Sigma Pi Sigma, and Sigma Xi. He served in the U.S. Navy during World War II.

He earned three degrees in physics: a bachelor’s in 1950 from Davidson College, a master’s in 1951 from Emory University, and a Ph.D. in 1955 from Duke University.

Learn how power modules can reduce power supply size, EMI, design time, and solution cost

In this training series, we will discuss the high level of integration of DC/DC power modules and the significant implications that this has on power supply design.


In addition to high power density and small solution size, modules can also simplify EMI mitigation and reduce power supply design time. And thanks to improved process and packaging technology, a power module may even provide all of these benefits with a lower overall solution cost as well.

In addition to these core topics, this training series also touches on some of the lesser-known aspects of power module design like inductor withstand voltage and high-temperature storage testing.