WEI’s Women in Energy Symposium and Celebrating American Business Women’s Day

Today is American Business Women’s Day. This day is about networking, celebrating broken barriers and looking ahead to the continued advancement of working women. For me, this day is personal.

This day of recognition comes with WEI’s Women in Energy Symposium (WiES) just around the corner, Nov. 1-3. There, I will join a diverse mix of energy industry professionals in discussions around networking, recruitment and the state of women in the energy industry.

I will be speaking on the Sponsorship vs. Mentorship Panel with Nancy Arroyoavila of Pacific Gas & Electric, Kristin Stathis from Portland General Electric and Tim Swanson of FortisBC. I look forward to discussing the differences and similarities of these two aspects of career advancement. It is an important topic, yet it is somewhat surprising to me that we are still having this discussion.

I started engineering school 40 years ago, and we had the same conversations then that we are having now: How do we encourage more diversity in the industry? I wouldn’t have thought then that this would still be a challenge 40 years later. On one hand, the industry has changed dramatically and has seen so much improvement for women engineers. At the same time, there is much work left to do.

This is why mentoring and sponsoring are so vital.

For me, mentoring is about helping someone become their best self – helping them tap into their unique strengths – and giving those mentees resources. It’s a way to help other women hone their skills, gain insights, and develop as people and leaders. The parallel to this – and just as important – is sponsoring. Sponsoring means being someone’s advocate, championing their cause.

The purpose of today is “to bring together businesswomen of diverse occupations and to provide opportunities for them to help themselves and others grow personally and professionally through leadership, education, networking support and national recognition.”

I believe we have an opportunity to do just that. Women are now sprinkled throughout the male-dominated energy industry. Women in this industry have the opportunity to guide and inspire more female engineers. While some biases still do exist, there are ways to level the playing field. That’s where mentoring and sponsorships come in.

Hear more by tuning into the Sponsorship vs. Mentorship Panel on Nov. 2 and register for WiES here.

Calculating an Average Annual Growth Rate in MetrixND

After finishing my forecast, I like to present forecast average annual growth rates.  These results are calculated by converting the monthly forecast into annual values using a Transformation table with an annual periodicity.  The formulas used in the annual Transformation table are shown below.

In these equations, the “Res” variable sums the monthly residential sales forecast into annual sales using the “SumAcross” function.  The “Res_GrowthRate” variable calculates the year-to-year growth rates using the “PercentLag” function and the newly created “Res” variable.  The results shown below are annual sales and growth rates.

While year-to-year numbers are valuable, most forecasters like to communicate average annual growth rates.  In this case, what is the average annual growth rate for the forecast?  To calculate this, I added two new variables in the annual Transformation table, “SumRange” and “AvgAnnualGrowthRate” as shown below.

The “SumRange” variable isolates the forecast period, which, in this example, begins in 2015.  The variable uses the “IF-THEN-ELSE-ENDIF” function to remove any values which are not defined in the IF condition.  In this case, all the values prior to 2015 are removed as shown below.

The “AvgAnnualGrowthRate” variable takes the “SumRange” results and obtains the average to present the average annual growth in the forecast period.  The key component of this variable is the “GetDStat” function. This function obtains the variable average (mean) from the DStat tab.  The IF condition is used to place the annual average growth rate in the year 2020 row.

Typically, the average annual growth rates are calculated in a spreadsheet.  Forecasters copy and paste results into a spreadsheet where annual averages are easily calculated.  However, using the two variables described above allows for the dynamic calculation of results, removing the need to copy and paste into the spreadsheet.
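For readers who want to prototype the same logic outside MetrixND, here is a minimal pandas sketch. The variable names mirror the Transformation-table variables described above, and the monthly sales numbers are hypothetical (a series growing 2 percent per year), chosen only so the result is easy to verify:

```python
import pandas as pd

# Illustrative monthly residential sales (kWh) growing 2% per year
# (hypothetical numbers, not from the article).
idx = pd.date_range("2013-01-01", "2020-12-01", freq="MS")
monthly_sales = pd.Series(
    [1000.0 * 1.02 ** (d.year - 2013) for d in idx], index=idx
)

# "Res": sum monthly sales into annual totals (MetrixND's SumAcross step).
res = monthly_sales.resample("YS").sum()

# "Res_GrowthRate": year-to-year percent growth (MetrixND's PercentLag step).
res_growth_rate = res.pct_change() * 100

# "SumRange": isolate the forecast period, 2015 forward
# (MetrixND's IF-THEN-ELSE-ENDIF filter).
sum_range = res_growth_rate[res_growth_rate.index.year >= 2015]

# "AvgAnnualGrowthRate": the mean of the filtered growth rates
# (what the GetDStat function pulls from the DStat tab).
avg_annual_growth_rate = sum_range.mean()
print(round(avg_annual_growth_rate, 2))  # → 2.0
```

Because the sketch grows sales by exactly 2 percent per year, the average annual growth rate over 2015-2020 comes out to 2.0, confirming the calculation chain.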

The Latest Energy Trends

Our third seminar of the year reveals the results of our annual benchmark survey. Survey participants receive a copy of the report with the findings. This is everyone else’s chance to hear about the results, so be sure to join us on Tuesday, September 12. To register for this Brown Bag and other forecasting events, go to http://www.itron.com/forecastingworkshops.

Participation is free, but prior registration is required. Each seminar lasts approximately one hour, allowing 45 minutes for the presentation and 15 minutes for questions. Seminars start at noon Pacific time.  If you cannot attend a seminar or missed one, don’t worry! Your registration ensures that a link to the recording will be sent to you automatically approximately one week after the seminar date.

Breaking New Ground with Demand Response

Itron and Comverge, now Itron Distributed Energy Management, have a tradition of technological innovation. By joining forces, we have the opportunity to combine OpenWay Riva technology with distributed energy management expertise in demand response and energy efficiency solutions. How will these solutions work together?

Historically, vertically integrated utilities focused on creating sufficient supply and delivery to meet market demand. Demand management is a solution for shifting energy consumption in ways that are beneficial to the electrical grid—typically this means using less electricity during hot summer afternoons when the grid is congested, or during winter mornings when there isn’t solar production. In the residential and small business markets, demand response solutions are usually implemented with communicating thermostats or load control switches that are sent signals to automatically reduce energy usage. These devices can communicate over paging networks, public cellular networks, end-customer WiFi networks or AMI networks. That’s where Itron comes in.

Smart metering and network communications allow utilities to put in place time-based rates that better reflect the cost of providing electricity. These rates, combined with distributed energy resources (DER), allow customers to easily and automatically reduce or shift energy usage or choose to purchase electricity at a higher price. Smart metering and DER both generate great datasets that can be combined to provide insight into end customer behavior, energy usage and options for shifting that usage for the greater good.

When it comes to Itron DEM’s IntelliSOURCE Enterprise distributed energy resource management system (DERMS), which automates and supports all of the processes necessary to implement a demand management program, we are looking at ways to integrate OpenWay Riva with the IntelliSOURCE Enterprise DERMS. We are in the beginning stages of this, but some ideas include an application running on a meter that could tell a load control switch to delay a water heater while an electric vehicle is rapidly charging, or an application running on a transformer that could tell a group of thermostats and other DERs to reduce usage as that transformer’s rated capacity is approached. We’re also looking at how we could leverage sub-second meter data and DER data in real time to better forecast household load. Ultimately, these innovations could allow utilities to use demand management more frequently and could provide a more reliable network for communicating with thermostats and load control switches.

In demand response management, a clear shift for Itron is toward Distributed Energy Resource (DER) management: continuing to leverage highly effective load control switches and thermostats while also incorporating new technologies like batteries, EV chargers and solar inverters. This is why we’ve named the internal business unit Distributed Energy Management.

There are multiple possibilities in how we can create compelling solutions based on our combined technology. We have the opportunity to continue to innovate and impact the industry—and that’s certainly an exciting place to be.

Using the Simulation Object Backward

I’m using the Simulation Object in MetrixND to weather normalize historic sales. When configured, the object allows me to simulate the model, replacing the X variables with a scenario. In weather normalization, the simulated X variables are the normal-weather heating and cooling variables.  My simulation is shown below.

When I select the “eyeglasses” and view the results, I notice a peculiar situation.  I’m missing results from 2010 through 2012.  I can see the missing values in the “Actual vs. Predicted” graph as shown below.

Why am I missing results from 2010 through 2012? Looking closely, I see my model is estimated from 2013 through 2015. The Simulation Object develops the simulated values using the Model Predicted values, so the Model cannot calculate a simulated value because it does not have Predicted values prior to 2013.

A simple, but incorrect, solution to the problem is to change the model estimation period to include 2010 through 2012 data. The problem with this solution is that the weather coefficients will change when these data are added.  I chose to estimate the model from 2013 forward because I did not want 2010 through 2012 data to influence the weather normalization process.

The correct solution is to change the model estimation period to include the 2010 through 2012 data AND mark these data as “bad.”

To mark the data as “bad,” I create a binary variable named “Remove” with the equation “Year < 2013” as shown below.

This variable returns a value of “1” for all dates prior to 2013 and a value of “0” for all dates beginning in 2013.  Then, I insert the “Remove” variable into the bad box in the regression form as shown below.

By marking 2010 to 2012 as bad, the model coefficients are not influenced by these data.  However, the model will still calculate a predicted value which is used in the Simulation Object.  The simulation object calculates the results beginning in 2010 which are seen in the “Actual vs. Predicted” graph.

With the full simulation results, I finish normalizing my historic sales.
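The same idea can be sketched outside MetrixND with an ordinary least-squares fit. The data and coefficients below are hypothetical, not the post’s model; the point is that the regression is estimated on 2013 through 2015 only, yet predicted values can still be computed over the full 2010 through 2015 span:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly data, 2010-2015: sales driven by heating and
# cooling degree days plus noise (illustrative coefficients).
years = np.repeat(np.arange(2010, 2016), 12)
hdd = rng.uniform(0, 800, size=years.size)
cdd = rng.uniform(0, 400, size=years.size)
sales = 500 + 0.8 * hdd + 1.1 * cdd + rng.normal(0, 25, size=years.size)

X = np.column_stack([np.ones(years.size), hdd, cdd])

# Estimate coefficients on 2013-2015 only, so the earlier years (the ones
# marked "bad" in MetrixND) do not influence the weather response.
est = years >= 2013
beta, *_ = np.linalg.lstsq(X[est], sales[est], rcond=None)

# Predicted (and hence simulated) values can still be computed for the
# full 2010-2015 span, which is what the Simulation Object needs.
predicted = X @ beta
print(predicted.shape)  # → (72,)
```

Restricting the rows used in the fit while predicting over every row is exactly what marking observations as “bad” accomplishes: the coefficients ignore 2010-2012, but predicted values exist for those years.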

2018 Important Meeting Dates – Mark Your Calendars!

The Annual Energy Forecasting/EFG Meeting and Training will be in Austin, Texas on April 24-27 in 2018!

The ISO/RTO/TSO Forecasting Summit will be in San Diego, California on May 15-17.

Don’t miss these opportunities to network with your peers and discuss real world issues and practical solutions. We look forward to seeing you. Registration details are coming soon.

Beyond the Hype of Smart Cities

Community organizations, cities and companies must work every day to truly achieve the big picture of creating smart cities. Our goal of transforming existing infrastructure to create innovative, resourceful urban areas is a project that will span generations.

I spoke about this vision as a panelist on the Voice America Business Channel’s radio show on the future of smart cities. I joined Steve Sarnecki, the vice president of public sector at OSIsoft, and Lisa D’Alesandro, vice president for SAP’s regulated industries industry value advisory practice, for a special edition of Voice America’s, “Coffee Break with Game-Changers.”

The program by Voice America, the worldwide leader in live internet talk radio, provides a refreshing hour of business talk. This episode was a part of a special series featuring game-changing technologies.

To kick off the show, we were each asked to bring a quote and explain how it applies to the development of smart cities. I selected a quote from Nick Saban, head football coach at University of Alabama, who has led teams to multiple national titles: “Success doesn’t come from pie-in-the-sky thinking. It’s the result of consciously doing something each day that will add to your overall excellence.”

This quotation best embodies my approach to smart cities, since the biggest key to achieving a smart city is having a plan and executing it. Smart city initiatives around the world need to be more than just hype, so I really like Nick's thinking about making a plan and then chipping away at it each and every day.

I’m passionate about this big plan to create smart cities and I want to leave the world a better place for the next generation. So, let’s do something every day to chip away at the goal of someday waking up having built a city that will transform people’s lives.

Listen to the recording to hear the full show.

It’s in the Water! – Improving Energy Savings Calculations from California Water Conservation

When I hear the term “water-energy nexus,” I imagine something complex and fantastical, when in actuality it is an intimately familiar and essential component of most of our lives. Water and energy are closely connected in a multitude of ways, from cooling the power plants that produce our energy to pumping water to crops and our kitchen taps. The water-energy nexus is a field of study devoted to understanding and innovating around the interconnected nature of energy and water in the modern world.

In a recent study at Itron, we take a close look at energy reductions associated with the provision of water to end users in California when statewide water use is reduced due to severe drought conditions. Producing and transporting potable water to end users in California alone accounts for 7 percent of the state’s total annual electricity consumption.  Even more surprising, this large figure doesn’t include wastewater treatment or energy inputs by end users such as water heating, which on average accounts for 20 percent of the average household’s energy usage. It makes sense, then, that consuming less water results in less water pumped, less water heated and less energy consumed.

So why do most of us view water and energy as unrelated? It’s not a unique phenomenon. When we purchase food at the grocery store, clothes online or water from our taps, we tend to think we are paying for the resource or commodity itself. We don’t typically consider that a portion of our payment covers the embedded energy required for extraction, processing and delivery of the materials associated with those goods and services, including water. We all learn that water flows freely downhill with gravity, but few of us appreciate that our water delivery system is a reversal of this process, requiring significant energy inputs.

In 2015, Governor Jerry Brown mandated a 25 percent reduction in urban water use relative to 2013 levels in response to falling water table levels and dwindling snowpack after years of sustained drought and increased reliance by agriculture and urban areas on groundwater pumping. The 2015 sharp reduction in urban water use following the mandate provided a natural experiment for Itron’s study that focuses on estimating energy savings associated with urban water conservation.

The study relies on water agencies’ monthly electric bills associated with groundwater pumping, water treatment, and distribution between the baseline year of 2013 and the mandate year of 2015 to empirically derive an energy savings estimate coincident with the state’s water conservation mandate. Itron evaluated electric billing data across 32 water agencies that are broadly representative of the state’s diversity in water system sources. The study also serves as a cross-check on the pre-existing water-energy calculator developed by the California Public Utilities Commission (CPUC) for use by the state’s investor-owned utilities. The existing water-energy calculator relies on a set of static values at the granularity of the state’s 10 hydrologic regions as determined by the California State Water Board.

Results of the study show that the relationship between water and energy in California is highly variable and unique to each water agency. Across the 32 water agencies in the study, Itron finds the energy embedded in urban water production and distribution ranges from approximately 300 to 500 kWh per acre-foot, with a weighted average of 397 kWh per acre-foot in 2015. These values corroborate prior findings that the energy reduction potential from water conservation is substantial; however, preliminary findings do show that the CPUC water-energy calculator overestimates energy savings by 20-30 percent. In addition, the static approach of the CPUC water-energy calculator is found to mask significant year-to-year variability in the energy intensity of water production related to drought intensification and changes in water management policy, indicating that more research is necessary if it is to remain a viable planning tool.

When we learn something as foundational as that water flows downhill, we often take its significance for granted, when in fact water’s downhill passage has the power to shape mountains and carve canyons. The core concept of the water-energy nexus is simple but similarly powerful. Ultimately, energy reductions from water conservation represent an underestimated energy-saving resource that any state can leverage to help meet GHG emission and energy-saving goals. We now have the preliminary data to re-envision water conservation as a combined energy-saving opportunity. Integrating water-energy savings into portfolio-level planning requires significant research to catch up to the rigor of energy program savings analysis, and it will require the collaboration of all groups to aggregate the applicable data. The water-energy nexus is ultimately part of a larger movement toward fully integrated systems that can be comprehensively managed and optimized by policymakers and implementers.

Computing a Weighted Average Wind Speed and Wind Direction Across Multiple Weather Stations

Often when we are developing weather variables to support day-ahead power and gas forecasting, we introduce wind speed as a standalone explanatory variable or as a contributing term in a wind chill formula.  An example of the latter is the standard NWS wind chill temperature formula:

WindChill = 35.74 + 0.6215 × Temperature − 35.75 × WindSpeed^0.16 + 0.4275 × Temperature × WindSpeed^0.16

Where temperature is in degrees Fahrenheit (°F) and wind speed is in miles per hour (mph).
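A minimal sketch of the wind chill calculation, assuming the standard NWS formulation in °F and mph:

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """Standard NWS wind chill (°F); intended for temp <= 50 °F, wind > 3 mph."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

# Example: 20 °F with a 15 mph wind feels noticeably colder.
print(round(wind_chill_f(20.0, 15.0), 1))  # → 6.2
```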

The above formula is straightforward to compute when you have data from one weather station, but what is the correct wind speed to use in the wind chill formula when the wind speed data are derived from a weighted average of two or more weather stations? It turns out that a simple average of the wind speeds across the weather stations can yield misleading results.

Consider two weather stations: the first station reports wind blowing 20 mph with a wind direction of Northeast (45°), and the second station reports wind blowing 20 mph with a wind direction of Northwest (315°). A simple average of the two wind speeds is 20 mph. While it is tempting to use 20 mph as the wind speed for the weighted average of the two stations, it would not accurately represent what is truly happening. Specifically, the 20 mph wind blowing Northeast works against the 20 mph wind blowing Northwest, leaving a weighted average wind speed of 14.14 mph.

We can extend this idea to wind direction. Let’s say we know that the load impact of storms flowing in from the Northwest has a different signature from storms flowing in from the Northeast.  We would like to use this information by interacting the wind speed variable with a wind direction variable. In the above example, the simple average wind direction would be 180° (computed as [45° + 315°]/2). In other words, the simple average wind direction would be due South, which is wrong. In reality, the weighted average wind direction is due North (0°).

How did we derive the true weighted average wind speed and wind direction?  Below are the steps to take to compute a weighted average wind speed and wind direction.  The calculations will use the wind speed and wind direction data in the following table.

Step 1.  Convert Wind Direction in Degrees to Wind Direction in Radians. 

The formula for converting wind direction from Degrees to Radians is:

WindDirectionRadians = WindDirectionDegrees × (π / 180)

Where π is the numerical constant Pi.

The results of the conversion from Degrees to Radians are shown in the following table.

Step 2. For each station, compute the East-West and North-South Vector

The East-West Vector is computed as:

EastWestVector(s, i) = WindSpeed(s, i) × sin( WindDirectionRadians(s, i) )

The North-South Vector is computed as:

NorthSouthVector(s, i) = WindSpeed(s, i) × cos( WindDirectionRadians(s, i) )

Here, the weather stations are indexed by (s) and the time interval is indexed by (i).  You would compute the East West and North South Vectors separately for each hour of wind speed and wind direction data.

The result of this step is presented in the following table.


Step 3. Compute a Weighted Sum of the East-West and North-South Vectors

The weighted sum of the East-West Vector is computed as:

EastWestVectorWeightedSum(i) = Σs [ w(s) × EastWestVector(s, i) ]

The weighted sum of the North-South Vector is computed as:

NorthSouthVectorWeightedSum(i) = Σs [ w(s) × NorthSouthVector(s, i) ]

Where w(s) is the user-input weather station weight for weather station (s).  These calculations assume that the sum of the weather station weights equals 1.0.  If not, then the weights need to be normalized to 1.0 prior to this step.  In this example, each station is assigned a weight of 25%.

The result of this step is shown below.

Step 4.  Compute the Weighted Average Wind Speed

The Weighted Average Wind Speed is then computed as:

WeightedAverageWindSpeed(i) = √( EastWestVectorWeightedSum(i)² + NorthSouthVectorWeightedSum(i)² )

For this example, the Weighted Average Wind Speed is 10.32.


Step 5.  Compute the Weighted Average Wind Direction

The computation of the weighted average wind direction is as follows.

a. First compute the Arctangent given the East-West and North-South Vectors from Step 3 above.

If the NorthSouthVectorWeightedSum is not equal to 0.0 then:

WindDirectionRadians = Arctan( EastWestVectorWeightedSum / NorthSouthVectorWeightedSum )

If the NorthSouthVectorWeightedSum = 0.0 then:

WindDirectionRadians = sign( EastWestVectorWeightedSum ) × (π / 2)

This alternative formula controls for possible errors associated with dividing by 0.0.  The result of this sub-step is -0.38.

b. Now a correction is made if the North-South Vector Weighted Sum is negative:

If NorthSouthVectorWeightedSum < 0.0 then: WindDirectionRadians = WindDirectionRadians + π

The result of this sub-step is 2.77 Radians.

c. Next the Wind Direction in Radians is converted to Degrees:

WindDirectionDegrees = WindDirectionRadians × (180 / π)

The result is a weighted average Wind Direction of 158.44 Degrees.
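Putting the five steps together, here is a minimal Python sketch using the two-station example from earlier in this post (20 mph at 45° and 20 mph at 315°, equal weights assumed); math.atan2 applies the Step 5 zero-division and quadrant corrections in a single call:

```python
import math

# Two-station example from the text: 20 mph at 45° (NE) and 20 mph at
# 315° (NW), with equal weights that sum to 1.0.
speeds = [20.0, 20.0]
directions_deg = [45.0, 315.0]
weights = [0.5, 0.5]

# Steps 1-3: convert directions to radians, decompose each wind into
# east-west and north-south vectors, and take the weighted sums.
ew_sum = sum(w * v * math.sin(math.radians(d))
             for w, v, d in zip(weights, speeds, directions_deg))
ns_sum = sum(w * v * math.cos(math.radians(d))
             for w, v, d in zip(weights, speeds, directions_deg))

# Step 4: the weighted average wind speed is the vector magnitude.
avg_speed = math.hypot(ew_sum, ns_sum)

# Step 5: the weighted average wind direction; atan2 handles the
# divide-by-zero case and the negative north-south correction itself.
avg_dir_deg = math.degrees(math.atan2(ew_sum, ns_sum)) % 360.0

print(round(avg_speed, 2))  # → 14.14
# avg_dir_deg comes out due North (0°), matching the example in the text.
```

The opposing east-west components cancel, leaving only the northward component, which is why the naive 20 mph average overstates the true 14.14 mph result.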

Looking Backward to Move Forward

For most things in life, I look forward to go forward. I keep my eyes on the road ahead. I look before I leap. I aim at my target. But this summer while we were on our family vacation, we rented a rowboat, and it reminded me that rowing is different. When rowing, I face backward to maximize my power going forward. Forecasting electricity is like rowing a boat: we need to look backward to go forward.

Intuitively, we already do this by using regression models. Regression models look backward, find relationships and project them forward. Too often, though, we leave the looking to the models instead of our own eyes.

Below is annual historical average use for residential customers over an 11-year period. Leaving the forecast to a regression model with HDD, CDD, and some binary variables, the outlook is close to 11,600 kWh/year (blue dashed line) and flat because the model does not include any trend variables.

But the last three years show a dramatic decline in usage, raising the question of whether the forecast should be closer to 11,000 kWh/year (yellow dashed line).

Instead of leaving the forecast to the model, we should use our own eyes to identify the underlying pattern in the history. Is average use declining?

The only way to answer this question is to weather normalize historic average use, removing the impact of weather variation. Weather normalization will answer the questions of why 2016 was so low, why 2014 was so high and what happened in 2012.

After weather normalizing historic average use, we can answer what happened in 2012, 2014, and 2016.

  • 2012 was a mild weather year
  • 2014 was an extreme weather year
  • 2016 was a mild weather year

Additionally, we can see that average usage has consistently declined from 2009 to 2016.

If we assume the trend stops in 2016, the forecast should be close to 11,400 kWh (purple solid line). If we assume the trend continues, the forecast will be close to 11,000 kWh/year in 2020 (purple dashed line).

Weather normalization is an essential part of the forecaster’s work process. The process allows us to look backwards to ensure that we move forward in the right direction.
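The mechanics of weather normalization can be sketched as follows (with illustrative numbers, not the chart’s data): fit average use against actual degree days plus a trend, then re-predict the series with normal degree days so only the underlying trend remains.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual data: average use responds to actual weather and
# declines 80 kWh/year (illustrative numbers, not the article's chart).
years = np.arange(2006, 2017)          # 11 years of history
hdd = rng.normal(4000, 400, size=years.size)   # actual heating degree days
cdd = rng.normal(1200, 150, size=years.size)   # actual cooling degree days
avg_use = 9000 + 0.5 * hdd + 0.9 * cdd - 80.0 * (years - 2006)

# Fit average use against actual degree days plus a linear trend.
X = np.column_stack([np.ones(years.size), hdd, cdd, years - 2006])
beta, *_ = np.linalg.lstsq(X, avg_use, rcond=None)

# Weather-normalize: re-predict with "normal" (long-run mean) degree days,
# which strips out weather swings and exposes the underlying decline.
X_normal = X.copy()
X_normal[:, 1] = 4000.0   # normal HDD
X_normal[:, 2] = 1200.0   # normal CDD
normalized_use = X_normal @ beta       # declines by ~80 kWh each year
```

In this constructed example the normalized series is a clean 80 kWh/year decline: mild and extreme weather years no longer stand out, which is exactly what lets the eye judge whether the trend should be carried forward.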

Seasonal Sales? Not a Problem.

There is a client who has a reporting problem. Let’s call him Ray. His annual sales must be split into summer and winter sales on a seasonal, not calendar, basis. In Ray’s world, winter begins in November and ends in April, spanning two calendar years. While Ray sounded calm when he called me, I could feel the tears of frustration welling up in his eyes. How can we summarize annual winter sales in MetrixND when the sales fall in two calendar years?

Let’s be more specific. The picture to the right shows the monthly sales. Ray wants to call November 1995 through April 1996 the Winter 1995 sales.

Typically, MetrixND’s SumAcross function is used to convert monthly data to annual totals. The process takes two steps. First, the monthly sales are split into calendar summer and winter sales in a monthly transformation table. Second, the annual sums are calculated using the SumAcross function in an annual transformation table. The steps, transformations and results are shown below.

But this is not what Ray wants to do. To use MetrixND’s data transformation capabilities, Ray needs to move the January 1996 through April 1996 values into the January 1995 through April 1995 positions as shown below. If Ray can do this, then the annual transformation technique works.

The good news is that Ray called and I have a solution.

Using the following transform, I can move January to April sales using the Lead function.

Once I move the data, I can use the SumAcross function, just like before, to summarize annual summer and winter sales, leaving Ray very happy.
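As a cross-check outside MetrixND, the same season-year shift can be sketched in pandas. The sales series here is hypothetical (1.0 per month) so the seasonal sums can be verified simply by counting months:

```python
import pandas as pd

# Hypothetical monthly sales of 1.0 per month, so seasonal sums are easy
# to verify by counting months (real data would replace this series).
idx = pd.date_range("1995-01-01", "1997-12-01", freq="MS")
sales = pd.Series(1.0, index=idx)

# Winter runs November through April; January-April sales belong to the
# *previous* season year -- the same shift Ray's Lead-function trick performs.
is_winter = idx.month.isin([11, 12, 1, 2, 3, 4])
season = pd.Series(["Winter" if w else "Summer" for w in is_winter], index=idx)
season_year = pd.Series(idx.year - (idx.month <= 4), index=idx)

seasonal_totals = sales.groupby([season_year, season]).sum()
print(seasonal_totals.loc[(1995, "Winter")])  # Nov 1995-Apr 1996 → 6.0
```

Winter 1995 correctly collects November 1995 through April 1996 (six months), even though those months span two calendar years.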

Remember the 2017 ISO Forecasting Summit

This year’s 11th Annual ISO/RTO/TSO Forecasting Summit was held in San Antonio, Texas from May 16-18. Just like the defenders of the Alamo banded together amid adversity many years ago, forecasters from CAISO, ISO New England, NYISO, PJM, MISO, ERCOT, SPP, AESO, IESO, Tennessee Valley Authority, and Bonneville Power Administration gathered to share insights into some of the most pervasive challenges facing today’s industry.

During the three days, a vast range of rich and thought-provoking topics was discussed, including the emerging challenges of solar penetration, emerging trends and complex modeling issues. These topics are further described below.

  • Emerging Challenges of Solar Penetration. The emerging challenges of solar penetration were highlighted as attendees presented on various approaches used to incorporate behind-the-meter solar generation into both short and long-term load forecasts. An interesting discussion of common practices for managing steadily expanding solar and wind resource markets got everyone engaged, and empirical research exhibiting increased load weather sensitivity caused by increased PV penetration gave everyone pause.
  • Emerging Trends. Companies discussed the potential impacts of time-of-use (TOU) rates, plug-in electric vehicles (PEV), and emerging trends. Some anecdotes about TOU rates offered perspective on incorporating prices into a forecast model.
  • Modeling Issues. Finally, we were all reminded of the importance of getting creative when it comes to building robust forecast models. Attendees demonstrated the value of identifying additional explanatory variables, such as solar irradiance, and the challenges of moving from system-level forecasts to point-of-delivery forecasts. The consequences of inaccurate weather forecasts and irregular weather events were also a focal point of discussion.

It’s amazing what happens when you get some of the most brilliant minds in the industry in one room.  I’m already looking forward to next year.