Data Validation for Hosting Capacity Analyses (Text Version)

This is the text version of the webinar video Data Validation for Hosting Capacity Analyses.

Jess Townsend: Hi, everyone! I'm sure we're all still getting settled, but thank you for being here today. Welcome to the Data Validation for Hosting Capacity Analyses webinar hosted by the National Renewable Energy Lab (NREL) and the Interstate Renewable Energy Council (IREC). I'm Jess Townsend with the NREL Communications Department, and I'm just going to get us started with a couple of housekeeping items before turning the event over to our panelists today. As you settle in, please drop your name, the organization you represent, and how you interact with hosting capacity analysis in the chat so we can all get to know each other. I'm going to go ahead and drop this prompt in the chat as well for you. I'm going to keep us moving since we only have an hour together today.

As we get started with those introductions, I'm going to jump right into a few housekeeping items. If you experience any technical issues, feel free to message me as the host using the chat panel. We also encourage you to interact with our panelists today and with each other through that chat box. You can submit questions via the Q&A panel as well. We will be answering as many of those questions as possible in the last 15 minutes of our time together today.

We will be asking for feedback throughout the presentation via Poll Everywhere, as well. To get us primed for that real-time feedback, I'm going to ask you all to log into Poll Everywhere now. Keep it up, either in another window on your computer or on your phone (whatever you prefer), so you can answer questions as they arise. I'm going to drop this link into the chat for everyone to access Poll Everywhere now. We hope you'll share names. As you log in, that will be the first prompt you receive, but, if you prefer anonymity, that is completely okay too. No names will show up in the results, so you don't have to worry about your comments being attributed to you in real time. It's all private. And again, if anyone has any trouble accessing Poll Everywhere or prefers not to use it, feel free to drop your responses into the chat box as well. Alright, now that I've speedily gone through housekeeping items, let's begin.

Today we're going to be reviewing the high-level findings from a recently released report, which identifies a suite of best practices for producing trusted, validated hosting capacity analysis results reflecting real-world grid conditions. We'll begin with introductions to our panelists, followed by a short background on the report and its methodology, before reviewing the categories of best practices. They include business processes, quality control during the feeder model development process, validation of results before publication, followed by customer feedback and regulatory oversight. And then, again, as I mentioned, we will end with a Q&A session with our panelists to get all your questions answered. One quick note though, we will be distributing a follow-up questionnaire for additional thoughts and feedback at the end of the webinar as well. If there's anything that we missed out on that you want to include there, please do.

Alright, and with that I will introduce Michele Boyd, our program manager on this effort, with the U.S. Department of Energy's Solar Energy Technologies Office (SETO). Michele, thank you so much for joining us today and take it away.

Michele Boyd: Thank you so very much. Can you hear me okay? Great, wonderful. So, I wanted just to start by reminding people that the mission of EERE's Solar Energy Technologies Office is to accelerate the advancement and deployment of solar technology in support of an equitable transition to a decarbonized power sector by 2035. To do this, our projects and programs conduct research to reduce the cost of solar electricity, making it affordable and accessible for all; enable solar to support the reliability, resilience, and security of the grid; and support solar job growth and domestic manufacturing.

The team that I lead at SETO, called the Strategic Analysis and Institutional Support Team, which is a mouthful, focuses on reducing the nonhardware or soft costs of solar, such as permitting and—the topic of today—interconnection, as well as on developing solutions to other barriers to solar deployment. A hosting capacity analysis (HCA) is one of those solutions. Interconnection requests for solar, energy storage, electric vehicles, and other distributed energy resources (DERs) to the distribution grid are growing, as we know, very rapidly. But it is not always clear how much incremental capacity the system can accommodate. Capacity can be assessed through a hosting capacity analysis, which is a process to determine the available capacity on the distribution grid for new energy resources without requiring expensive and time-consuming studies. It provides a snapshot of a distribution grid's current ability to interconnect these resources and helps inform utilities and regulators, as they make decisions about grid investments.

Hosting capacity analyses can streamline and add transparency to distributed energy resource planning and interconnection processes. There's a lot of promise here. However, hosting capacity analysis must be performed properly, or it will not be accurate or reliable and, therefore, not used. And that's why SETO supported this project, led by NREL, the National Renewable Energy Lab, in partnership with the Interstate Renewable Energy Council. I really want to thank the superstar team who worked together to identify procedural and technical practices for utilities, regulators, and other stakeholders to help overcome common challenges to hosting capacity analyses and make them accurate, trustworthy, and reliable.

Finally, I would really like to urge all of you to sign up for SETO's newsletter on our homepage to find out about all our funding opportunities and information about our many other events, and I will drop the link in the chat. I'm really looking forward to this webinar and thanks so much for being here.

Radina Valova: Thank you very much, Michele. My name is Radina Valova, and I'm regulatory vice president with the Interstate Renewable Energy Council. It's been a pleasure working with the SETO and the NREL teams on this project. And we are very grateful to all the external stakeholders who have given their time and expertise to developing this paper.

A little bit about IREC before we jump in. IREC focuses on building the foundation for the rapid adoption of clean energy and energy efficiency to benefit people, the economy, and our planet. Our vision is a 100% clean energy future that is reliable, resilient, and equitable. And we achieve these ends through engaging in three programs: electric sector regulatory reform, clean energy workforce solutions, and local initiatives, which includes things like providing training on solar and storage for code enforcement officials. And NREL and IREC chose to engage on this project because, as Michele noted, hosting capacity is an incredibly important tool for helping us get to a 100% clean energy future. Grid transparency is an essential way of helping to provide a more cost-effective and efficient integration of distributed energy resources into the grid. Hosting capacity analysis can also serve to make the interconnection process more efficient and allow us to integrate DERs more efficiently into proactive distribution planning. So, it has many potential uses, but as Michele noted, there are challenges associated with rolling out publicly available HCA data, in whatever format it may be accessed, whether it's a spreadsheet or a map.

And unfortunately, our early experience with publicly available HCA data is that some of the first published HCA data included inaccuracies. For example, a feeder showed zero capacity, and once an interconnection application was processed, it turned out that that same feeder could accommodate multiple megawatts. And so, there was a discrepancy between what the hosting capacity analysis was producing as a number and what the reality of the distribution grid was on the ground. And one of the biggest challenges with the early rollout of hosting capacity analyses data is, in many of these instances, the utilities themselves didn't discover the data problems. It was hosting capacity analysis data users who alerted the utilities to these issues. And that's why it's so very important for us to make sure that, as states and individual utilities are adopting HCA, they do so in a way that maximizes the accuracy of the data. NREL and IREC have both been engaged on hosting capacity for several years, both through numerous state regulatory proceedings, in IREC's case, and through research and analysis for both NREL and IREC. And again, we undertook this project to make sure that when utilities develop HCA data, they do it in a way that maximizes the accuracy of the results.

Yochi Zakai: Thanks, Radina, appreciate the introduction. My name is Yochi Zakai, and I'm an attorney with Shute, Mihaly, and Weinberger LLP. I'm here today representing the Interstate Renewable Energy Council. Thank you also to Michele and Jess for the introduction, and you'll be hearing more from Adarsh in a few minutes. I'm really pleased to be able to present to you the results of the collaboration between IREC and NREL today. Can we get the next slide, please?

Our report, Data Validation for Hosting Capacity Analyses, is available for a free download. Jess just put the link in the chat box, so if you haven't had a chance to look, you can get that for free now. In writing the report, our goal was to provide utilities, regulators, and stakeholders with a replicable roadmap to help HCA deployments provide accurate, trustworthy, and reliable results from the first day that they are published. To do this, we performed a literature review and interviewed industry experts. The literature review covered distribution system plans and HCA reports from utilities across the country. We interviewed HCA experts at utilities, software vendors, national laboratories, utility commissions, and DER developers. Building upon this research, as well as IREC's experience participating in HCA dockets at various commissions around the country and NREL's experience performing hosting capacity analysis, we developed recommendations in five areas: business processes, quality control during the model development process, validating results before publication, feedback from customers and users, and regulatory oversight.

Developing an HCA is a very time-intensive and resource-intensive process. Like any big project a utility sets out to accomplish, getting it right requires appropriate business processes. So, our recommendations begin with best practices for business processes. Our first recommendation is appointing a single project manager to ensure that there's one person ultimately responsible for the HCA's success. The manager's areas of responsibility would include performing the HCA, data validation, ensuring the accuracy of results, and improving the efficiency of the HCA process. HCA managers' specific tasks would include identifying objectives for the team, providing strategic direction, establishing a standard procedure for the HCA process and associated data validation activities, and documenting the standardized HCA process. We'll discuss many of these tasks in more detail later in the presentation. The HCA manager would also establish and monitor metrics, and our second business process recommendation is all about metrics.

Establishing metrics to track the quality of input data and HCA results over time is important. I'm going to go over the metrics that we identify now. If you have any other ideas for metrics that would be good to track, we're going to open up the poll at the end of the slide to take a look at what you've come up with. So, feel free to put them in now as I'm chatting if you'd like. The metrics we identify include the frequency of errors for each HCA update; the frequency of each type of build flag or check for each HCA update; timeliness (for example, whether the team completed its processes in the desired timeframe and whether the HCA was published on time); and the number of recurring problems in the model building process, along with which source database each problem is associated with (we'll discuss more about this on the next slide). I want to emphasize one thing about the tracking metrics: it's important to keep the data for each HCA update so that HCA managers and regulators can track whether the process is producing fewer errors over time, more errors over time, or has hit a steady baseline.
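
To make the idea concrete, here is a minimal Python sketch of tracking metrics across HCA updates; the field names and error categories are hypothetical placeholders, not taken from the report.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class HcaUpdateMetrics:
    """Metrics captured for one HCA update cycle (hypothetical fields)."""
    update_id: str
    published_on_time: bool
    error_counts: Counter = field(default_factory=Counter)      # error type -> count
    source_db_counts: Counter = field(default_factory=Counter)  # source database -> recurring problems

    def record_error(self, error_type: str, source_db: str) -> None:
        self.error_counts[error_type] += 1
        self.source_db_counts[source_db] += 1

def error_trend(history: list[HcaUpdateMetrics]) -> list[int]:
    """Total errors per update, so a manager can see whether the trend is down, up, or flat."""
    return [sum(m.error_counts.values()) for m in history]

# Example: two updates, with fewer errors in the second one.
q1 = HcaUpdateMetrics("2023-Q1", published_on_time=True)
q1.record_error("voltage_base_mismatch", "GIS")
q1.record_error("missing_conductor_impedance", "asset_db")
q2 = HcaUpdateMetrics("2023-Q2", published_on_time=True)
q2.record_error("voltage_base_mismatch", "GIS")
print(error_trend([q1, q2]))  # [2, 1]
```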

Now, let's jump over to the poll and see if there are any other ideas for metrics that we could add to this list. So, we'll just wait for a couple of seconds and see if folks want to go over to the website, which is at the top there: pollev.com/nrelwebinars303. And if you have any other things you'd like to track, you can feel free to suggest them here. Thanks to everybody who suggested some more metrics to track. I think we got some more good ideas. So, let's move on to the next slide.

Performing an HCA involves moving lots of data from source databases into feeder models that are used to perform the power flow analyses comprising the HCA. Source databases include distribution system asset databases, geographic information system (GIS) databases, load databases, and generation profile databases. In the paper, and in this presentation, we'll also refer to the load and generation profiles as customer consumption data. Errors in the power flow analysis used to perform the HCA are often due to data quality and integrity problems in the source databases. Efficient HCA processes fix identified errors in the source databases, so the HCA engineers are not required to fix the same errors each time they use the source database to update the feeder model. Otherwise, engineers often develop a script or another automated solution to fix the error each time they use the source database to update the feeder model. To avoid the need to fix the same problem multiple times, it's a best practice for HCA managers to follow up with the source database owner when a root cause analysis shows that the source database includes inaccurate data or causes HCA errors.

Our final business process recommendation concerns resources. There are two types of resources required to perform an HCA: employees and computers. Let's start with people. Engineers working on the HCA should have experience with their utility's distribution engineering practices, circuit models, and design standards. Engineers should be well versed in the utility's planning and operation practices. More specifically, engineers should be trained to understand the HCA methodology, their utility's implementation of the methodology, and the flow of data from the source databases to the feeder model and then to the HCA software.

Next, let's turn to technology. Using appropriate computational technologies, such as high-performance cloud computers, avoids unnecessary slowdowns due to hardware constraints and accelerates debugging. While portions of the HCA data validation program can be automated, engineers will always need to correct some problems. Effective HCA managers can consider how to strike the right balance between automation and manual work. Although scripting is a powerful tool and commercial software is always improving, effective HCA managers look for the point at which increased automation provides diminishing efficiency returns.

Next, I want to go over the four broad stages of performing a hosting capacity analysis. These include model preparation (where inputs from the source databases are gathered and the digital feeder models are built), simulation (where power flow analyses simulate different scenarios, including increased DER penetration), and post-processing (in this stage, HCA constraints are identified from the results of the simulations). Finally, the HCA constraints are published in online portals, including maps and tabular databases, such as APIs. We call this visualization. Can we bring up our next poll question? If you can head over to Poll Everywhere again, I'd like to know which stage of the HCA you think is the most error prone of these four. We'll give it another 10 seconds or so to see what people say. Looks like, right now, model preparation is taking the lion's share of the votes, with about 14% for simulation. Well, you and the HCA experts we interviewed for this paper agreed: feeder model development is the most error prone stage of the HCA. To address this, we've developed recommendations for creating repeatable and streamlined processes to check and correct the model input errors at each stage in the HCA process. Adarsh is going to speak more about our recommendations in that area now.

Adarsh Nagarajan: Thanks, Yochi. I was so happy to see the overlap between our findings and what the attendees are thinking. That's exactly what we found as well as we spoke with many experts from different stakeholders in industry. We all somewhat agree that model preparation is the most error prone stage, so quality control during the feeder model development process is key. Here we'll be using the next few slides to share some selected practices that can enable improved quality control during the feeder model development stage.

Let's start with a simple thing, right: the need for and the concept of hosting capacity analysis itself is intuitive and relatively easy to understand. However, what makes this process time-intensive and cumbersome is performing hosting capacity analysis on a few thousand distribution feeders again and again, with maybe a few tens to hundreds of errors. That's what makes this process hard to follow and track. Building on the recommendations already presented by Yochi, what we will be talking about in the next few slides will help you understand how best to identify the root causes of model preparation issues so that you can have higher accuracy in your results. Next slide, please.

It turns out that a large portion of the hosting capacity analysis map development process involves scripting and coding, which can also be referred to as software development or programming, depending on the intensity of coding involved. The coding might be done by one person, a small team, or a large programming team with its own project management. Another critical effort involved in hosting capacity analysis is the pursuit of automation, which is again enabled by scripting and coding. Developing scripts based on good coding practices is critical. Even in the case of using commercial tools, hiring and training engineers is critical. In the case of homegrown solutions (which means, as a utility, you develop your own scripts and programs), good coding practices will save time in the long term. So, use code management tools, such as GitHub for example. Also use appropriate professional steps, such as branching the code, making edits, and debugging the code to make sure it's good; then merging effectively back into the central code base will help with version issues, and the process will be better streamlined between the different engineers who add and modify scripts. Managing the central code base and having a central code base manager (which includes training coding teams) will alleviate pain points in the long term. Next slide please.

So, let's start with the very first step in hosting capacity analysis, which is feeder model development. Everything starts with developing a model of the feeder. And this can be referred to as the baseline feeder, which means the purpose is to accurately reflect the physical feeder that exists in the model. So, we want to match the model with the physical feeder that exists; that's the first step. Confirming accuracy in this very first step, when we baseline the distribution feeder, avoids unnecessary work for the team later in the process. Suppose we do not accurately match or validate the baseline feeder; we will have the wrong results at the last stage. In that case, we must go back all the way and restart, which is time-intensive and cumbersome. Next slide.

As we were interviewing experts, we learned there may be a few thousand feeders, or at least a few hundred, going through this process. Prioritizing the screening process is critical. Say you have models already converted into your appropriate tool or software; you want to simulate them to make sure each feeder is accurate. You need to prioritize the screening process. That means it's not wise to run chronologically from Jan. 1 to Dec. 31 for all thousand feeders and then look at the errors. It's better to look at the critical load hours and fix the issues. Critical load hours may be summer peak, summer minimum, or maybe springtime, which has low load and high solar. All these scenarios should be simulated first on the baseline feeder without adding any additional solar, just what exists as of now. Then, make sure you look at the key parameters, which we will look at on the next slide. Next slide, thank you.
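
As a rough illustration of that prioritization, the sketch below screens a baseline feeder only at a handful of critical snapshots before any full-year run; the scenario names, multipliers, and the run_power_flow placeholder are hypothetical stand-ins for whatever power-flow engine a utility actually uses.

```python
# Hypothetical critical snapshots to screen first, instead of a full chronological run.
CRITICAL_HOURS = {
    "summer_peak": {"load_multiplier": 1.00, "pv_multiplier": 0.6},
    "summer_minimum": {"load_multiplier": 0.35, "pv_multiplier": 0.9},
    "spring_low_load_high_solar": {"load_multiplier": 0.30, "pv_multiplier": 1.0},
}

def run_power_flow(feeder_id: str, load_multiplier: float, pv_multiplier: float) -> dict:
    """Placeholder for the actual power-flow call (OpenDSS, CYME, Synergi, etc.).
    Returns dummy numbers so the sketch runs; replace with real solver output."""
    return {"min_voltage_pu": 0.97, "max_voltage_pu": 1.04, "max_xfmr_loading_pct": 82.0}

def screen_feeder(feeder_id: str) -> dict:
    """Run only the critical snapshots on the baseline feeder (no added DERs)."""
    results = {}
    for name, scaling in CRITICAL_HOURS.items():
        results[name] = run_power_flow(feeder_id, **scaling)
    return results

print(screen_feeder("FDR-001"))
```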

Once you have the baseline run for critical hours, then you can look at some of these validation checks. The table that you see on this slide is a set of down-selected attributes which a team can look at. And you can automate these, so that one doesn't have to put in too much effort. One can start by looking at the voltage base. Typically, as utilities run through these, they look at per-unit voltage. That means you need to have an appropriate voltage base that matches the value of your utility feeders. It could be 13 kilovolts (kV), 12.47 kV, 11 kV, 4 kV, or 35 kV, or whatever, right. Whatever your value is, make sure it matches the substation value. You should also look at the nodal voltage violations at critical load periods. For example, you may have voltage issues around springtime in your region. In that case, you should look at those times and see whether there are voltage violations at all and whether that matches your local understanding. If you see voltage violations at minimum load that you expect, then that's fine. If you don't expect any violations, and you see them in your simulation, that's a checkpoint. Look for device overloads, including transformers and conductor thermal ratings. For example, you can look at transformer loading, and if, for whatever reason, you see many transformers loaded at 150%, which is not the scenario you are expecting, that can be a good checkpoint to go back and perform root cause analysis.

Another parameter, which is less talked about but is very important, is to check the power factor at the feeder head at peak and minimum loads. Power factor is the ratio of active power to apparent power. You want to make sure there is the right amount of reactive power being consumed by all the aggregate loads. If you see very-out-of-range values (e.g., 0.7 power factor) or a very leading power factor, that means something is wrong in your load allocation and load values. Also make sure you check the aggregate power losses at the feeder head and over the whole feeder, and make sure you understand what percentage of load is being served and what percentage of that is power loss. And make sure you look at the right active power consumption at the allocated peak load. Some of these checks can be automated, maybe most of them, and make sure you look at these before we say something is good. Next slide.
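
The kinds of checks described on this slide lend themselves to automation. Below is a minimal Python sketch of baseline validation checks on one critical-hour snapshot; the threshold values and result keys are hypothetical and would need to be tuned to a utility's own planning criteria.

```python
import math

# Hypothetical tolerance bands; tune these to your own planning criteria.
LIMITS = {
    "voltage_pu": (0.95, 1.05),       # expected per-unit voltage range at critical hours
    "xfmr_loading_pct": 120.0,        # flag transformers above this loading
    "feeder_head_pf": (0.90, 1.00),   # expected power factor range at the feeder head
    "loss_pct_of_load": 5.0,          # flag aggregate losses above this share of load
}

def power_factor(p_kw: float, q_kvar: float) -> float:
    """Power factor = active power / apparent power."""
    return p_kw / math.hypot(p_kw, q_kvar)

def baseline_checks(snapshot: dict) -> list[str]:
    """Return a list of warnings for one critical-hour snapshot (hypothetical keys)."""
    warnings = []
    v_lo, v_hi = LIMITS["voltage_pu"]
    if snapshot["min_voltage_pu"] < v_lo or snapshot["max_voltage_pu"] > v_hi:
        warnings.append("nodal voltage outside expected per-unit range")
    if snapshot["max_xfmr_loading_pct"] > LIMITS["xfmr_loading_pct"]:
        warnings.append("transformer loading above threshold")
    pf = power_factor(snapshot["feeder_head_p_kw"], snapshot["feeder_head_q_kvar"])
    pf_lo, pf_hi = LIMITS["feeder_head_pf"]
    if not (pf_lo <= pf <= pf_hi):
        warnings.append(f"feeder-head power factor {pf:.2f} out of range")
    loss_pct = 100.0 * snapshot["losses_kw"] / snapshot["feeder_head_p_kw"]
    if loss_pct > LIMITS["loss_pct_of_load"]:
        warnings.append(f"losses are {loss_pct:.1f}% of load, higher than expected")
    return warnings

print(baseline_checks({
    "min_voltage_pu": 0.92, "max_voltage_pu": 1.03,
    "max_xfmr_loading_pct": 150.0,
    "feeder_head_p_kw": 3000.0, "feeder_head_q_kvar": 3060.0,  # roughly 0.70 power factor
    "losses_kw": 95.0,
}))
```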

Moving on. Once we have a baseline feeder, once we run critical hours, once we screen them based on some suggestions from the previous slide, here we can identify a standardized approach to resolving errors. Say you run a thousand feeders, and a few hundred of them, or maybe 50, 10, whatever, may have issues. How do you screen them? How do you resolve these errors? Right, that's the next content we will be talking about. It's important to develop and follow a standardized approach to resolving errors. As validation checks are performed, things get quite complicated if you don't have the right approach, so that's what this slide is about. Our recommendation is that you develop a tracking tool. A tool which is not too complicated; just an Excel sheet is fine. A tracking tool helps you monitor which, and how many, of the circuits you have already checked have passed, have produced warnings, have errors, or have failed. Once you have an idea of how many feeders have issues and how many don't, then you can batch feeders with similar challenges. Then the team of engineers can take advantage of these systematic and repeatable issues and solve them together.

For example, the tracking tool that we recommend could be as simple as this table. Once you run screening on selected feeders, you can look at how many of them completed and passed the stage. How many of them failed entirely (just stopped), and how many of them completed but with errors? Errors could be voltage issues, transformers overloaded, a bad power factor, or higher losses than expected. How many of them just stopped, maybe because of some iteration issues or something else altogether? How many of them are in progress and, for whatever reason, are taking too long? So, if you make a little tracking tool, even as simple as this, it may help you, and you can start batch processing feeders and take advantage of that.
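
A tracking tool of that kind could live in a spreadsheet or in a few lines of code. The sketch below uses hypothetical feeder IDs, statuses, and issue labels to show how such records can be grouped so feeders with similar issues are batched and resolved together.

```python
from collections import defaultdict

# Hypothetical tracking records, one row per screened feeder.
tracking = [
    {"feeder": "FDR-001", "status": "passed", "issue": None},
    {"feeder": "FDR-002", "status": "completed_with_errors", "issue": "voltage_violation"},
    {"feeder": "FDR-003", "status": "completed_with_errors", "issue": "transformer_overload"},
    {"feeder": "FDR-004", "status": "failed", "issue": "did_not_converge"},
    {"feeder": "FDR-005", "status": "completed_with_errors", "issue": "voltage_violation"},
    {"feeder": "FDR-006", "status": "in_progress", "issue": None},
]

def batch_by_issue(rows):
    """Group feeders with similar issues so the team can resolve them together."""
    batches = defaultdict(list)
    for row in rows:
        if row["issue"]:
            batches[row["issue"]].append(row["feeder"])
    return dict(batches)

print(batch_by_issue(tracking))
# {'voltage_violation': ['FDR-002', 'FDR-005'],
#  'transformer_overload': ['FDR-003'],
#  'did_not_converge': ['FDR-004']}
```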

Once we have done this batch processing, the next step is to look at the root cause analysis. Say you have some voltage issues, or short-circuit currents are too high, or your power factor is not good, or your losses are running at 20%, which is far too high. In that case, how do you perform root cause analysis to resolve the errors? That's what we'll talk about in the next few slides.

As we interviewed and spoke with subject matter experts on this topic, we understood there are many ways to address this. However, one way we realized that we could recommend, and hope can help you all, is to streamline the root cause analysis by grouping issues into one of the four categories listed here. The issue may be coming from topological errors, which means your GIS data can be wrong. It could be because of a wrong equipment database, which means the data you have for transformers or regulators may be wrongly updated, outdated, or erroneous for whatever reason. Maybe you have wrong conductor information; the impedances could be wrong. Or it may be that your customer consumption and generation data is wrong, which means your load allocation is wrong. So, in the next few slides we'll go one level deeper into each of these so that we all have a clear understanding.

Let's look at topology verification. Once you run the feeder and see something is wrong, you can actually go through this process very quickly. You can look for unintentional islands: are there islands where electricity is not flowing because of a wrong conductor, a missing conductor, a missing node, or switches that were unintentionally opened? Look at unintentional meshes. Very few feeders at U.S. utilities typically have meshes, so make sure you know whether meshes should exist or not. Look for incorrect phase loadings. This is a little complex to check for: take a look at the voltage drop on each phase individually and see whether the difference between the phases is not too far from what you should be seeing. Look at feeder switching states so that you are running the right configuration. Look at whether voltage correction equipment is being properly managed, and make sure you have the existing DERs, so solar and storage, properly modeled.
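
Some of these topology checks can be automated with a graph library. The sketch below uses networkx to flag islands that do not contain the substation source bus and loops (meshes) in a normally radial feeder; the bus numbers and edge list are hypothetical.

```python
import networkx as nx

def topology_checks(edges, source_bus):
    """Flag unintentional islands and meshes in a feeder graph.
    `edges` is a list of (from_bus, to_bus) for closed lines/switches (hypothetical input)."""
    g = nx.Graph()
    g.add_edges_from(edges)
    findings = []
    # Islands: any connected component that does not contain the substation source bus.
    for component in nx.connected_components(g):
        if source_bus not in component:
            findings.append(f"island detected: buses {sorted(component)}")
    # Meshes: loops in what is normally a radial feeder.
    for cycle in nx.cycle_basis(g):
        findings.append(f"unintentional mesh through buses {cycle}")
    return findings

# Example: buses 6-7 are isolated (perhaps a switch wrongly modeled open),
# and buses 2-3-4 form a loop that should not exist on a radial feeder.
edges = [(1, 2), (2, 3), (3, 4), (4, 2), (4, 5), (6, 7)]
print(topology_checks(edges, source_bus=1))
```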

The second set of issues you can look at is equipment verification. You can look at substation data and default settings. Make sure you have the right model of the substation transformer. Should this be a load tap changer? Should this be a fixed transformer? Take a look at it and make sure you have the right impedances modeled. Make sure you have the right voltages. Take a look at the line regulators. Line regulators can have complex control settings. Take a look at the time delays and which phase each is connected to: is it all three phases, or individual phases? Take a look at capacitors. Make sure you have the right voltage triggers, time delays, control modes, and seasonal variations all properly modeled.

Conductor verification is the next stage you can look at. Make sure you have the right impedances; you may have the right value but the wrong unit. For example, shunt admittance can be given in siemens (mhos), while capacitance is given in farads. So how do you check that the values and the line resistances are right, when each feeder may have a few thousand conductors? You can look at reactive power losses; you can look at active power losses; you can look at the voltage drop at peak load and the short-circuit currents. All of this will help you quickly determine whether the issue is coming from wrong conductor information.

Last, but not least (and very important), is customer consumption and generation profile verification. For customer consumption profiles, the best dataset that we learned of (and also recommend) is AMI, if you have it. AMI stands for advanced metering infrastructure, as many of you may know already. It's typically used for billing purposes and is fairly accurate. If you have AMI, then that's perfect. Although it may look effortful to pull the information, it certainly adds value. The other way to look at customer consumption is feeder-head SCADA measurements, which is a common approach. Dividing feeder consumption across all customer classes is an important part of that.

Making sure that different customer classes, such as residential, commercial, and industrial, have different customer consumption profiles is also important.

The second component of the customer data is generation, which is DER. Customers may have solar; they may have storage. Make sure you have the right value and the right size, and make sure that the solar data you're inputting is from your local area. If you have AMI with a production meter, you can use that as well. If you don't, you can use local solar monitoring data. Compiling all this data is time-consuming, but it's very critical we do it; otherwise, the baseline is wrong, and then everything else we do on top of that will be wrong as well. Two proxy attributes you can look at to validate your profiles are the load allocation, where you look at how much active and reactive power is consumed and the ratio between them, and making sure that the coincident peaks for your local feeders match what you expect.
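
As a rough sketch of those two proxy checks, the code below computes the coincident peak of the allocated customer profiles, compares it to the feeder-head peak, and looks at the active-to-reactive power ratio at that hour; the array shapes, profile values, and 0.3 reactive fraction are hypothetical.

```python
import numpy as np

def profile_checks(customer_kw, customer_kvar, feeder_head_peak_kw):
    """Proxy checks on allocated customer profiles (hypothetical arrays):
    customer_kw / customer_kvar are (n_customers x 8760) hourly profiles."""
    aggregate_kw = customer_kw.sum(axis=0)          # feeder-level hourly demand
    coincident_peak = aggregate_kw.max()
    peak_hour = int(aggregate_kw.argmax())
    # Ratio of active to reactive power at the coincident peak (a proxy check on load power factor).
    pq_ratio = aggregate_kw[peak_hour] / customer_kvar.sum(axis=0)[peak_hour]
    peak_error_pct = 100.0 * abs(coincident_peak - feeder_head_peak_kw) / feeder_head_peak_kw
    return {"coincident_peak_kw": round(float(coincident_peak), 1),
            "peak_hour": peak_hour,
            "p_over_q_at_peak": round(float(pq_ratio), 2),
            "peak_mismatch_pct": round(float(peak_error_pct), 1)}

# Example with made-up profiles: 400 homes, hourly kW, mostly-resistive loads.
rng = np.random.default_rng(0)
kw = rng.uniform(0.5, 4.0, size=(400, 8760))
kvar = kw * 0.3
print(profile_checks(kw, kvar, feeder_head_peak_kw=1150.0))
```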

All in all, here is the next slide we came up with to illustrate how you can use all that we have been talking about for the last 30 minutes. There's an illustration. The way this typically goes is that you develop a baseline feeder, and you want to make sure that baseline feeder matches what exists in the real world. How do you make sure that happens? You prioritize. You run selected critical hours such as peak load, minimum load, or whatever you know is important to your local region. Then check the voltages, loading, circuit losses, circuit reactive power, and aggregate active power consumption; check all that. Then follow a standardized process.

You look at resolving errors; you look at batching the feeders that have similar issues. Once you have batched the feeders with similar issues, then do a root cause analysis. Look at topology, equipment, conductors, and customer generation profiles, and then look at question one: Is any source data erroneous? Is the issue coming from GIS? Is the issue coming from customer consumption data or SCADA? Is the issue coming from some local database at your utility that you're using? If the answer is yes, as Yochi clearly described, trigger a business process to fix the source database. Then go back and re-run the baseline feeder, and then everything starts again.

If the issue is not the source database but your scripts (which can be referred to as software as well), you need to fix that. Go debug it: branch out your code base, debug and fix it, then merge the debugged source code, and then create your feeder and do the analysis. It's very critical that we have these four steps very well validated. Although it takes time, it's important we go through this. Once everything is good, then we go on to long-term simulations and scenario generation, such as adding more solar (PV) for hosting capacity analysis and so on. This is one way to illustrate all that we have spoken about in one slide. The next part will be about validation of results, and Yochi will be taking care of that.

Yochi Zakai: Yeah, thanks, Adarsh. Next slide, please. So, we recommend that after the utility performs the power flow simulation, it establishes a process to flag irregularities that will trigger a review before publication. This table provides a consolidated list of triggers and validation procedures. Most HCA experts we interviewed perform pre-publication reviews. But they noted that the feeder model building process is most commonly the root cause of the errors identified in the visualization and data publication processes. So, I want to go over what's in the table for a second and just note that it essentially looks at the results that are provided. Are there any that are zero or null? Is there anything that's odd or duplicative? But, at the end of the day, this is just a check before publication. It's really the errors that occur earlier in the process that you will then have to track down with root cause analysis. Next slide, please.
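
A pre-publication check of that kind can be a few lines of code. The sketch below flags zero, null, and duplicate hosting capacity values before results are published; the column names are hypothetical.

```python
import pandas as pd

def prepublication_flags(results: pd.DataFrame) -> pd.DataFrame:
    """Flag rows of HCA results that warrant review before publication.
    Assumes hypothetical columns: feeder_id, node_id, hosting_capacity_mw."""
    flags = pd.DataFrame(index=results.index)
    flags["is_null"] = results["hosting_capacity_mw"].isna()
    flags["is_zero"] = results["hosting_capacity_mw"] == 0
    flags["is_duplicate_node"] = results.duplicated(subset=["feeder_id", "node_id"], keep=False)
    flags["needs_review"] = flags.any(axis=1)
    return results.join(flags)

results = pd.DataFrame({
    "feeder_id": ["FDR-001", "FDR-001", "FDR-001", "FDR-002"],
    "node_id":   ["N1",      "N2",      "N2",      "N5"],
    "hosting_capacity_mw": [2.4, 0.0, 0.0, None],
})
print(prepublication_flags(results)[["feeder_id", "node_id", "needs_review"]])
```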

We're going to look at customer feedback and regulatory oversight now. It's a best practice to accept feedback from users of the HCA. This is because customers can identify errors that the utility is not aware of and suggest improvements that the utility might not think of on its own. For these reasons, we recommend that utilities provide a mechanism to allow customers and HCA data users to provide feedback about their user experience, any errors identified, and the usefulness of the HCA data. Utilities can track the feedback they receive and report on any action taken as a result of the feedback.

Next is regulatory oversight. We recommend that regulators require utilities to prepare a data validation plan that describes the utility's data validation processes. The written plan should identify the HCA manager and describe the person's responsibilities; describe the employee and computational resources devoted to the HCA; describe the utility's standardized approach to resolving errors, including the prioritized screening process and the process used to verify the baseline model's feeder topology, equipment, conductors, and customer consumption profiles; and describe how the utility validates results before publication, as I've just discussed. We also recommend that regulators require periodic reports of metrics that monitor the utility's performance and the accuracy of the HCA results. The reports could include summaries of how many circuits passed, produced warnings, or failed at each milestone in the HCA process; the results of the root cause analysis for recurring problems; and what improvements have been identified to fix those recurring problems. If the actions identified don't fix a recurring problem at its source, the report should identify the reasons why the problem could not be fixed at the source. And, finally, the report can summarize the feedback provided by customers.

And with that, I think we're going to turn to questions. I can start. I noticed that Bernardo had asked the question about how much inaccuracy or percentage of error is acceptable in the feeder modeling stage. I don't have a specific number or percentage, but I do want to talk about metrics, which I just mentioned and which I think are really helpful. So, if you're tracking the number of errors that you have at each stage, and the root cause associated with each error, you can track in your updates whether your analysis is trending toward fewer errors at each update. And if you're identifying and resolving the errors at the root cause, ideally those metrics should trend down, and you'll see fewer errors over time. Adarsh, do you have anything else to add to that?

Adarsh Nagarajan: Yeah, that's a good answer, Yochi. I also have responded to the question right there. It depends on the attribute. For example, for voltage, being within 1% or 2% is good. For active power, much less than that is good. So, you look at the parameter; the aggregate circuit losses, for example, should not be more than 3% to 4% in the U.S. As you look at the parameters, each parameter has a range, and an experienced engineer will very quickly be able to identify those ranges and put them in place, and then a buffer around that is what you look at. It depends on the attribute.
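
That attribute-by-attribute idea can be encoded as a small lookup of tolerance bands plus a buffer, as in the sketch below; the specific percentages are hypothetical placeholders loosely following the ranges mentioned above, not prescribed values.

```python
# Hypothetical per-attribute deviation tolerances (percent); set these with your
# own experienced engineers, then allow a small buffer around them.
TOLERANCE_PCT = {
    "feeder_head_voltage": 2.0,     # roughly the 1%-2% mentioned above
    "aggregate_active_power": 0.5,  # much tighter than voltage
}

def within_tolerance(attribute: str, modeled: float, expected: float, buffer_pct: float = 0.5) -> bool:
    """Compare a modeled value to the expected value using the attribute's band plus a buffer."""
    allowed = TOLERANCE_PCT[attribute] + buffer_pct
    return abs(modeled - expected) / abs(expected) * 100.0 <= allowed

print(within_tolerance("feeder_head_voltage", modeled=12.22, expected=12.47))   # ~2.0% off -> True
print(within_tolerance("aggregate_active_power", modeled=3.15, expected=3.00))  # 5% off -> False
```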

Yochi Zakai: Yeah, great. Another question that we got a bunch of times, so I think we should take time to address it, is, "Which software is used to perform the HCA?" And so, I want to note that, at least in my work, I've seen CYME and Synergi used most often for the HCA, particularly because those are distribution modeling software packages in which you can both build a feeder model and perform an HCA. I also see a lot of investor-owned utilities using the EPRI DRIVE software. But I would note that the EPRI DRIVE software doesn't build the feeder models; it just performs the hosting capacity analysis. So, you'll have to use different distribution modeling software to build the feeder model. That creates an extra step to transfer the feeder models to DRIVE for the analysis, and I know Adarsh has worked with a couple of other software companies for this as well.

Adarsh Nagarajan: Yeah, you mentioned all of them. Some utilities, you know, use software to convert GIS data into a format that's usable for hosting capacity; sometimes it's all interconnected. There are many homegrown solutions and many existing startup companies who are actually doing a great job but may not be as popular as Synergi or CYME. There are many you can use. The idea is asking the right questions. Whatever tool you want to use, go ahead and ask the right questions; ask what you want for yourself, because the report explains hosting capacity itself very well. As you all intuitively understand, there are many details that make this very interesting, so understand what processes and tools you use, and train your engineers. Those steps are as important as which tool you use.

Yochi Zakai: Great, and I'll answer a question from Brian now, who asked about HCA methodologies and whether there's agreement or standards. The methodology used tends to vary based on the use case. I'd encourage you to look at one of IREC's recent papers called Key Decisions for Hosting Capacity Analysis, which goes into more detail on the various decisions that go into developing an HCA, including a discussion of the different use cases. The most common use case we've seen so far is to guide interconnection decision making. In California, the HCA is also used in the interconnection screening process itself, and the Minnesota Commission has established a goal to work toward using hosting capacity analysis in the interconnection screening process also. When you're looking at the interconnection use case, as opposed to distribution planning or locational value, the iterative methodology is the most common one used, and the EPRI DRIVE tool labels this the centralized methodology, so there are a couple of different names for it. Adarsh, do you want to answer any of the more technical questions that have come through?

Adarsh Nagarajan: I see one question from Romero regarding fuse ratings during HCA. It's a very good question. Hosting capacity analysis, as it's currently defined, does not look at protection. Protection settings are handled afterwards, because protection doesn't change for every 5 kW you add, even though ratings are important. The existing hosting capacity analysis tools do not do a fuse rating analysis as part of hosting capacity analysis. What I meant by short-circuit currents is understanding, when there is a fault somewhere in the feeder, what currents flow. That helps you understand whether the conductor impedances are in range or not. For example, with any tool (OpenDSS, Synergi, or CYME), if you run a short-circuit study, they place a bolted fault on different conductors, from the first conductor to the last, and give you the short-circuit currents at different locations on the feeder. That's what I meant, not the fuse ratings. Generally, fuse ratings are not looked at for hosting capacity analysis. A separate process is used for reconfiguring fuse ratings, and that's not done very often.

Yochi Zakai: If you want to keep going, that's fine; otherwise, I have one that I can answer.

Adarsh Nagarajan: You go ahead. We can take turns.

Yochi Zakai: Yes, Stephen asked about small municipal electric providers that might not have personnel capable of performing the kind of robust HCA that we're talking about here. And you know, I think this really comes back to the resources that each utility has, and there are a lot of consultants out there that can help with this type of process. And I know NREL has helped some utilities in performing hosting capacity analysis as well. So yeah, if in-house options aren't good, I think there are consultants and other folks available to help. I know that one consumer-owned utility, the New Hampshire Electric Cooperative, took this approach and hired a consultant to perform their hosting capacity analysis. Why don't you go ahead with another question, Adarsh?

Adarsh Nagarajan: Yeah, I see another quick mention that mainline fuses influence the evaluation, which is also correct. That's true. That's something that's very critical; it could be a mainline fuse, could be a recloser, could be different protective devices used. But yes, it's important we account for that. And another question I saw was about upgrading. As a conductor is upgraded, it's important that we use the business process to make sure the GIS has the updated data. Then the hosting capacity analysis process is good; you don't need to do anything in your code base to address that change. And with any utility upgrade of any sort, it's very critical that the business process manager ensures that that update has already been made in the source database in the appropriate way. That will actually help in the long term.

Yochi Zakai: And I see that Tyler asked the question that maybe you could answer. "What do you mean by checking aggregate active power in the baseline model validation?"

Adarsh Nagarajan: That means ... okay, again, I'll give you a simple answer, right. You have a feeder, and you know the peak load is 3 MW based on your experience at the utility. As you allocate that 3 MW of load across 400 or 1,000 houses, it's important we double-check it. As we allocate the load that is supposed to be consumed at peak across many distributed homes, commercial, or industrial customers, the aggregate of all the consumption happening at all those houses, commercial, and industrial customers should match what you expect at the feeder head. That's what we are referring to as aggregate active power. Hope it's not too confusing.
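
As a quick worked example of that check (with hypothetical numbers): a feeder with an expected 3 MW peak, allocated across 950 homes and 5 larger customers, should sum back to roughly 3 MW.

```python
# Hypothetical allocation check: the sum of loads allocated to individual
# customers at peak should roughly match the expected feeder-head peak.
expected_feeder_peak_kw = 3000.0                 # 3 MW from utility experience
allocated_peak_kw = [2.8] * 950 + [65.0] * 5     # 950 homes plus 5 commercial customers
aggregate_kw = sum(allocated_peak_kw)
mismatch_pct = 100.0 * abs(aggregate_kw - expected_feeder_peak_kw) / expected_feeder_peak_kw
print(f"aggregate allocated peak: {aggregate_kw:.0f} kW ({mismatch_pct:.1f}% off expected)")
```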

Yochi Zakai: And now we're about out of time, but we have one more question left, so I think we might as well get them all out. The last one was asking about calculating how much load can be added, such as from electric vehicles or, you know, maybe batteries that are charging from the grid, or electrification. I will point out that most hosting capacity software has the ability to evaluate the addition of loads as well as the addition of generation. But at least in California, where that analysis has been performed, we have actually continued to see some errors in the analysis. And so, I would say that that is an area where this whole data validation process that we're discussing still needs to be applied a bit more, and, to date at least, it hasn't been the focus. We're still seeing errors occur in that process, so I think that there is room for improvement.

Jess Townsend: Alright, I dropped some email addresses (if there are follow-up questions) into the chat for everyone as well as that follow-up survey and questionnaire that I referenced at the beginning of the webinar. And otherwise, we just want to say thank you to everyone, including our panelists, for joining us today. Thank you so much. We will be emailing these links as well as the final presentation and the recording to all our attendees today. Thank you so much. Have a good day, everyone.

Radina Valova: Thank you very much.

Yochi Zakai: Thanks for joining us. Bye.

