surface area units: 1 cord (cord) = 0.0000015 square kilometers (km², sq km). The prefix or symbol for cord is: cord. The prefix or symbol for square kilometer is: km², sq km. This is a technical units conversion tool for surface area measures: it exchanges a reading in cords (cord) into square kilometers (km², sq km) as an equivalent measurement, two different units representing the same physical value, which also holds for their proportional parts when divided or multiplied. How many square kilometers are contained in one cord? To link to this surface area converter (cord to square kilometers), simply cut and paste the following link code into your HTML.
http://convert-to.com/conversion/area-surface/convert-cord-area-to-square-kilometers-km2.html
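The conversion above reduces to a single multiplication by the tabulated factor. A minimal sketch in Python; the factor 0.0000015 km² per cord is taken from the table above, and the function name is illustrative, not part of any library:

```python
# Convert an area in cords to square kilometers.
# Factor taken from the conversion table above: 1 cord = 0.0000015 km^2.
CORD_TO_KM2 = 0.0000015

def cords_to_square_km(cords: float) -> float:
    """Return the area in square kilometers equivalent to `cords` cords."""
    return cords * CORD_TO_KM2

print(cords_to_square_km(1))       # 1.5e-06 km^2
print(cords_to_square_km(100000))  # 0.15 km^2
```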
Hurricane Olaf has strengthened into a major category four hurricane as it moves far southeast of the Hawaiian Islands, forecasters say. The National Hurricane Center said in its 2 p.m. PT advisory on Monday that Olaf's maximum sustained winds had increased to near 130 miles (215 kilometers) per hour, making it a major category four hurricane; Olaf was only a category two hurricane just hours earlier. The eye of Olaf is located about 1,280 miles (2,065 kilometers) east-southeast of Hilo, moving in a westward direction at 12 miles (19 kilometers) per hour. This track takes the hurricane closer to the Hawaiian Islands, but long-term forecasts expect the hurricane to turn towards the north later this week. “Olaf has continued to rapidly intensify today. The hurricane has a classical appearance on satellite imagery with a small, clear eye,” said NHC senior hurricane specialist Michael Brennan. “Olaf has strengthened 45 knots (52 mph/83 km/hr) in the last 24 hours, and some additional strengthening is still possible in the next day or two.” Olaf is expected to stay a major hurricane through Friday morning.
https://www.streetwisejournal.com/hurricane-olaf-strengthens-into-a-category-4-hurricane-far-off-hawaii/
speed and velocity units: 1 kilometer per minute (km/min) = 1,574,803.15 yards per day (yd/d). The prefix or symbol for kilometer per minute is: km/min. The prefix or symbol for yard per day is: yd/d. This is a technical units conversion tool for speed and velocity measures: it exchanges a reading in kilometers per minute (km/min) into yards per day (yd/d) as an equivalent measurement, two different units representing the same physical value, which also holds for their proportional parts when divided or multiplied. How many yards per day are contained in one kilometer per minute? To link to this speed and velocity converter (kilometer per minute to yards per day), simply cut and paste the following link code into your HTML.
http://convert-to.com/conversion/speed/convert-km-per-minute-to-yd-per-day.html
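The tabulated factor can be derived from the unit definitions alone (1 yd = 0.9144 m exactly; 1 day = 1,440 minutes). A short sketch assuming only those definitions; the function name is my own:

```python
# Derive the km/min -> yd/day factor from exact unit definitions.
METERS_PER_YARD = 0.9144   # exact, by international definition
MINUTES_PER_DAY = 24 * 60  # 1,440 minutes in a day

def km_per_min_to_yd_per_day(speed_km_min: float) -> float:
    """Convert a speed in kilometers per minute to yards per day."""
    meters_per_day = speed_km_min * 1000 * MINUTES_PER_DAY
    return meters_per_day / METERS_PER_YARD

print(round(km_per_min_to_yd_per_day(1), 2))  # 1574803.15, matching the table
```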
How Many Miles Is 13.5 Km? You’re probably wondering how many miles are in 13.5 kilometers. First, the kilometer is the metric unit of length, equal to 1,000 meters; it is used to measure land distances in most countries. 13.5 km equals about 8.389 mi, and the same conversion factor applies whatever distance you are measuring. This article covers both imperial and metric distances. The standard unit of length in the United States is the mile, equal to 1,760 yards, 5,280 feet, or 1.609344 kilometers; its symbol is “mi”. To convert 13.5 km to miles, divide by 1.609344; the result, about 8.389, is written as 8.389 mi in both British Imperial and US Customary units. Another quick way to convert kilometers per hour to miles per hour is to use a metric speed calculator. These programs are easy to use: enter the value in the left-hand field and click CONVERT, and the result appears in the right-hand field. To determine how many miles per hour correspond to 13.5 kilometers per hour, you can use the same formula, which works equally for speeds in km/h or mph.
https://sonichours.com/how-many-miles-is-13-5-km/
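Both conversions in the article use the same exact factor, 1 mi = 1.609344 km; distances and speeds divide by it alike. A small illustrative sketch (the function names are my own, not from any library):

```python
KM_PER_MILE = 1.609344  # exact, by international definition

def km_to_miles(km: float) -> float:
    """Convert a distance in kilometers to miles."""
    return km / KM_PER_MILE

def kmh_to_mph(kmh: float) -> float:
    # Speeds convert with the same factor as distances.
    return kmh / KM_PER_MILE

print(round(km_to_miles(13.5), 3))  # 8.389 mi, as quoted in the article
print(round(kmh_to_mph(13.5), 3))   # 8.389 mph for 13.5 km/h
```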
In its short life, Hurricane Michael has certainly developed quickly — it only achieved tropical storm status four days ago, on Sunday, October 7. And now, according to the latest bulletin from the National Hurricane Center (NHC), as of 12:00 UTC today, Michael has strengthened further into an extremely dangerous Category 4 major hurricane, with maximum sustained wind speeds of 145 miles per hour (233 kilometers per hour). The 09:00 UTC NHC bulletin placed Michael at about 140 miles (225 kilometers) south-southwest of Panama City, Florida, tracking north at 13 miles per hour (20 kilometers per hour). Hurricane-force winds currently extend outward up to 45 miles (75 kilometers) from the center and tropical-storm-force winds extend outward up to 185 miles (295 kilometers). According to the latest RMS HWind Forecast Storm Track Probabilities and Deterministic Scenarios chart, Panama City has a 95 percent probability of the center of Michael passing within 50 miles.
https://www.rms.com/blog/tag/category-4-hurricane/
1 March 1925: Ryan Airlines Incorporated, founded by Tubal Claude Ryan and Frank Mahoney, began a regularly-scheduled passenger airline service, the Los Angeles–San Diego Air Line. The airline connected San Diego and Los Angeles, the two largest cities in southern California. The airplanes used were the Douglas Aircraft Company’s first airplane, a Davis-Douglas Cloudster modified to carry as many as ten passengers, and three Standard Aero Corporation J-1 trainers, each modified to carry four passengers. Scheduled flights departed Los Angeles for San Diego at 9:00 a.m., daily, and San Diego for Los Angeles at 4:00 p.m., daily. The fare for a one-way flight was $14.50, and a round trip was $22.50. The photograph below (from the collection of the San Diego Air and Space Museum) shows opening day activities at Dutch Flats, near the current intersection of Midway Drive and Barnett Avenue, in the city of San Diego. The Davis-Douglas Cloudster was the first airplane built by the Davis-Douglas Company, Donald Douglas’s first venture, at Santa Monica, California. Douglas’s investor, David R. Davis, had asked for an airplane to attempt a non-stop cross-country flight. The Cloudster was a two-place, single-engine, single-bay biplane. It was 36 feet, 9 inches (11.201 meters) long, with a wingspan of 55 feet, 11 inches (17.043 meters) and a height of 12 feet, 0 inches (3.658 meters). Its gross weight was 9,600 pounds (4,355 kilograms). The Cloudster was powered by a water-cooled, normally-aspirated, 1,649.34-cubic-inch-displacement (27.028 liter) Liberty 12 single overhead cam (SOHC) 45° V-12 engine, which produced 408 horsepower at 1,800 r.p.m. and drove a two-bladed, fixed-pitch propeller. The Cloudster had a cruise speed of 85 miles per hour (137 kilometers per hour) and a maximum speed of 120 miles per hour (193 kilometers per hour). Its normal range was 550 miles (885 kilometers), but when equipped for the transcontinental flight, its range was increased to 2,700 miles (4,345 kilometers). The Cloudster first flew on 24 February 1921. It was the first airplane capable of lifting a payload greater than its own weight. On the transcontinental attempt, the airplane had flown 785 miles (1,263 kilometers) in 8 hours, 45 minutes, when a timing gear failed, forcing a landing in Texas. The airplane was shipped back to Santa Monica for repairs. Before another attempt could be made, Lieutenant John Arthur Macready and Lieutenant Oakley George Kelly, United States Army, made a successful non-stop flight with a Nederlandse Vliegtuigenfabriek Fokker T-2, 2–3 May 1923. After this, Davis pulled out of the company. The Cloudster was sold to Ryan for $6,000. During Prohibition, the Cloudster was used to fly contraband alcoholic beverages into the United States from Mexico. In December 1926, it made a crash landing on a beach near Ensenada, Baja California, Mexico, and was damaged beyond repair.
https://www.thisdayinaviation.com/tag/donald-douglas/
Mars Pathfinder, now eight days away from landing on the surface of Mars, performed the last of its scheduled trajectory correction maneuvers at 10 a.m. Pacific Daylight Time on Wednesday, June 25. The correction maneuver was performed in two phases occurring 45 minutes apart. The first burn, lasting just 1.6 seconds, involved firing four thruster engines on one side of the vehicle. The second burn lasted 2.2 seconds and involved firing two thrusters closest to the heat shield. The combined effect of both burns changed Pathfinder's velocity by 0.018 meters per second (0.04 miles per hour), which places the spacecraft on target for a July 4 landing in an ancient flood basin called Ares Vallis. Pathfinder is scheduled to land at 10:07 a.m. PDT (in Earth-received time). The one-way light time from Mars to Earth is 10 minutes, 35 seconds, so in actuality, Pathfinder lands at 9:57 a.m. PDT. If necessary, a fifth trajectory correction maneuver may be performed just before Pathfinder hits the upper atmosphere of Mars. The maneuver would be carried out either 12 hours or six hours before Pathfinder reaches the atmosphere at 10 a.m. PDT in Earth-received time. The flight team will make a decision to proceed with the final correction maneuver the evening before landing. A final health check of the spacecraft and rover was performed on June 20. All spacecraft systems, including science instruments and the critical radar altimeter, remain in excellent health from the last check about six months ago. The rover received a "wake up" call, woke up on command from the lander, then accepted a software upgrade. Flight controllers next loaded the 370 command sequences that will be required by Pathfinder to carry out its surface operations mission. The spacecraft is now ready to begin its entry, descent and landing phase. It will be commanded into that mode at 1:42 p.m. PDT on June 30 by an onboard sequence. Mars Pathfinder is currently about 180 million kilometers (111 million miles) from Earth and about 3.5 million kilometers (2.2 million miles) from Mars. After 202 days in flight, the spacecraft is traveling at about 18,000 kilometers per hour (12,000 miles per hour) with respect to Mars.
https://mars.nasa.gov/MPF/mpf/status/mpfstatus_062797.html
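The gap between the Earth-received landing time (10:07 a.m. PDT) and the actual event (about 9:57 a.m.) in the status report above is just the one-way light travel time. A rough worked sketch; the 190-million-kilometer Earth–Mars distance at landing is my assumption, chosen to reproduce the quoted 10 minute 35 second light time, and is not stated in the report:

```python
# One-way light time between Mars and Earth, and the resulting offset
# between Earth-received landing time and actual landing time.
SPEED_OF_LIGHT_KM_S = 299_792.458

# Assumed Earth-Mars distance at landing (~190 million km), chosen to
# roughly match the ~10 min 35 s light time quoted in the status report.
distance_km = 190e6

light_time_s = distance_km / SPEED_OF_LIGHT_KM_S
minutes, seconds = divmod(round(light_time_s), 60)
print(f"one-way light time: {minutes} min {seconds} s")  # ~10 min 34 s

# An Earth-received landing at 10:07 a.m. PDT therefore corresponds to an
# actual landing roughly one light time earlier, about 9:56-9:57 a.m. PDT.
```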
Santo Domingo, September 19 (EFE).- Hurricane Fiona, a category 1 storm, entered the Dominican Republic on Monday through Cabo San Rafael (in the east of the country), with winds of 140 kilometers per hour and stronger gusts, reported the National Meteorological Office (Onamet). In its latest bulletin, Onamet explained that according to satellite and radar imagery, Fiona entered this morning local time and at 04:00 local time (08:00 GMT) was approximately 20 kilometers southwest of Punta Cana. Moderate to heavy rain is being recorded there, with maximum sustained winds of 79 kilometers per hour and gusts of around 124. This hurricane, the third of the current hurricane season, is moving west-northwest at around 15 km/h. Hurricane-force winds extend about 45 kilometers from its center and storm-force winds about 220 kilometers. If it maintains its trajectory, the center of Fiona will move through different provinces of the country in the next few hours. Satellite images show dense and compact cloud activity that is generating moderate to heavy showers and storms in several provinces of the country and parts of Greater Santo Domingo, Onamet adds. Accumulated rainfall is forecast to vary between 100 and 300 millimetres, although it may be higher at isolated points and reach around 450 millimetres. Hurricane Fiona keeps the country on alert. Faced with this situation, the alert for urban flooding, flooding of rivers, streams and ravines, and landslides in several provinces of the country remains in place. Much of the Dominican Republic will be under the effects of the hurricane, so the whole country is on alert (thirteen provinces in red, the maximum level, including Greater Santo Domingo, and eighteen in yellow). This Monday has been declared a holiday and classes have been suspended. Fiona is expected to leave Dominican territory as a category 2 hurricane on the Saffir-Simpson scale, out of a maximum of 5. Hurricane Fiona arrived in the Dominican Republic after hitting Puerto Rico on Sunday, where it caused damage described as “catastrophic”, a general power outage and extensive flooding. Edited by Rocio Casas.
https://nationalmalldesign.org/2022/09/19/hurricane-fiona-enters-the-dominican-republic/
Updated: As predicted, Tropical Storm Aletta has developed into a hurricane, making it the first to be registered in Mexican waters this season. Aletta was being monitored in the Pacific earlier in the week and was predicted to reach hurricane strength by Thursday or Friday. On Thursday, Aletta officially became a Category 2 hurricane, the first of the 2018 season. At 11:00 p.m. Thursday, the National Weather Commission reported her advance to a Category 2 storm with maximum sustained winds of 155 kilometers per hour and gusts of 195 kilometers per hour as she slowly moved west-northwest at 9 kilometers per hour. Hurricane Aletta was situated 740 kilometers west-southwest of Manzanillo, Colima and 365 kilometers south-southeast of Isla Socorro off the Mexican coastline. Meteorologists say that Aletta does not pose any danger to Mexico as she continues to head out to sea. Update: Since the morning hours of Friday, Aletta has developed into a Category 4 hurricane. The National Meteorological Service reports Aletta has winds of 195 kilometers per hour and gusts exceeding 240 kilometers per hour. Hurricane season for the Eastern Pacific runs May 15 to November 30 and starts June 1 in the Central Pacific, beginning two weeks earlier than the official season for the Atlantic Basin. The first tropical depression for the Pacific formed May 10. Aletta is the second tropical depression and the first named hurricane for the Pacific this season.
https://www.riviera-maya-news.com/mexico-registers-the-first-hurricane-of-2018/2018.html
A new railway line that connects two key cities of Southwest China and plays an important role in linking the region to the ASEAN nations of Southeast Asia was officially launched on 26 December 2022. The Chengkun railway runs a total of 915 kilometers from Chengdu, in Sichuan province, to Kunming, in Yunnan province, with intermediate stops that include Meishan, Leshan, Liangshan, Panzhihua and Chuxiong. From Kunming, the network continues on to Laos and beyond to Thailand, Vietnam and ultimately Singapore. This Class I double-track electrified rail route carries trains at speeds of up to 160 kilometers per hour and will make a significant impact on travel times, reducing the journey from Kunming to Panzhihua, Xichang and Chengdu to 2.5 hours, 4 hours and 6 hours respectively. Traffic on the original line has increased substantially in recent years, and this new track will thus offer much-needed additional capacity to meet the demands of such a busy route. Work on the original Chengdu-Kunming line began in 1958, requiring 12 intensive years of construction before it finally opened in 1970. The complex topography of the route, which runs through significant mountain ranges, means it has sometimes been referred to as a 'geological museum'. The immense amount of manpower that went into engineering the line has seen it recognized as one of the crowning achievements of China's vast rail network. Like its predecessor, this new line has required incredible engineering and labor efforts in order to come to fruition, crossing multiple rivers and traversing the Emei, Liangshan and Hengduan mountain ranges that lie between the two cities. Unavoidably, given the rugged landscape, the tracks pass through multiple tunnels, the longest of which is 22 kilometers in length, as well as crossing many purpose-built bridges that span the region's famed rivers. Booking for the new line opened on December 24. Those wanting to reserve tickets can do so via the usual official channels, such as the 'China Railway' WeChat account or the 12306.com website.
https://www.chinakunming.travel/en/blog/item/4595/kunming-and-chengdu-now-connected-by-high-speed-railway
HCMC – Super Typhoon Noru, the fourth storm to hit Vietnam this year, has been strengthening since last night and is expected to intensify further as it heads for the coast of Central Vietnam, said the national weather forecast center. As of 7 a.m. today, September 27, the typhoon was in the southern part of Vietnam’s Hoang Sa (Paracel) Islands, some 360 kilometers east of the mainland between Danang and Quang Ngai in the central region. The storm brought strong winds of 150-183 kilometers per hour, gusting at level 17. Noru is moving westward at 25 kilometers per hour and is likely to strengthen over the next 12 hours. By 7 p.m. today, the eye of the typhoon is forecast to be around 170 kilometers southeast of Danang and some 120 kilometers east of Quang Nam and Quang Ngai, packing winds of up to 183 kilometers per hour. Noru may keep moving in a westerly direction at 20-25 kilometers per hour and make landfall in the next 12-24 hours. The system could maintain the same direction at about 20 kilometers per hour, moving deep inland before weakening into a tropical depression and then a low-pressure area over Thailand in the next two days. Aside from bringing strong winds, Noru may be accompanied by high tides, extremely rough seas and flooding across coastal areas, including those from Quang Binh to Ninh Thuan and from Binh Thuan to Ca Mau. It could also dump heavy rains on Quang Tri, Thua Thien-Hue, Danang, Quang Nam, Quang Ngai and Kon Tum from today to tomorrow. Torrential rains are likely to expand to north-central provinces and the southern part of the northern region.
https://english.thesaigontimes.vn/super-typhoon-noru-rapidly-intensifies/
Image above: Hurricane Dennis was bearing down on the Gulf Coast of the United States on July 10, 2005, at 12:15 p.m. (16:15 UTC) when the Moderate Resolution Imaging Spectroradiometer on NASA’s Terra satellite captured this image. With winds of 135 miles per hour (217 kph), Dennis was a powerful Category 4 storm just hours away from making landfall. At the time this image was taken, the eye of the storm was about 55 miles (90 kilometers) south-southeast of Pensacola, Florida, and the storm was moving northwest at about 18 miles per hour (29 kph). The size of the storm put clouds of rain over most of the southeastern United States well before the storm came ashore. In this image, Dennis covers all of Florida, Alabama and Mississippi, and stretches over parts of Louisiana. The northern fringes of the storm appear to be over Tennessee and North Carolina. Image above: More than a million people are evacuating the coastal areas of Florida and Alabama as Hurricane Dennis steadily approaches. The first hurricane of the 2005 Atlantic hurricane season, Dennis has already been a deadly storm. It crossed over Cuba on July 8 and 9, leaving at least 10 dead, and caused additional deaths in Haiti. After re-emerging over open water, Dennis re-strengthened into a dangerous Category 3 hurricane with winds approaching 115 miles per hour when this image was taken at 2:45 p.m. EDT on July 9, 2005. The Moderate Resolution Imaging Spectroradiometer on NASA’s Aqua satellite captured this image of the storm sliding up Florida’s west coast. The National Hurricane Center warns that Dennis continues to strengthen and may become a powerful Category 4 hurricane before making landfall over the northern Gulf Coast on July 10. Image above: Hurricane Dennis threaded its way between Jamaica and Haiti on a direct course for Cuba on July 7, 2005. The storm now has the distinctive hurricane form, with a well-defined eye surrounded by bands of swirling clouds. At 10:50 a.m. local time (15:50 UTC), when the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite took this image, Dennis was just below Category 3 strength, with winds of 175 kilometers per hour (110 miles per hour) and stronger gusts. Less than an hour before this image was taken, the storm’s small dark eye was about 105 kilometers (65 miles) northeast of Kingston, Jamaica and 170 kilometers (105 miles) south-southeast of Guantanamo, Cuba. The National Hurricane Center reports that Dennis is traveling northwest at about 24 kilometers per hour (15 mph). A storm of this size is a threat not just because of its powerful winds: Dennis is expected to produce heavy rain and coastal and inland flooding. Five to ten inches of rain may fall over Haiti, the Dominican Republic, Jamaica, Cuba, and the Cayman Islands, with as much as 15 inches falling in parts of Jamaica. Heavy rainfall can trigger flash floods and mudslides in mountainous regions. The storm will probably also raise tide levels by five to seven feet and generate large and dangerous waves. Dennis is expected to strengthen as it moves north towards the Gulf Coast of the United States. For official storm warnings and additional information, please visit the National Hurricane Center.
Image above: Tropical Storm Dennis. The swirling clouds of Tropical Storm Dennis span from the northern tip of Venezuela to the southern half of the island of Hispaniola in this Moderate Resolution Imaging Spectroradiometer (MODIS) image. NASA’s Terra satellite captured this image on July 6, 2005, at 10:05 a.m. local time (15:05 UTC), when Dennis was building to winds of 110 kilometers per hour (70 mph). The storm is moving northwest across the Caribbean and should pass between the eastern arm of Haiti and Jamaica, hammering both with four to eight inches of rain. The National Hurricane Center predicts that Dennis may become a major hurricane—Category 3 or higher—by July 8.
https://www.nasa.gov/vision/earth/lookingatearth/h2005_dennis.html
Representatives of European and Mediterranean institutions, public authorities, scientific and research organisations, protected area managers and conservationists gathered in Brussels on 4-5 December to support a holistic approach to enhancing and protecting Mediterranean biodiversity. This two-day gathering was organised through the Interreg Med Biodiversity Protection Community. This community has been developed through the PANACeA project, which involves the CPMR as a partner. As part of the gathering, a Public Hearing was held on 5 December at the European Parliament on ‘Ecosystems in Danger: Enhancing EU Policy response’, in partnership with the SEArica Intergroup and the Interreg Med Biodiversity Protection Community. Prominent among the discussions was the need for urgent action to ensure ecosystem connectivity and to regulate socio-economic activities. There was also a call for science-based management in key ecologically significant areas suffering from uncontrolled human activities inside and outside protected areas. Gesine MEISSNER, Member of the European Parliament and Chair of the SEArica Intergroup, stated: “We need more resources of the sea, blue biotechnology, aquaculture, blue energy, shipping; on the other side, we need marine protected areas and we are having a delay in reaching the right percentage”. She expressed her support for the hearing’s joint ‘Declaration on Ecosystem-based approaches for biodiversity protection and management’, which was also signed by the other MEPs who took part in the discussions: Marco Affronte, Davor Škrlec and Francesc Gambus. Sergi TUDELA, of the Directorate-General for Fisheries and Maritime Affairs at the Ministry of Agriculture, Livestock, Fisheries and Food of the Government of Catalonia, also presented the maritime strategy of Catalunya, which incorporates co-management as an essential tool. He stated: “Governance and cooperation are essential components in effectively addressing the challenges facing Mediterranean biodiversity”. The public hearing presented the key messages of the 12 projects and other collaborators in the Med Biodiversity Protection Community, whose partners met on 4 December for a collaborative workshop to coordinate efforts and contribute to an active open dialogue with policy institutions in the Mediterranean. The lively debate involved the European Commission’s DG Environment, DG Research, DG MARE, UNEP MAP and its Regional Activity Centres, and key project institutions active in environmental research, assessment and governance mechanisms towards biodiversity protection. The discussions were summarised by PANACeA project coordinator Dania Abdul Malak, ETC-UMA, who called for the adoption of the Declaration as a common roadmap for two types of processes envisaged in the Mediterranean: the enforcement of proper management of protected areas through network design and best-practice management, and the use of an ecosystem approach addressing ecological sensitivity and transboundary impacts outside protected areas. Mediterranean ecosystems are collapsing, and new innovative approaches to managing these indispensable systems are urgently needed to halt the collapse, reduce decline in condition and allow for recovery and resilience. As stated by Ameer Abdulla, IUCN WCPA: “Biodiversity and natural resources must now be better managed within and beyond protected area boundaries and across national borders.
Maritime Spatial Planning, along with Integrated Wetland Management and Integrated Coastal Zone Management, are essential tools to better manage biodiversity and human use in an ecosystem-based approach.” The Declaration underlines the necessity of a long-term vision for the development of the monitoring tools, harmonised methodologies, protocols and knowledge base needed for the successful implementation of EU directives and Mediterranean processes on the environment. This includes the EU Marine Strategy Framework Directive and the EcAp process, which still lack the monitoring tools required for their final implementation stages. Acknowledging a shared vision in this joint Declaration, the meetings in Brussels fostered stronger visibility and commitment by European and Mediterranean policy-makers, scientists, protected area managers and environmental actors to enter into a deeper and longer-term collaboration. An ecosystem-based dialogue between science, policy and management, involving socio-economic and co-responsibility schemes, was recognised by participants as a prerequisite to better decision-making processes that ensure the long-term viability of our ecosystems and natural resources and of the Mediterranean societies that depend on them. For more information and photos, press contacts: Lise Guennal, +33 6 76 59 12 86; Sonsoles San Roman. Presentations: Public Hearing, EU Parliament, 5 December 2018; ‘Enhancing EU policies with ecosystem-based approaches’, 4 December 2018. Further information: The objective of PANACeA is to streamline networking and management efforts in Mediterranean Protected Areas (PAs) as a mechanism to enhance nature conservation and protection in the region. The project aims to ensure synergies between relevant Mediterranean stakeholders – including managers, policy-makers, socio-economic actors, civil society and the scientific community – and to increase the visibility and impact of their projects’ results towards commonly identified strategic targets. PANACeA builds a community of nature conservation stakeholders in the Mediterranean and acts as the communication and capitalisation instrument for projects dealing with the protection of biodiversity and natural ecosystems. Through its tool, the Mediterranean Biodiversity Protection Knowledge Platform, PANACeA ensures the transfer of synthesised project outcomes and their dissemination across and beyond the region. The main thematic focus areas include coastal and marine management, biodiversity monitoring, sustainable use of natural resources, management of protected areas, global changes, governance and cooperation, and scientific and innovative methodologies, under the umbrella of a series of projects: ACT4LITTER (marine litter in marine protected areas) and additional projects to be included in the coming period, AMAre (marine spatial planning and PAs), CONFISH (network of fish stock recovery areas), ECOSUSTAIN (water quality monitoring solutions in protected wetlands), FishMPABlue2 (governance of artisanal fisheries in PAs), MEDSEALITTER (marine waste management), MPA-ADAPT (adaptation of MPAs to climate change), POSBEMED (strategy for joint management of Posidonia beaches and dunes), WETNET (wetland governance), PLASTIC BUSTERS MPAs (marine litter in MPAs) and PHAROS4MPAs (blue economy and marine conservation).
https://cpmr.org/maritime/protected-areas-are-not-enough-cpmr-supports-ecoregional-approach-to-biodiversity-protection/20092/
ACB joins the Asian initiative on ecosystem restoration. The forests of Asia are of immense ecological, social and economic importance, covering 549 million hectares, or 14 percent of the total global forest coverage. The area provides vital ecosystem services and protection from climate impacts for the 4.5 billion people living in the region. These ecosystems contribute to the spiritual, cultural and physical well-being of people in Asia and the Pacific. With increasing pressures on biodiversity in recent years, the conservation of vital habitats and ecosystems has become an urgent priority. Executive Director Dr. Theresa Mundita S. Lim of the ASEAN Center for Biodiversity (ACB) said the economic benefits arising from the sustainable use of biological resources are essential to the overall stability of ASEAN. “Disruptions to these vital ecological processes can therefore have substantial or even serious impacts affecting the safety, health and well-being of individuals and communities,” Lim said. ACB joined the ‘International Symposium on Ecosystem Restoration for Green Asia and Peace’, an online event held on August 18 that aimed to build networks among forest-related institutions in the Asian region, policy makers and international organizations. The symposium was organized by the Korea Society of Forestry Sciences and the Institutes of Biosciences and Green Technologies at Seoul National University. It highlighted success stories and lessons learned from ecosystem restoration projects and programs in Asia; regional organizations such as ACB and the Asian Forestry Cooperation Organization (AFoCO) shared and discussed their respective greening strategies. These reforestation initiatives contribute to the United Nations Decade on Ecosystem Restoration, a global call to rehabilitate and restore the world’s vulnerable ecosystems. ASEAN Green Initiative. Among ASEAN’s responses to the global call for ecosystem restoration is the ASEAN Green Initiative (AGI), which was launched on August 6. Led by ACB and the ASEAN Secretariat, the AGI aims to recognize the best ecosystem restoration activities in the region that focus on a holistic and participatory approach to the regeneration and conservation of vital ecosystems and habitats for wildlife. The initiative encourages the planting of at least 10 million native trees in the 10 ASEAN Member States (AMS) over a period of 10 years – or 10.10.10 – in harmony with the United Nations Decade on Ecosystem Restoration. “The 10.10.10 target is just the start of a collective greening movement in the region, and even beyond,” Lim said. She stressed that “meaningful collaboration and cooperation among development and dialogue partners” are essential to intensify regeneration and restoration efforts. Establishing partnerships across Asia. After the symposium, ACB and AFoCO met to discuss common areas of collaboration. Capacity development for forestry and biodiversity conservation, mapping of degraded ecosystems and promotion of the AGI were among the steps identified during the meeting. The formation of a working group composed of representatives of the two regional organizations is in preparation to better flesh out the concept and plans of the future partnership. The ACB also had initial talks on a possible partnership with the Republic of Korea, particularly in the area of coastal and marine conservation.
Lim said Korea’s green growth policies could be synchronized with ACB’s efforts to mainstream biodiversity into various sectors, including business, industry and finance. During the symposium, she pointed out that pro-nature prospects and processes in the economic and financial sectors would alleviate the pressure of land-use expansion and conversion, which has a huge impact on large areas of forests and other vital ecosystems. Restoring ecosystems is a massive global endeavor that requires a whole-of-society approach. Thus, cultivating these partnerships and forging solid cooperation within and beyond ASEAN is essential to rebuilding better and more ecologically. In addition to ACB and AFoCO, other regional organizations, such as the Center for International Forestry Research and the Mekong Institute, attended the international symposium, as well as resource persons from Cambodia, Indonesia, South Korea, the Philippines, Mongolia, Vietnam and Uzbekistan, who also shared their respective greening strategies.
https://birdlifemed.org/acb-joins-the-asian-initiative-on-ecosystem-restoration/
Understanding and valuing coastal and marine biodiversity and ecosystem services. Studies estimate there may be 0.7 to 1.0 million eukaryotic marine species, of which only about 226,000 have been described. The EEA State of Nature Report 2013-2018 found a general lack of marine species data that hampers the elaboration of conservation and restoration measures, the sustainable management of ecosystems and, therefore, the achievement of favourable conservation status. For instance, invertebrates supporting the lower levels of the food chain and marine mammals are among the species with the highest proportion of unknown assessments (over 78%). In the deep sea, over 90% of species may be new to science. Additionally, very little is known about the effects of modern biogenic structures related to feeding types and morphological traits that may play a major role in biogeochemical cycles. Marine biodiversity hotspots in tropical and subtropical shallow areas host species and processes that are as yet undescribed. The lack of biodiversity knowledge and appropriate monitoring is a critical limiting factor in the definition and implementation of measures, since the range, population size and suitable habitat area are unknown in the majority of Member States for the majority of vulnerable marine species and ecosystems. The main reasons are the limited access to, and high cost of, exploration of the diversity of biotopes in the vast marine and coastal realm, in particular the deep sea, and the limited resources available to identify organisms across the full range of sizes (from microorganisms to megafauna). Acidification, deoxygenation, global warming and climate change, including changes in seasonal patterns, are affecting marine ecosystems faster than terrestrial ecosystems, with their cumulative and long-term effects amplifying the unprecedented pressures of the rapidly evolving ocean economy, driven by human needs for food, energy, transportation and recreation, as underlined by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) Global Assessment Report on Biodiversity and Ecosystem Services (IPBES GA, 2019). The effects have been documented on mobile and habitat-building species over the past two decades and reveal an accelerating trend (IPBES GA, IPCC 2019). Many marine species are highly mobile, often migratory, and rely on a number of different habitats throughout their developmental stages. In addition, the marine realm hosts numerous species for which sex determination depends upon environmental conditions such as temperature, seasonal patterns and other geochemical parameters. For these species, environmental changes may cause different responses and effects on populations and related ecosystem functions that are not captured when species are studied without regard to sex and population dynamics. With so much still unknown, ecosystem processes cannot be fully understood. This weakens models of marine ecosystems and their responses to pressures and diminishes our capacity to predict and take the best measures. Since biodiversity is declining at an unprecedented rate in Earth’s history, there is an urgent need to take conservation measures and develop holistic ecosystem-based management approaches, so that these ecosystems become resilient to environmental changes and are able to provide services for humankind and the planet’s life support system.
For this, it is critical to improve knowledge and to understand and model marine biodiversity as soon as possible. Proposals should address all of the following aspects:
- Increase understanding of the dynamics of marine biodiversity and ecosystem processes and functioning (including primary production, food webs and biogeochemical cycles) in Europe, in its outermost regions and overseas countries and territories, whose participation is encouraged, and in areas beyond national jurisdiction. Ensure that new modelling and scenario approaches integrate new and existing biodiversity data and knowledge from other EU, international and national projects and from long-term ecosystem and socio-ecological research infrastructure on species, biotopes and ecosystem processes.
- Develop genomics and taxonomic technologies for the inventory and fast identification of marine species, from microbes, plankton and invertebrates to migratory species (including diadromous species), apex predators such as sharks and mammals, and corals and other habitat-building species, generating reference datasets from identified voucher specimens and novel methods to improve biodiversity monitoring and inventory.
- Increase understanding of how input from freshwater and estuarine systems influences coastal marine communities and their ecosystem functionality.
- Use acoustic and non-invasive monitoring as an integral component of any marine ecosystem exploration and assessment.
- Develop methods and indicators for regular and timely integrated assessments of the state/health of marine biodiversity and its key ecosystem services, in the EU and associated countries’ marine waters (Good Environmental Status) and in areas beyond national jurisdiction.
- Contribute to the Global Taxonomy Initiative of the Convention on Biological Diversity (CBD) and to free and open access to the Global Biodiversity Information Facility’s biodiversity data.
- Identify opportunities for cooperation with relevant projects, such as EUROPABON[[https://europabon.org/]], which was awarded funding under the call ‘SC5-33-2020: Monitoring ecosystems through research, innovation and technology’, or the projects resulting from topics under the heading ‘Understanding biodiversity decline’ in Destination ‘Biodiversity and ecosystem services’, as well as topics from Destination ‘Fair, healthy and environmentally-friendly food systems from primary production to consumption’ (aquaculture, fisheries), Destination ‘Circular economy and bioeconomy sectors’ (biotechnologies, microbiome), Destination ‘Land, ocean and water for climate action’ (carbon cycle and natural processes) and Destination ‘Innovative governance, environmental observations and digital solutions in support of the Green Deal’ (environmental observation). Cooperation is also expected with the Biodiversity Partnership[[https://www.biodiversa.org/1759]] (HORIZON-CL6-2021-BIODIV-02-01) and other relevant Horizon Europe missions and partnerships. Proposals should outline a plan on how they intend to collaborate with other selected projects and with the initiatives mentioned, e.g. by participating in joint activities, workshops, and common communication and dissemination activities. Applicants should allocate the necessary budget to cover the plan. The plan’s relevant activities will be set out and carried out in close cooperation with the relevant Commission departments, ensuring coherence with related policy initiatives.
- Where relevant, create links to, contribute to and use the information and data of the European Earth observation programme Copernicus, the Group on Earth Observations (GEO) and the Global Earth Observation System of Systems (GEOSS), and the European Space Agency’s Earth Observation Programme, in particular the flagship actions on biodiversity and ocean health of the EC-ESA Joint Earth system science initiative.
- Improve professional skills and competences in marine taxonomy and systems thinking.
- Engage in cooperation with the EC Knowledge Centre for Biodiversity[[The EC Knowledge Centre for Biodiversity (KCBD) is an action of the EU biodiversity strategy for 2030. It aims to enhance the knowledge base, facilitate its sharing and foster cross-sectorial policy dialogue for EU policy making in biodiversity and related fields. https://knowledge4policy.ec.europa.eu/biodiversity_en.]] and other relevant existing platforms and information-sharing mechanisms[[BISE, BiodivERsA, Oppla, NetworkNature and their joint work streams.]].
- Contribute through education and training (school and ocean literacy, art and citizen science platforms) to a greater overall societal and public understanding of the link between biodiversity and the functioning of ecosystems.
To achieve the expected outcomes, international cooperation is strongly encouraged.
https://cordis.europa.eu/programme/id/HORIZON_HORIZON-CL6-2021-BIODIV-01-03
Project Number: 320812. Project Period: 2021-2024. Renewable energy is one of the most contested areas in Norwegian climate and environmental debate today, and it raises central themes such as the definition of power and injustice in the distribution of benefits and burdens. Recent studies of wind power development indicate that the approval procedures are perceived as opaque and undemocratic. At the same time, the principles for distributing the value produced differ from those known from the hydropower and oil sectors, which were a decisive factor in the development of the Norwegian welfare state. Local ownership and anchoring can improve local acceptance of renewables, which is a cornerstone of the green energy transition. However, strong local and regional ownership models cannot be allowed to compromise our ability to maintain intact ecosystems and biodiversity, or considerations for Sami reindeer herding. The research project CIVIC Renewables will investigate how new types of procedures, organisations and ownership models can contribute to better processes in the development of renewable projects. The overall question is whether new types of local collaborative processes can ameliorate conflicts in connection with the development of renewable energy projects. Purpose:
- Analyse selected cases in Norway and Denmark focusing on innovative solutions for the development of renewables and principles for the distribution of benefits and burdens.
- Investigate how good practice from the case studies can be repeated in wider contexts (3-5 test cases).
- Assess how renewable energy transitions can be optimised to obtain appropriate professionality and utilise synergies through a certain standardisation of these processes.
- Use foresight methods to engage key actors in an open-minded dialogue about future options and challenges in the green energy transition, illustrating different development paths and policy means that take future uncertainties into account.
Climate change, environmental pollution, and growing demands for food, fibre and energy are placing intense pressure on land and resources. The development of socially, economically, and environmentally sustainable renewable energy (RE) production is central to relieving these pressures and enabling a transition towards a low-emission society. Wind power offers huge opportunities to realize these goals, with significant untapped potential. Norway has seen a major increase in both the number of wind farms under construction and planning applications for new developments. However, these efforts to expand wind power have become flashpoints of conflict between local municipalities, rural communities, central planning authorities and RE developers. Wind power projects in Norway have been criticised for their various negative impacts on biodiversity and other ecosystem services, loss of indigenous Sámi culture, fragmentation of important reindeer grazing areas, and loss of scenic landscapes. In addition, there is a substantial, research-based critique of the limited transparency of the planning and licensing processes, as well as of the actual contribution of wind energy to solving the climate challenge. Efforts to dampen these various tensions have failed.
Previous research has shown that participatory processes focusing on civic energy involvement and principles of natural resource rent have high potential to solve these challenges, but such processes are absent from Norway’s wind power planning approach. Thus, the aim of CIVIC Renewables is to identify and evaluate local solutions to civic energy development that apply these principles and to develop, with an interdisciplinary team of researchers and external RE stakeholders, a civic RE planning framework for wind energy development that will be robust in diverse local contexts and for different RE technologies and structures in Norwegian municipalities, helping them navigate conflicts and safeguard natural environments.
https://prosjektbanken.forskningsradet.no/en/project/FORISS/320812?Kilde=FORISS&distribution=Ar&chart=bar&calcType=funding&Sprak=no&sortBy=date&sortOrder=desc&resultCount=30&offset=0&Departement=Samferdselsdepartementet
This project addresses the significant need identified in the NARP to review the agility of conservation governance and management. The likely effects of human-induced climate change on marine biodiversity raise questions about the adaptive capacity of current governance and management systems and their ability to support the resilience of marine biota. Governance directly influences whether resilience is undermined, preserved or strengthened (McCook et al. 2007). As noted in a 2009 House of Representatives Standing Committee report: “Given the projected severe impacts on the coastal zone from climate change … and the urgent need for adaptation strategies and resilience building, any hesitation in addressing the issues concerning governance arrangements for the coastal zone could have severe consequences”. Furthermore, the “cornerstone of future success is an adaptive governance structure in which ecosystem management understanding is operationalized in day-to-day activities” (Barnes & McFadden 2008, p. 391). These conclusions point to a need for coherent and adaptive systems of marine biodiversity governance, planning and management. By providing understandings and strategies for this ‘future success’, we will answer the following high and medium priority NARP questions:
1. How should conservation managers and planners adapt their practices to ameliorate climate change risks and enhance adaptation?
2. What intervention strategies addressing nature conservation outcomes will increase system resilience?
3. How will governance for the conservation of marine biodiversity need to change to adapt to climate change impacts?
4. What are the barriers to implementing adaptation and effective policy responses?
The project will engage with conservation planning instituted under the National Oceans Policy, examining institutional governance, decision-making processes and the types of instruments being deployed. Our research also addresses priorities established in state strategies – in NSW, for example, the discussion paper on a new biodiversity strategy identified a need to refine adaptation planning and the integrated management of marine reserves. The project’s objectives are:
1. To identify the requirements for adaptive marine biodiversity conservation governance and management in the context of climate change
2. To assess how well current regimes, with a particular focus on marine protected areas, meet these requirements, and determine any necessary changes
3. To identify alternatives to current regimes that are likely to enhance adaptivity and assess their governance and management effectiveness
4. To offer advice to governance and management authorities on how regime reform might be achieved
There is limited capacity in this type of project to generate immediate and demonstrable outcomes; we can only identify influences on ongoing processes as indicators of potential future outcomes. We identified requirements for adaptive marine biodiversity conservation governance in the context of climate change. These requirements have influenced how governing agency personnel think about governance design. Developing ‘best practice’ adaptive governance requirements has provided a benchmark that can be used to assess current arrangements and support their reform. The NSW Marine Estate process and Tasmania’s Draft Natural Heritage Strategy have drawn on the project’s research workshops and reports.
Proposals for changes to current arrangements have been judged by government agency staff as likely to enhance adaptive capacity, and thereby enhance marine biodiversity conservation outcomes. We have received positive responses to our academic publications arising from the research, with several colleagues indicating that our work has influenced their thinking about adaptive governance and governance assessment methods. We expect the influence of our work will continue to be evident, particularly as windows of opportunity for adopting our proposals arise, and as our findings are communicated through our recently-prepared policy advisory notes.
https://frdc.com.au/project?id=698
Goal 2 – A diverse environment interconnected by biodiversity corridors. The South East and Tablelands includes the alpine environment of Australia’s highest mountains, the State’s only wilderness coastline, rural landscapes and national parks. It is home to more than 100 threatened plant species, 112 threatened animal species and 13 endangered ecological communities.13 Biodiversity corridors help to connect plants and animals throughout the region, into and out of the ACT and beyond to Victoria. They form part of a national wildlife corridor extending from Victoria to Far North Queensland.14 A strategic approach on public and private lands will protect and manage natural ecosystems and connect habitats. The region includes coastal lakes and lagoons, coastal wetlands, sensitive estuaries and the protected waters of the South Coast, where 57 estuaries represent almost one-third of those in the State. The Batemans Bay Marine Park showcases distinctive marine life and provides opportunities for the scientific study of marine biodiversity in a relatively natural state.15 The environmental, social and economic values of these landscapes underpin the region’s character. These values can be affected by over-extraction of water, contamination, sea level rise and storm surge, and conflicting land uses such as urban expansion. Protecting the environment and building greater resilience to natural hazards and climate change will ensure these values are enjoyed by future generations. Sensitive estuaries in the South East and Tablelands Eurobodalla Local Government Area: Bengello Creek, Bullengella Lake, Coila Lake, Congo Creek, Corunna Lake, Cullendulla Creek, Durras Creek, Kellys Lake, Kianga Lake, Brou Lake, Lake Brunderee, Mummuga Lake, Lake Tarourga, Little Lake, Maloneys Creek, Meringo Creek, Nangudga Lake, Saltwater Creek and Tilba Tilba Lake. Bega Valley Local Government Area: Back Lagoon, Baragoot Lake, Bournda Lagoon, Boydtown Creek, Bunga Lagoon, Curalo Lagoon, Cuttagee Lake, Fisheries Creek, Merrica River, Middle Lagoon, Nadgee Lake, Nadgee River, Nullica River, Saltwater Creek, Shadrachs Creek, Table Creek, Wallaga Lake, Wallagoot Lake and Woodburn Creek. Criteria for mapping high environmental value lands Lands with potential high environmental value include: - existing conservation areas such as national parks and reserves, declared wilderness areas, marine estates, Crown reserves dedicated for environmental protection and conservation, and flora reserves; - threatened ecological communities and key habitats, and important vegetation areas; - important wetlands, coastal lakes and estuaries; and - sites of geological significance. High environmental value mapping aims to provide a regional overview for strategic planning. Planning authorities should obtain the most recent spatial data from the Office of Environment and Heritage when considering proposals for land use change or intensification. Up-to-date mapping can be found at http://www.seed.nsw.gov.au/. Validation rules for identification of high environmental value lands are found at www.environment.nsw.gov.au. High environmental value lands and the region’s networks of biodiversity corridors are mapped in the map below. These areas provide diversity and habitat for flora and fauna, including significant koala populations in the Snowy Monaro and Wingecarribee local government areas.
Criteria developed by the Office of Environment and Heritage to map lands with high environmental value are detailed above. Groundwater-dependent ecosystems and aquatic habitats associated with rivers, streams, lakes, estuaries and coastal waters that may not have been included in this mapping also have high environmental value. Maps of these areas are available on the Department of Primary Industries website. The intensification of land uses through urban development and other activities must avoid impacts on important terrestrial and aquatic habitats and on water quality. Mapping areas of potential high environmental value will inform local planning strategies and local environmental plans. The ‘avoid, minimise and offset’ hierarchy will be applied to areas identified for new or more intensive development. The hierarchy requires that development avoid areas of validated high environmental value and consider appropriate offsets or other mitigation measures for unavoidable impacts. Where it is not possible to avoid impacts, councils will be required to consider how impacts can be managed or offset through planning controls or other environmental management mechanisms. Sensitive estuaries have been mapped as part of the region’s high environmental value lands. These estuaries and their catchments are particularly susceptible to the effects of land use development and are not suitable for intense uses such as housing subdivision. Travelling Stock Reserves are used to move livestock and to provide supplementary grazing land in times of drought. These reserves can contain significant biodiversity values and need to be carefully managed. Environmental Assets Actions 14.1 Develop and implement a comprehensive Koala Plan of Management for the Snowy Monaro and Wingecarribee local government areas. 14.2 Protect validated high environmental value lands in local environmental plans. 14.3 Minimise potential impacts arising from development on areas of high environmental value, including groundwater-dependent ecosystems and aquatic habitats, and implement the ‘avoid, minimise and offset’ hierarchy. 14.4 Improve the quality of, and access to, information relating to land with identified high environmental values. 14.5 Support planning authorities to undertake strategic, landscape-scale assessments of biodiversity and areas of high environmental value. 14.6 Protect Travelling Stock Reserves in local strategies. Regional biodiversity corridors are native vegetation links within a region, between regions or between significant biodiversity features. They expand and link different habitats and are critical to long-term ecological connections, particularly in the context of long-term climate change. Regional biodiversity corridors form part of the Great Eastern Ranges Initiative, to which the NSW Government is a partner. The initiative identifies biodiversity corridors across the continent, from the Grampians in western Victoria to the wet tropics of Far North Queensland.16 Land uses within regional biodiversity corridors should maintain and, where possible, enhance ecological connectivity.
Protecting sensitive urban lands on the South Coast

The NSW Government’s South Coast Sensitive Urban Lands Panel Review provides advice on planning outcomes for potential development sites in sensitive coastal locations on the South Coast (Long Beach, Malua Bay, Rosedale, Moruya Heads, Narooma South, Wallaga Lake, Bega South and West, Wolumla, Tathra River and Lake Merimbula).17 The Panel’s recommendations are incorporated into planning for all sites and will continue to be considered in future land use planning decisions to protect and conserve sensitive coastal locations.

Actions

15.1 Protect and enhance the function and resilience of biodiversity corridors in local strategies.
15.2 Improve planning authority access to regional biodiversity corridor mapping and methodology.
15.3 Confirm and validate the location and boundaries of regional biodiversity corridors.
15.4 Focus offsets from approved developments on regional biodiversity corridors, where possible.

Most people live near areas subject to natural hazards. The appeal of these places is obvious; however, they can also come with challenges, such as flooding and bushfires. Flooding is predicted to occur more frequently and with greater intensity in the future. Planning for new urban release areas and infill areas must consider the impact of climate change, including sea level rise, on flooding.

Councils are primarily responsible for flood risk management through the development and implementation of floodplain risk management plans. These plans are prepared in consultation with the local community and relevant agencies. They incorporate up-to-date information on regional climate projections and related impacts, and prioritise resilience to climate change in the siting and development of infrastructure and land uses.

The impacts of rising sea levels and climate change will be critical to managing coastal and floodplain risks. Relevant councils will need coastal zone management plans and associated controls to deal with current and potential erosion. Other hazards, including bushfires, storms and landslips, may occur more frequently and, possibly, with greater intensity. These events may occur in areas that face development pressure.

Enabling adaptation in the South East

The NSW Government’s South East Integrated Regional Vulnerability Assessment (2012) identified regional climate change vulnerabilities and potential actions to reduce these vulnerabilities. The assessment laid the foundations for the Enabling Adaptation in the South East project, which starts the planning process for government service delivery to the sectors most vulnerable to climate change. It sets transition pathways for tourism, regional and agricultural centres, coastal development, mixed farming, dairy farming, landscapes and ecosystems, and infrastructure. Wingecarribee Local Government Area will be incorporated into adaptation planning for the Illawarra region.

Actions

16.1 Locate development, including new urban release areas, away from areas of known high bushfire risk, flooding hazards or high coastal erosion/inundation; contaminated land; and designated waterways, to reduce the community’s exposure to natural hazards.
16.2 Implement the requirements of the NSW Floodplain Development Manual by developing, updating or implementing flood studies and floodplain risk management plans.
16.3 Update coastal zone/estuary management plans and prepare new coastal management programs to identify areas affected by coastal hazards.
16.4 Incorporate the best available hazard information in local environmental plans, consistent with current flood studies, flood planning levels, modelling, floodplain risk management plans and coastal zone management plans.
16.5 Update and share current information on environmental assets and natural hazards with councils to inform planning decisions.
16.6 Manage risks associated with future urban growth in flood-prone areas as well as risks to existing communities.

Communities need skills and knowledge to deal with the effects of climate change. The NSW Climate Change Policy Framework and the draft Climate Change Fund Strategic Plan set policy directions and prioritise investment to reduce carbon emissions and to adapt to and mitigate the impacts of climate change.

The South East and Tablelands is the first region in NSW to implement a regional response within government to climate change, and this process has been adopted across NSW. The opportunity to work with the ACT Government (which undertook a parallel regional adaptation planning process and set similar policy targets) will allow the region to leverage the transition to a low emissions economy and prepare for climate change.

Preparedness will be enhanced by embedding emission reductions and climate change into business-as-usual planning, program delivery and governance. This will include initiatives to improve awareness of climate change impacts, strengthen natural ecosystems, safeguard public assets, support business and communities, unlock funds for communities to undertake adaptation strategies, and develop a services market to support adaptation strategies. The infrastructure built today must consider the climate projections for the near future and, in some cases, the far future.

Building community capacity to deliver and own renewable energy, promoting the use of advanced technology vehicles, identifying low emission pathways for energy-intensive industries and improving access to start-up funding to accelerate innovation will help to reduce emissions and minimise energy consumption.

Actions

17.1 Enhance government service delivery and implement local initiatives to address climate change impacts on local communities.
17.2 Collaborate with the ACT Government to reduce emissions and adopt adaptation strategies.
17.3 Support councils to assess and respond to impacts and opportunities associated with a changing climate.
17.4 Help communities and businesses to understand and respond to climate-related risks and opportunities by providing climate information, building capacity and unlocking financial mechanisms to help fund emission reductions and climate adaptation.

The future growth and development of the region, coupled with the uncertainties of drought and climate change, mean that long-term planning for water supply must be integrated into strategic planning. This planning must also consider the region as a source of potable water for Sydney.

In some areas, such as the Wingecarribee Local Government Area, water supply is comparatively secure – although much of Wingecarribee’s water resources flow north towards Sydney. Goulburn-Mulwaree Local Government Area has enhanced its water supply through the construction of an emergency pipeline from the Wingecarribee Reservoir. Hilltops Local Government Area includes areas that need to secure a sustainable water source for urban use, while the Yass Valley and Upper Lachlan local government areas face water security issues that are intensified by a changing climate.
Eurobodalla Local Government Area can secure water resources by improving storage and reticulation to meet growth and environmental outcomes. An acceptable reticulated water supply is required for any new land release or an increase in housing densities in existing areas.

The provision of potable water must conform to the following water planning principles:
- a reliable supply to provide certainty for consumers (both residential and other);
- an affordable water supply in terms of both capital and recurring costs; and
- a quality of supply that meets relevant health standards.

In some areas, including the Hilltops, Goulburn-Mulwaree and Upper Lachlan local government areas, securing an ongoing water supply for agricultural industries will bring economic opportunities.

Parts of the region are covered by the Australian Government’s Murray-Darling Basin Plan (2012), which sets regional water use at environmentally sustainable levels by determining long-term ‘average sustainable diversion limits’. This is implemented through water sharing plans that include rules for managing extractions, licence holders’ accounts and water trading. Changes in water demand from different uses may require water to be reallocated over time.

Protecting the Sydney Drinking Water Catchment

Part of the region is located in the Sydney Drinking Water Catchment, which supplies drinking water for almost 60 per cent of the State’s population.18 Protecting water quality and quantity in this catchment is essential for the health and security of communities in the region and Greater Sydney. Rigorous planning and development controls apply to proposals within the Sydney Drinking Water Catchment, including:
- State Environmental Planning Policy (Sydney Drinking Water Catchment) 2011;
- local planning direction 5.2 Sydney Drinking Water Catchments, issued under Section 9.1(2) of the Environmental Planning and Assessment Act 1979;
- the Water NSW Act 2014 and the Water NSW Regulation 2013; and
- the Water Management Act 2000.

Under the Water NSW Act 2014 and Water NSW Regulation 2013, land has been declared as parts of the Metropolitan, Woronora and Shoalhaven special areas, which are critical in protecting water quality in the storages. The NSW Government has also announced the cancellation and buy-back of all petroleum exploration licences covering the Sydney Drinking Water Catchment, including the special areas.19

Actions

18.1 Locate, design, construct and manage new developments to minimise impacts on water catchments, including downstream impacts and groundwater sources.
18.2 Finalise water resource plans for rivers and groundwater systems as part of the Murray-Darling Basin Plan and implement water sharing plans.
18.3 Prepare or review integrated water cycle management strategies to ascertain long-term infrastructure needs to accommodate population growth.
18.4 Incorporate water sensitive urban design into development that is likely to impact water catchments, water quality and flows.
https://www.planning.nsw.gov.au/Plans-for-your-area/Regional-Plans/South-East-and-Tablelands/South-East-and-Tablelands-regional-plan/A-diverse-environment-interconnected-by-biodiversity-corridors
Alexey Zakharov joined the NCATS informatics group in 2015. Currently, he serves as the informatics leader for an early therapeutic discovery project team and manages and coordinates the team’s efforts to achieve project deliverables and milestones. Zakharov also manages the data science needs of early drug discovery projects, analyzing screening data, identifying chemical series for lead optimization and follow-up studies, and developing and applying artificial intelligence (AI) and modern machine learning approaches for therapeutic programs. He oversees the application of cheminformatics techniques and targeted molecular design to deliver compounds with improved properties, better selectivity and wider safety margins. Zakharov also works on a variety of projects focused on cancer and cardiovascular, neurological, viral and inflammatory diseases.

Before joining NCATS, Zakharov worked in the Chemical Biology Laboratory at the National Cancer Institute (NCI), where he strengthened his expertise in the cheminformatics field, applying and developing in silico methods to aid the drug discovery and design projects of the Computer-Aided Drug Design Group. His work focused on developing computer-based techniques for predicting the toxicity, metabolism and anti-cancer activities of chemical compounds.

Prior to his tenure at NCI, Zakharov was a scientist in the Structure-Function Based Drug Design Laboratory at the Institute of Biomedical Chemistry of the Russian Academy of Medical Sciences, where he worked on developing computer methods for quantitative structure-activity relationship (QSAR) modeling to predict the biological activity of new chemical compounds. He also conducted independent computational drug discovery research, including target identification, hit finding and optimization.

Zakharov graduated with a master’s degree in biochemistry from Russian State Medical University, Moscow, Russia, in 2005. He received his doctorate in bioinformatics from the Institute of Biomedical Chemistry of the Russian Academy of Medical Sciences in 2008.

Research Topics

Zakharov’s research is based on the strong overlap between different scientific fields, including bioinformatics, cheminformatics, biochemistry, computational chemistry and toxicology, statistics, and AI/machine learning. His main research interests are concentrated on computer-aided drug design (anti-cancer and anti-viral activity, selectivity for certain kinase and/or G-protein coupled receptors, etc.) and the development of innovative methodologies and applications in computational modeling. Problems of particular interest include novel approaches to modeling imbalanced datasets; implementation and usage of biological descriptors, as well as descriptor-free techniques; modeling of nanoparticles; new applications of deep learning and AI approaches; risk assessment of potential compound toxicity; systems biology problems (modeling protein-protein interactions, pathway analysis); and computational chemistry modeling (predicting chemical reactions, reaction routes and yields).
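One of the interests listed above, modeling imbalanced datasets, is a routine pain point in QSAR work: actives are usually a small minority of a screening collection. As a generic illustration only (this is not Zakharov's method; the "fingerprint" data below is randomly generated and the numbers are invented), class weighting is one standard tactic for keeping a classifier from being dominated by the inactive majority:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Invented stand-in for a QSAR dataset: 512-bit binary fingerprints,
# with only ~5% of compounds labeled active (the imbalanced minority).
X = rng.integers(0, 2, size=(1000, 512))
y = (rng.random(1000) < 0.05).astype(int)

# class_weight="balanced" reweights the rare actives so errors on them
# cost as much, in aggregate, as errors on the abundant inactives.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", n_jobs=-1)

# ROC AUC is a sensible metric here; plain accuracy would look deceptively
# high (~95%) for a model that predicts "inactive" for everything.
print(cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean())
```

On this random data the AUC will hover near 0.5, as it should; the point of the sketch is the weighting and the choice of metric, not the score.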
https://ncats.nih.gov/staff/zakharovav
About this event

Join us for our latest 3-part webinar series, Biodata and AI in Drug Discovery, to uncover how experts organise data into biological networks and knowledge graphs, and how they refashion existing drug discovery pipelines with the help of AI & ML.

Webinar 1: AI and machine learning for drug development
Wednesday 19 October at 3pm BST / 4pm CET / 10am EST

AI can impact all stages of drug development, from the discovery of new molecules to understanding the mechanism of action to identifying the patients that will benefit most from these drugs. This webinar will discuss where we are in the hype cycle when it comes to AI in drug discovery and what the next wave of drug discovery pipelines will look like.

Machine Learning in Drug Discovery: Use Cases - Abhishek Pandey, Group Lead: Pharma Discovery, AbbVie
Artificial Intelligence in Drug Discovery 2022: Aspects of Validation, Data and Where We Are on the Hype Cycle - Andreas Bender, Professor of Molecular Informatics, University of Cambridge
Methods that imitate artificial intelligence - David Raunig, Senior Director Statistics, Takeda

Webinar 2: Harnessing biodata for drug discovery
Wednesday 26 October at 3pm BST / 4pm CET / 10am EST

Data sets are now too large for traditional databases to capture, manage and process, and for people to visualise. To conceptualise biodata effectively, one needs to represent them as networks. This webinar will discuss how we can leverage biodata sets to advance drug discovery.

Network based medicine - John Quackenbush, Professor of Computational Biology and Bioinformatics and Chair of the Department of Biostatistics, Harvard T.H. Chan School of Public Health
Taking advantage of 3D protein ligand information in AI-driven generative compound design methods - Uli Schmitz, Executive Director Structural Chemistry, Gilead Sciences

Webinar 3: Knowledge graphs for drug discovery
Wednesday 2 November at 3pm BST / 4pm CET / 10am EST

This webinar will discuss how we can use knowledge graphs and AI to further the field of drug discovery and development.

Toward a better understanding of adverse events using knowledge graphs - Peter Henstock, Machine Learning & AI Technical Lead: Combine AI, Software Engineering, Statistics & Visualization, Pfizer
Maze Therapeutics applies a genetics knowledge graph to accelerate drug discovery - Nolan Nichols, Senior Software Engineer (Bioinformatics), Maze Therapeutics

Hosted by

Front Line Genomics is a genomics-focused media company with a social mission to deliver the benefits of genomics to patients faster. We organise the Festival of Genomics, digital events and webinars. We also produce reports and operate a content-rich website.
https://app.livestorm.co/front-line-genomics/biodata-and-ai-in-drug-discovery-webinar-1-ai-and-machine-learning-for-drug-development
In silico design and selection of CD44 antagonists: implementation of computational methodologies for drug discovery

Drug discovery (DD) is a process that aims to identify drug candidates through a thorough evaluation of the biological activity of small molecules or biomolecules. Computational strategies (CS) are now necessary tools for speeding up DD. Chapter 1 describes the use of CS throughout the DD process, from the early stages of drug design to the use of artificial intelligence for the de novo design of therapeutic molecules. Chapter 2 describes an in silico workflow for identifying potential high-affinity CD44 antagonists, ranging from structural analysis of the target to the analysis of ligand-protein interactions and molecular dynamics (MD). In Chapter 3, we tested the shape-guided algorithm on a dataset of macrocycles, identifying the characteristics that need to be improved for the development of new tools for macrocycle sampling and design. In Chapter 4, we describe a detailed reverse docking protocol for identifying potential 4-hydroxycoumarin (4-HC) targets. The strategy described in this chapter is easily transferable to other compound and protein datasets and helps overcome bottlenecks in molecular docking protocols, particularly reverse docking approaches. Finally, Chapter 5 shows how computational methods and experimental results can be used to repurpose compounds as potential COVID-19 treatments. According to our findings, the HCV drug boceprevir could be clinically tested or used as a lead molecule to develop compounds that target COVID-19 or other coronaviral infections. In summary, these chapters demonstrate the importance, application, limitations and future of computational methods in the state-of-the-art drug design process.
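The reverse docking protocol of Chapter 4 is only summarised above, but the core idea is conventional docking turned around: one ligand is docked against a panel of candidate protein structures, and the proteins are ranked by their best docking score. A minimal sketch of that loop using the AutoDock Vina command-line tool follows; this is not the thesis's actual pipeline, the file names and the fixed docking box are placeholder assumptions, and in practice the box must be centred on each target's known binding site:

```python
import glob
import subprocess

LIGAND = "4-hydroxycoumarin.pdbqt"  # assumed file name; ligand prepared beforehand

def best_vina_score(out_pdbqt):
    """Return the best (most negative) affinity among the output poses."""
    scores = []
    with open(out_pdbqt) as fh:
        for line in fh:
            # Vina writes lines like "REMARK VINA RESULT:    -7.2  0.000  0.000"
            if line.startswith("REMARK VINA RESULT:"):
                scores.append(float(line.split()[3]))
    return min(scores)

results = {}
for receptor in glob.glob("targets/*.pdbqt"):  # assumed panel of prepared receptors
    out = receptor.replace(".pdbqt", "_docked.pdbqt")
    subprocess.run(
        ["vina", "--receptor", receptor, "--ligand", LIGAND,
         # A single fixed box is used here for brevity; a real reverse docking
         # run needs a per-target box centred on each binding site.
         "--center_x", "0", "--center_y", "0", "--center_z", "0",
         "--size_x", "20", "--size_y", "20", "--size_z", "20",
         "--out", out],
        check=True,
    )
    results[receptor] = best_vina_score(out)

# Rank candidate targets, most favourable predicted affinity first.
for receptor, score in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{score:7.2f} kcal/mol  {receptor}")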
https://www.rug.nl/research/grip/news/agenda-items/ruizmoreno
In 2004, the FDA issued its Botanical Drug Development Guidance for Industry; a second edition followed in 2016. If a drug product is derived from a single plant or from a mix of plants, the Guidance waives the requirements of:
- identifying the active ingredients
- describing the mechanism of action

As long as a botanical product demonstrates safety and efficacy during phase I, phase II and phase III clinical trials, it can be approved as a prescription drug on the US market. Two botanical drugs are currently on the US market:
- Veregen (sinecatechins), a topical drug for the treatment of genital and anal warts
- Fulyzaq (crofelemer), an oral drug for the control and symptomatic relief of diarrhea in patients with HIV/AIDS on anti-retroviral therapy

There are over 600 botanical drugs in the FDA IND process.

[Botanical Drug Development Pathway]

Modern herbal medicines in the form of pills or capsules, with demonstrated safety and efficacy in large populations, are good candidates for US prescription botanical drug development:
- choosing known safe and effective herbal medicines reduces the time and cost of drug discovery
- demonstrated safety significantly increases success rates from pre-clinical tests up to phase IIa
- demonstrated efficacy significantly improves success rates in phase IIb and phase III trials

The technical challenge of developing the CMC (chemistry, manufacturing and controls) content of an FDA IND filing is tedious but doable. The US FDA encourages sponsors to develop hemp products through the botanical drug development pathway so that they can bear medical claims.
https://www.botanicaldrug.com/
Dr. Krzysztof Rataj works at the intersection of medicinal chemistry, biology, imaging and artificial intelligence in the field of drug discovery. He obtained a PhD in Biophysics from Jagiellonian University and has worked on numerous projects in collaboration with scientific teams from Hungary, Denmark and Norway. His team explores the application of artificial intelligence approaches to various aspects of drug design and precision medicine, such as transformers for property prediction and compound optimization, natural language processing methods in protein and nucleic acid research, and computer vision in high content screening and histopathology. Their current endeavor is the marriage of chemical structure and phenotypic screening images in order to create a comprehensive approach to high content screening.

AI & Data Science Showcase: Ardigen

Ardigen is harnessing advanced AI methods for novel precision medicine. The company’s in-house datasets, together with advanced AI platforms, empower the development of effective therapies.

Enhancing Phenotypic Drug Discovery with AI-based Methods

The resurgence of phenotypic screening as a viable method of early drug discovery opens up a new field for the application of AI-based methods. We would like to explore the concept of merging high content image data with chemical compound data to provide new quality in HCS pipelines.

The PMWC 2022 AI Company Showcase will provide a 15-minute time slot for selected AI companies to present their latest technologies to an audience of leading investors, potential clients, and partners. We will hear from companies building technologies that expedite the pre-clinical and clinical drug discovery and development process, accelerate patient diagnosis and treatment, or develop scalable systems frameworks to make AI and deep/machine learning a reality.
https://www.pmwcintl.com/speaker/krzysztof-rataj_ardigen_2022sv/
Why it’s important to get the right lead molecule

Research has shown that candidate molecules tend to be closely related to their lead compounds; therefore, the potential success of a new drug campaign is highly dependent on the selection of lead series very early in the drug discovery process.

Challenges in the hit-to-lead stage

Drug discovery is a costly and time-consuming process, with the cost of the hit-to-lead stage alone estimated at 166 million USD. Identifying drug-like molecules that target significant metabolic pathways plays a crucial role in the ability to produce new medicines; however, because of the dramatic increase in available molecular information, screening approaches have obvious limitations in search space coverage. Hit discovery technologies range from traditional high-throughput screening to fragment-based techniques, affinity selection of large libraries and computer-aided de novo design. The development of computational fragment-based approaches has contributed to the acceleration of the drug discovery process by facilitating the screening of compound libraries and reducing the pool of compounds for synthesis. Nostrum Biodiscovery has been contributing to those efforts by developing our own fragment growing tool.

Fragment growing

FragPELE can automatically grow hundreds of fragment molecules onto a docked ligand scaffold in a high-throughput manner. At each step, it runs multiple PELE simulations to efficiently sample the re-arrangement of the system as the fragment is grown. The protocol for growing ligands consists of a number of distinct steps (see the sketch at the end of this section):
- Fragment linkage. At this stage, a fragment is covalently linked to the ligand core at a position defined by the user.
- Fragment reduction. The parameters of the fragment atoms are reduced, to later be regrown inside the binding site.
- Fragment growing. During a series of steps, the fragment is grown, iteratively increasing its parameters.
- Sampling and scoring. In the last step, a PELE simulation is performed to score the grown molecule based on the interaction energy between the ligand and the protein.

The combination of Monte Carlo sampling with the growing algorithm used in FragPELE allows the complex to adapt while exploring the significant areas of the potential energy surface.

Research at Nostrum Biodiscovery

As a company specializing in drug design, we understand the importance of seemingly insignificant structural changes and the vast impact they can have on binding affinity, which is why we have been improving FragPELE to tackle some of the most common challenges in drug design.

Water displacement

The new implementation can be used to predict the principal hydration sites or the rearrangement and displacement of conserved water molecules upon the binding of a ligand, letting users explore the resulting changes in free energy.

Library screening

Additionally, we’ve been working on an automated workflow to make your everyday hit-to-lead studies more bearable! You will soon be able to screen thousands of compounds from public libraries, filter them according to your requirements and assess their binding affinity.

Feed Forward Frag

Finally, we are developing a new workflow to iteratively perform fragment growing simulations and extract the top candidates from an external dataset based on similarity, to identify the most promising lead compounds.
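The four-step growing protocol described under "Fragment growing" maps naturally onto a simple control loop. The sketch below is illustrative only: the helper functions (link_fragment, scale_parameters, run_pele_simulation, interaction_energy) are hypothetical stand-ins that mirror the steps named in the text, not FragPELE's real interface.

```python
# Hypothetical sketch of the fragment-growing protocol described above.
# None of these helpers exist in FragPELE; they name the four steps only.

N_GROWING_STEPS = 10  # assumed number of growing iterations

def grow_and_score(core_complex, fragment, attachment_atom):
    # 1. Fragment linkage: covalently attach the fragment to the docked
    #    ligand core at the user-defined position.
    complex_ = link_fragment(core_complex, fragment, attachment_atom)

    # 2. Fragment reduction: shrink the fragment's force-field parameters
    #    (radii, charges) so it can be regrown inside the binding site.
    complex_ = scale_parameters(complex_, fragment, factor=1.0 / N_GROWING_STEPS)

    # 3. Fragment growing: iteratively restore the parameters, letting the
    #    receptor relax around the growing fragment via short PELE runs
    #    (Monte Carlo sampling) at each increment.
    for step in range(1, N_GROWING_STEPS + 1):
        complex_ = scale_parameters(complex_, fragment, factor=step / N_GROWING_STEPS)
        complex_ = run_pele_simulation(complex_, n_steps=50)

    # 4. Sampling and scoring: a final, longer PELE simulation scores the
    #    fully grown ligand by protein-ligand interaction energy.
    final_ensemble = run_pele_simulation(complex_, n_steps=500)
    return min(interaction_energy(pose) for pose in final_ensemble)
```

Repeating this loop over hundreds of fragments against one scaffold, and keeping the best-scoring grown ligands, is the high-throughput use the text describes.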
https://www.nostrumbiodiscovery.com/highlights/fragment-growing/
Medicinal chemistry is a specialised science that has evolved to encompass a broad range of disciplines concerned with the identification, synthesis and development of drug-like compounds for therapeutic use. It requires a wide range of expertise, developed through years of training, dedication and learning from best practice, in order to produce drugs that are good enough to enter clinical trials in patients.

In the early days of drug discovery, medicinal chemists often optimised and developed compounds without much knowledge of the drug target or pathway in mind. It was a largely subjective process in which chemistry-driven elaboration of chemical structures was undertaken, and these compounds were often tested directly in vivo to optimise the biological response without much thought for ADMET properties.

New technologies have had a huge impact on drug discovery since the mid 20th century. The early influence of experimental pharmacology, which was first employed to study drug side-effects, coupled with advances in cell biochemistry, led to the identification of many enzymes and receptors as new drug targets, thus enabling medicinal chemists to develop compounds that interact selectively with targets for a wide range of therapeutic areas. The advent of molecular biology and functional genomics shifted the focus to tackling disease through an understanding of pathway analysis and, through this, the identification of druggable targets. Novel drugs were therefore developed, most notably the original HIV protease inhibitors, using structure-based drug design. To most chemists this is by far the most intellectually satisfying part of drug discovery, but good structural knowledge of the drug target is essential, since drugs are developed within the confines of an accurate binding site model and optimised through precise iterative chemical synthesis.

The high-throughput era of genomic sciences heralded an alternative, more empirically random approach to drug discovery based upon a numbers game. The generation of large high-throughput screening (HTS) libraries, created by combinatorial chemistry, was based upon the concept of fewer structural variations across a large number of drug-like scaffolds. However, the original templates of the early 1990s were anything but drug-like, and the Lipinski rules evolved as a response to understanding their limitations. Many drug companies are now spending considerable time and effort ridding their compound collections of these particular molecules. In general, the early combichem libraries did not produce the success that was expected, due to the lack of design in their construction, resulting in extremely high attrition rates during pre-clinical stages.

With the pressure to increase the number of drugs receiving market approval, the science of medicinal chemistry needed to change in order to address the high attrition rates in pre-clinical and early clinical stages earlier on in development. It is worth specifying that medicinal chemists are responsible for designing and synthesising drugs that are robust enough to enter Phase II clinical trials to test proof of concept in patients. If this is not achieved then we have failed, because we have used precious time and resources but learnt nothing of value in curing disease. However, although the predictive tools that medicinal chemists use in drug discovery have improved dramatically during the past 10 years, small molecule drug discovery is getting far more difficult to do.
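The Lipinski rules mentioned above are easy to state concretely: an orally active, drug-like compound should generally have no more than 5 hydrogen-bond donors, no more than 10 hydrogen-bond acceptors, a molecular weight below 500 Da and a calculated logP below 5, with poor absorption considered likely only when more than one rule is violated. A minimal sketch of such a filter using the open-source RDKit toolkit (the example molecules are arbitrary, chosen only to show one pass and one fail):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def passes_lipinski(smiles):
    """Return True if the molecule violates at most one of Lipinski's rules."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    violations = sum([
        Descriptors.NumHDonors(mol) > 5,
        Descriptors.NumHAcceptors(mol) > 10,
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
    ])
    return violations <= 1  # one violation is tolerated in the original rule

# Arbitrary examples: aspirin (passes) and a C40 alkane (too heavy, too greasy)
for smi in ["CC(=O)Oc1ccccc1C(=O)O", "C" * 40]:
    print(smi[:25], passes_lipinski(smi))
```

Filters of exactly this shape are what companies run when "ridding their compound collections" of non-drug-like templates, although production pipelines add many further property and substructure checks.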
One of the main reasons for this paradox is that medicinal chemists now need early stage strategies to solve not just the typical potency, selectivity and exposure problems we encounter, but also the theoretical and idiosyncratic toxicological hazards that can potentially occur in man once the drug goes into a wider patient population. Medicinal chemistry has, therefore, grown to encompass a greater range of scientific disciplines in the drug discovery process in order to minimise the cost, time and risk of development. The arrival of newer, high-powered computational capabilities was one catalyst for this approach. Improved computational methods for reducing the attrition rate of compounds, due to, for example, bioavailability or toxicity issues, can be aided by virtual design and screening using specialised capabilities early in the developmental process. These capabilities mean that medicinal chemists can produce compounds with a superior starting point, providing significant cost savings during later optimisation as they have a reduced chance of failure. Many of the design strategies and tactics that we routinely employ today in drug discovery would not be recognised by scientists 10 years ago, and I think that it is safe to say that what we will be doing in 10 years’ time will have little resemblance to today’s science. Accessing new drug discovery tools and technologies, especially for predictive ADMET, can therefore provide a company with a significant advantage in today’s highly competitive environment.

[Image: Computer-aided drug design – a virtual screening hit]

Sophisticated models

True lead optimisation, with the target product profile of the clinical candidate in mind, in order to design a robust drug that is both effective and safe in a wider patient population, is one of mankind’s greatest scientific challenges. An alternative approach to the large combichem libraries (10,000-plus molecules) are smaller focused arrays of molecules (100-500 molecules) with a lower level of compound diversity but high specificity towards a drug target family. These ‘chemogenetic’ arrays are designed and developed within the confines of a particular pharmacophore model for the target family of interest, but with drug-like and ADMET properties in mind. This high-level, knowledge-based approach to medicinal chemistry is based upon a marriage between expertise in computer-aided drug design, which can propose chemical structures based upon an accurate pharmacophore, and the expertise of the medicinal chemist in knowing what can be made.

The process involves understanding how a particular ligand interacts with a diverse number of multiple receptors, and then designing focused compound arrays to achieve drug selectivity within different but closely related families of targets. The focused arrays can be targeted towards traditional drug targets, eg enzymes, GPCRs, ion channels and nuclear receptors; however, the current challenge is to develop focused arrays for particular subtypes of these broad families. The compound arrays must be synthetically feasible and, to minimise the attrition rate in preclinical testing, must also conform to specific properties regarding absorption, distribution, metabolism, elimination and toxicity (ADMET). Designing focused arrays to encompass a diverse range of chemical space within the boundaries of the ADMET model and synthetic capabilities is now an essential first stage in compound development.
The ADMET constraints established from literature mining, in vitro and in vivo studies are often compiled into computational packages to enable chemists to predict the drug-likeness of a compound. Suitable compounds are developed using a variety of methods, including ligand- and structure-based design, combinatorial docking and pharmacophore screening. These are developed further using lead docking analysis, induced fit docking, QM/MM methods, free-energy simulations and QSAR analysis. Many companies also now use real-time in silico visualisation tools to help medicinal chemists optimise the drug-like properties thought necessary for optimum ADME behaviour. The end-point of the CADD-medicinal chemistry design process is a focused array of molecules that provides a superior starting point for a hit-to-lead programme.

During this hit-to-lead phase, expertise in synthetic chemistry is critical for success. The scientific nuances and boundaries of the SAR need to be quickly understood so that informed decisions can be made as to which of the many ways forward offers the best chance of success. Most recently, newer technologies, such as microwave chemistry used in conjunction with palladium and polymer-supported chemistry and parallel purification, have quickly become important methods for generating analogues. Another increasingly important skill for a medicinal chemist in the hit-to-lead phase is the judicious use of novel heterocyclic chemistry to its full advantage, as this often opens new vistas for patent novelty and offers excellent opportunities for rapid parallel synthesis while retaining drug-like properties. The greatest skill of a medicinal chemist is the ability to draw on all of these disciplines for innovative drug design and synthesis. In the race to develop novel compounds in a diverse range of therapeutic areas, highly specialised medicinal chemistry expertise is currently in great demand by biotechnology and pharmaceutical companies.

Outsourcing of medicinal chemistry

In this highly competitive, post-genomic, target-rich age, drug discovery research in multinational pharmaceutical companies is often hampered by a lack of medicinal chemists. Typically the best leads will, of course, be pursued in-house, but the outsourcing of hit-to-lead or back-up programmes to a professional external medicinal chemistry provider should enable companies to move faster through both these stages and allow them to test proof of concept in patients more quickly. Hit-to-lead, back-up chemotype and fast-follower programmes have in the past been the mainstay of medicinal chemistry outsourcing by pharmaceutical companies. From a small range of lead candidates, pharmaceutical companies would often outsource a back-up chemotype to specialised medicinal chemistry providers, which in many cases superseded the in-house lead candidate. Fast-follower candidates (drugs with improved properties over the first-in-class drugs that have shown proof of concept in man using the same mechanism of action) were often outsourced to medicinal chemistry providers due to resource constraints. Because of medicinal chemistry skill shortages, multinational pharmaceutical companies need to use their own medicinal chemists as drug designers and discoverers, not as pure synthetic organic chemists. Therefore, there is a growing trend for companies to outsource synthetic chemistry projects, such as array, building block and custom synthesis.
In contrast, at the other end of the scale, smaller biotechnology companies are increasingly being driven by their investors to move into drug discovery in order to leverage their proprietary biology technology, and these same companies now represent a significant proportion of the synthetic chemistry and medicinal chemistry outsourcing market. This is because biotechnology companies usually do not have the necessary knowledge or resources to fully support their drug discovery initiatives and therefore need to outsource most, if not all, of their drug discovery research. Typical projects required by biotechnology companies are hit-to-lead and lead optimisation projects, because the expert knowledge and synthesis capabilities offered by medicinal chemistry service providers can often overcome many of the problems associated with lead development, to drive a compound through to market.

Concerns over the retention and establishment of intellectual property (IP) can be a barrier to outsourcing, especially for the larger pharmaceutical companies, who need to protect carefully any potential blockbuster drugs. Larger pharma tend to avoid service providers that have a high turnover of staff, or companies where confidentiality agreements and patents are difficult to enforce. This is because they realise full well the financial implications of losing proprietary knowledge. To combat this, outsourcers should consider working with providers who behave ethically towards their employees and who do not undertake their own early stage drug discovery research programmes. Value, not price, should be the main driver for companies choosing a service provider. The famous quotation from the English philosopher Ruskin remains, perhaps, even more valid today.

John Ruskin (1819-1900)

Looking for value? It’s unwise to pay too much but it’s unwise to pay too little. When you pay too much you lose a little money, that is all. When you pay too little, you sometimes lose everything, because the thing you bought was incapable of doing the thing you bought it to do. The common law of business balance prohibits paying a little and getting a lot. It can’t be done. If you deal with the lowest bidder, it’s well to add something for the risk you run. And if you do that, you will have enough to pay for something better!

[Figure: Clinical success rates from first clinical trial to registration. Data obtained by Datamonitor in the Pharmaceutical Benchmarking Study; the data are from the 10 biggest drug companies, 1991-2000.]

Current challenges

The greatest challenge is to reduce the number of drugs that fail in pre-clinical stages, since this is responsible for the very high cost of bringing a drug to market. The accountant’s mantra of ‘fail early/fail cheap’ is now central to the drug discovery process, although I suspect that most medicinal chemists would prefer to succeed early and to succeed cheap, but this involves considerable early design work. Analysis of ADMET properties, especially toxicology, and establishing the boundaries for drug design is the area undergoing the largest growth, as these properties largely decide whether a compound is rejected immediately or taken forward for optimisation. Outright show-stoppers include irreversible protein binding, idiosyncratic toxicity, mutagenic toxicity, hERG inhibition, phospholipidosis and phototoxicity, while addressable problems could typically be due to oral bioavailability, weak CYP inhibition, selectivity or minor solubility issues.
Great strides have been made during the past 10 years to understand all these different drug design problems, and better in silico predictive tools and cell-based assays are continually being developed. It is quite evident that ADME profiling using in silico and cellular approaches has successfully resulted in a reduced attrition rate. Predictive toxicology remains a big problem to be solved, since this is now responsible for a large proportion of failures in the pre-clinical stages. Chemistry service providers that have developed a range of in silico and mathematical capabilities are now employing additional in vitro approaches to test and predict the efficacy of potential drug compounds. In vitro screening has become a critical tool for the medicinal chemist to assess potential toxicology problems, enabling them to rank clinical candidates. The results from in vitro studies are often fed back into the in silico models to further refine and predict drug-likeness. To support their customers’ drug discovery initiatives, medicinal chemistry outsourcing companies must now include state-of-the-art in vitro and in silico approaches. These companies must be a ‘one-stop-shop’ that can provide the necessary expertise to enable customers to maintain their competitive advantage. Medicinal chemistry providers must, therefore, draw upon a greater range of disciplines to fully support customer initiatives through an innovative approach to drug discovery and development.

Meeting the challenge

Due to the ever-changing face of the industry, outsourcing providers must not only adapt rapidly to industry pressures and change, but must also try to predict the future direction of the industry. This requires adaptability in the business model and also flexibility to undertake strategic partnerships with other service providers to support customers’ drug discovery initiatives. The key to successful partnering is the development of good relationships between scientists of all disciplines; regular face-to-face contact ensures a synergistic process that complements the capabilities and knowledge of both parties to develop a clear understanding based upon trust. Empowering chemists to take an entrepreneurial approach to chemistry ensures a customer-focused, specialised service. Medicinal chemists now, more than ever, need, in addition to their core expertise of synthetic organic chemistry and CADD, a broad range of expertise covering cell biology, pharmacology, formulation science and pharmacokinetics. We are entering a new chemogenetic age of knowledge-driven drug discovery in which medicinal chemistry is driving the drug discovery process, and medicinal chemists need to take the leadership role in all phases from target identification to preclinical candidate selection.

Dr Terry Hart joined Peakdale Molecular in 2005 as Medicinal Chemistry Services Director. He has more than 20 years’ experience in the pharmaceutical and biotechnology industry, both in drug discovery research and in senior management roles. Prior to joining Peakdale Molecular, Terry served with two of the top pharma companies: RPR (Sanofi-Aventis) and Novartis. While at Novartis, Terry was the co-inventor on the patents for compounds which have entered clinical development for pain, schizophrenia and anxiety.
http://www.ddw-online.com/chemistry/p97059-medicinal%20chemistry%3A%20progress%20through%20innovation.%20%20%20summer%2006.html
Applied Clinical Trials

The need for statisticians, mathematicians, and computer and data scientists to collaborate on modern methods for “substantial evidence” in drug development is critical.

Even at the risk of being accused of heresy, we think that moving beyond traditional statistics is overdue. And here is why: Sir Ronald Fisher, who has been described as “a genius who almost single-handedly created the foundations for modern statistical science”, was born in 1890. Randomized controlled trials, which many consider to be the gold standard of clinical research, were developed in the 1940s. This was all way before anyone had the slightest idea about big data, machine learning, neural networks, deep learning, artificial intelligence, etc.

Those old methods were created for relatively small and simple datasets, and before we really understood the complexity of biological systems, where interrelated and interdependent parameters always play together to generate a certain physiological output. But these methods still form the basis for modern medical research. We believe that a drug “works” if the difference of the means of the effect size of a variable in a large treatment group compared to a large control group is statistically significant. But, unfortunately, the “mean patient” rarely exists. Therefore, individual patients in the real world often react very differently to a specific drug than what has been predicted by the “mean” of a clinical trial. We must acknowledge that these century-old scientific methods have significant limitations, which, in our view, hamper the progress of modern medicine. A mean derived from n=1000 patients has little meaning for personalized medicine, where n=1.

The need for new, better ways of generating substantial evidence has become painfully obvious in the current COVID-19 pandemic. While there are many investigational drugs against the coronavirus, thousands of patients are still dying because these drugs are not approved for broader use due to the lack of traditional clinical evidence. This evidence is currently derived from randomized controlled trials that take months to years to complete. We think nothing illustrates the failure of the old methodologies more than the fact that a large number of people lose their lives because evidence generation simply takes too long to save them. We urgently need statisticians, mathematicians, and computer and data scientists to come together and develop, with the tools of modern digital sciences, new 21st century methodologies for “substantial evidence” generation in a global health crisis.

The signs of how powerful and game-changing these new methodologies can be are already here:
• AI is beginning to surpass human radiologists in its ability to diagnose disease.1
• AI and advanced machine learning methods are starting to show promise in their ability to accelerate the discovery of novel therapeutics. Many biopharmaceutical companies and AI startups are betting that with enough data, these methods will work so well that they will help to accelerate the discovery of new therapies for the novel coronavirus, 2019-nCoV.2
• Causal AI methods might be uniquely positioned to discover underlying causes of disease and clinical response to treatment on an individual level, making personalized medicine real.
This approach leverages the richness of multimodal patient data (genomic, molecular, imaging across cells and tissues, deep and digital phenotyping, labs, and clinical measures) across many individuals to train models that predict causal drivers of disease and response to treatment. Several results have been published demonstrating the ability to find causal molecular drivers by using AI to learn the complex networks underlying disease, with the goal of using these insights to better target treatments to patients in clinical trials and eventually at the point of care.3, 4

We envision a future where these new tools and methods, developed on solid mathematical grounding, will enable us to go beyond the restrictions of traditional statistics, which are limited by sample sizes, the inadequacy of p-values as the metric for statistical significance,5 and the limitations of multiple hypothesis testing. Looking at the big picture, they will reduce the time and cost of drug development substantially, and, more importantly, they will help to save the lives of patients desperately waiting for new, effective treatments.
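The "mean patient" problem described above is easy to make concrete with a toy simulation (all numbers below are invented for illustration): a drug that helps about two-thirds of patients and harms the rest still produces a large, highly significant mean benefit in a conventional two-group comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000

# Invented heterogeneous population: ~2/3 of treated patients improve by
# +2 units, ~1/3 worsen by -1 unit; both arms have the same noise level.
responder = rng.random(n) < 2 / 3
treatment = np.where(responder, 2.0, -1.0) + rng.normal(0, 1.5, n)
control = rng.normal(0, 1.5, n)

# The classical two-sample t-test sees only the group means.
t, p = stats.ttest_ind(treatment, control)
print(f"mean effect = {treatment.mean() - control.mean():.2f}, p = {p:.2e}")
print(f"treated patients made worse off: {(~responder).mean():.0%}")
```

The test reports a mean benefit near +1.0 with a vanishingly small p-value, yet roughly a third of treated patients were harmed; nothing in the trial-level summary reveals which third, which is exactly the gap the causal, individual-level methods above aim to close.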
https://www.appliedclinicaltrialsonline.com/view/pandemic-highlights-urgency-new-methodologies-evidence-generation
Context: In recent decades, natural products have undisputedly played a leading role in the development of novel medicines. Yet trends in the pharmaceutical industry at the level of research investments indicate that natural product research is neither prioritized nor perceived to be as fruitful in drug discovery programmes as incremental structural modifications and large-volume HTS screening of synthetics.

Aim: We seek to understand this phenomenon through insights from highly experienced natural product experts in industry and academia.

Method: We conducted a survey including a series of qualitative and quantitative questions related to current insights and prospective developments in natural product drug development. The survey was completed by a cross-section of 52 respondents in industry and academia.

Results: One recurrent theme is the dissonance between the high potential of NP as drug leads perceived by individuals and the survey participants’ assessment of overall industry- and/or company-level strategies and their success. The study’s industry and academic respondents did not perceive current discovery efforts as more effective than those of previous decades, yet industry contacts perceived higher hit rates in HTS efforts than academic respondents did. Surprisingly, many industry contacts were highly critical of prevalent company and industry-wide drug discovery strategies, indicating a high level of dissatisfaction within the industry.

Conclusions: These findings support the notion that there is an increasing gap between industry practitioners and academic experts in the perceived effectiveness of well-established, commercially widespread drug discovery strategies. This research seeks to shed light on this gap and aid in furthering natural product discovery endeavors through an analysis of current bottlenecks in industry drug discovery programmes.

Introduction

Historically, natural product (NP) development has been a field of immense interest to medical, commercial, and scientific communities worldwide. As isolation and purification techniques advanced, NP increasingly became prime candidates for drug leads and drug discovery efforts (Cragg et al., 1997; Heinrich and Gibbons, 2001; Heinrich, 2013). Their diversity characterizes them as a virtually limitless source of novel lead compounds. Yet in the last decade, the majority of multinational pharmaceutical companies have reduced NP Research and Development (R&D) expenditures (David et al., 2014). Many of the largest pharmaceutical companies are aggressively downsizing internal scientific research teams and reducing cost/risk by focusing instead on acquisitions of SMEs, which do the bulk of discovery “legwork” for a particular compound as it gets pushed through the pipeline. What are the elements behind this? What are the common drivers and barriers in natural product development? How can efforts to understand such drivers and barriers (Amirkia and Heinrich, 2014) enhance our ability to further leverage the potential of NPs? If NPs have historically been such an important source of new medicines, what insights can we gain into the NP drug development process of academic stakeholders as compared with the widely recognized slowdown of industry efforts? What are the differences in insights regarding the successes of NP drug discovery today between industry and academia?
Through this research, the authors seek to gain insight into these questions by directly soliciting the views of an unprecedentedly large panel of pharmaceutical industry experts who currently serve in senior positions in academic/commercial organizations. To the best of our knowledge, it is the first published survey of stakeholders in the NP drug discovery sector.

Background: Current Context of Drivers and Barriers in Natural Product Development

A brief but representative selection of NP development and drug discovery-related opinion, review, and primary literature published over the last two decades shows a range of varied, often contrasting viewpoints on the potential of NPs as drug leads/candidates (Table 1). The majority of published literature hails the potential of NPs as sources of structurally novel, highly diverse compounds and cites examples of how NPs comprise a high proportion of successfully marketed new medicines over the last 20 years. The voice of optimism is loud and clear and has generally overshadowed a number of critical voices which have pointed out major challenges in NP development, such as extraction and supply issues (McChesney et al., 2007). Additionally, many have added fuel to this debate by focusing on academia-industry partnership initiatives and inter-disciplinary approaches such as virtual screening methods and genomics efforts; one example is Shen’s paper of 2003, which outlined three main advantages of virtual screening of natural products. He argued that virtual screening provides higher hit rates than typical HTS assays, thus saving time and cost. Additionally, he considers it to be more effective in investigating the roughly 90% of “natural diversity” which has so far not been explored (defined as species which have yet to be studied systematically in research settings), and in improving the prediction of ADME/Tox and other drug-like properties, which may show promise in diminishing missed/failed hits (Shen et al., 2003; Bohlin et al., 2010).

Table 1. Selection of representative publications on the outlook of NPs as drug leads in modern drug discovery programs and their overall levels of optimism.

One limitation of many, if not all, of these studies is that, in essence, they are based either on an assessment of new drugs (leads) or are basically opinion papers. The authors normally had not engaged with a substantial number of stakeholders from within the pharmaceutical industry, most importantly those who currently work in the industry. Understandably so: not only is it challenging to track down a meaningful number of industry decision makers with experience in NP drug development, but perhaps the larger challenge is eliciting their views (often critical of their superiors) pertaining to their company’s strategy and/or industry trends. The authors believe that this internal lens, through angles such as commercial operations, strategic planning, research and development, and senior management, is essential in gaining a clearer understanding of the role of NP discovery and development as it contributes to drug development in general, as well as the gaps and potential advances in academia-industry partnerships to advance drug discovery efforts.
Methods: Engaging Industry Contacts

A panel of industry and academic contacts (most of whom are personal contacts of one or both of the authors) were personally invited to participate and submit insights to a natural products development survey which was hosted online (Google Forms: http://forms.google.com). A snowballing strategy was used to increase the number of contacts. Industry contacts represented many of the major multinational pharmaceutical companies, such as Merck, Novartis, GSK, Pfizer, AZ, and Bayer. The seniority of each respondent varied with respect to his or her organization. Titles of respondents included Chief Scientific Officer (CSO), President, Vice President (VP), Group Leader, Senior Analytical Chemist, and Senior Principal Scientist (Appendix 1 in Supplementary Material), among others. Academic contacts originated from eight different countries, including Brazil, Oman, New Zealand, the UK, and the USA. The majority of academic respondents were full-time academics, five of whom also hold senior roles in pharmaceutical-company-related organizations (consultancy, clinical research, and/or pharmaceutical entities). The panel is clearly limited in its geographical coverage of smaller pharmaceutical markets such as Asia, Japan, and Latin America; markets which represented approximately 11, 9, and 5% of 2014 total worldwide pharmaceutical sales, respectively (IMS Health, 2014). Nevertheless, barring the extreme of labeling the panel as strictly representative of “the industry,” the authors feel that the panel of contacts is generally representative of trends of interest within the industry.

There were four primary goals considered in designing the questionnaire: (1) to understand perceived drivers and barriers in NP drug discovery efforts; (2) to understand what respondents identify as “current preferred strategies” for discovering new medicines in industry today; (3) as HTS stands as a prevalent tool in drug discovery today, to elicit perceptions of the efficacy of NPs as compared with other classes of compounds in screens; and (4) to understand the respondents’ general outlook on future drug discovery as a whole. This approach would allow the authors to better understand the perceived effectiveness of past, present, and future NP drug discovery efforts and, more importantly, to compare any potential similarities and differences in insights between academic and industry respondents.

The survey consisted of a series of six quantitative and qualitative close-ended questions followed by 10 profile and background related questions. Close-ended questions with several choices had an “other” box for the respondent to fill in his/her response, which allows for valuable, straightforward feedback from respondents and helps overcome the limitations of extensive statistical analyses based on a small sample size. Multiple choice selections were displayed in randomized order for each survey so as to control for position bias in responses. The close-ended questions were:

(1) In your opinion, what are the top 2 current preferred strategies for drug discovery?
(2) Based on your experience or on your assessment, approximately how many agents based on natural products and alkaloids researched in commercial R&D facilities make it to market as pharmaceutical products?
(3) From your experience, what have been the major drivers to natural product development in industry?
(4) From your experience, what have been the major barriers to natural product development in industry?
(5) Drug discovery is a history of triumphs and failures. Compared to past decades, how successful is the industry today in discovering new medicines? (6) What is your outlook on the future viability (rate at which pharmaceuticals are developed and launched to market) of natural products, serving either as final pharmaceutical products or as leads to the development of the final pharmaceutical products? The six close-ended questions each had an open field for participants to provide additional thoughts. The survey closed with 52 completed responses after 14 weeks, spanning January to May 2015. Results and Discussion Overview of Findings One major, consistent theme across respondents was the dissonance between what survey participants in industry perceived as the potential of NPs as drug leads and overall industry- and/or company-level strategies. Large-scale, structural modification processes (i.e., HTS) have become Big Pharma's go-to strategy for homing in on successful leads. HTS typically avoids the need to continuously source and verify new NP material, which matches the most frequently cited barrier among industry contacts in our survey (i.e., a secure supply). Additionally, large HTS screening programmes are argued by many (Macarron et al., 2011) to be more cost-effective in the long run, which is also in line with the third most frequently cited barrier among our industry contacts (i.e., cost/funding/budget). Sample sizes of respondent groups are not large enough to perform useful statistical analysis, yet general results are summarized below (Table 2). Two other major themes emerged from participants writing insights in the open space provided after each close-ended question and are illustrated with a selection of verbatim statements pertinent to each theme. Ineffectiveness of Current HTS Drug Discovery Programmes Industry efforts which boast large libraries and cutting-edge screening technologies have gained momentum, which in turn has overshadowed smaller, more unique and fruitful discovery efforts. • "The industry focus on numbers (quantity vs. quality) has counted against natural products discovery—and the belief that supply of material on a suitable scale might be difficult (which may be a misconception)." • "Industry is driven by numbers and processes; HTS, you could include fragment screening in this too—or even billions of compounds on encoded libraries (as we have at [X] company). I am part of a group who strongly advocate the huge impact of proper attention to physical properties and efficiency as existing leads are optimized (sadly mostly derived from the numbers generated above). HTS yet is the adopted strategy; in my opinion is probably isn't the most preferred!" • "These approaches [High throughput screening (HTS), Combinatorial Chemistry] are favored by many pharmaceutical companies, even though they have not been notably successful." • "HTS depends on large libraries, most of which have been so thoroughly studied that their utility going forward must be considered modest." • "My understanding is that physicochemical modifications of existing leads represents the vast majority of drug discovery, and there are few places which are supporting anything beyond HTS or medicinal chemistry cycles." • "Based on our internal track record, the outcome of HTS or VS is heavily dependent on the quality (control) of the actives and their ligand efficiency and the access to orthogonal assays to confirm the activity.
These methods also complement each other and can be supported by additional methods, e.g., fragment-based. They are also generally easier to strip to the 'core' and obtain initial SAR. With natural products, you need to be lucky with the minor metabolites yielding some useful SAR. Nevertheless, our experience at X University…screening endogenous X species was successfully generating leads that were NOT pursued as chemists perceived the SAR work to have low feasibility." • "In industry, modification of existing structures whether already in-house identified compounds or to bypass other structures with patent protection is much more common. This allows for the creation of "me too" therapeutic agents. Bioprospecting is much more common in academia, but natural product identification seems to be decreasing on the whole. Whether this is purely due to funding issues or a broader shift in the field is not certain. Similarly, virtual/computation approaches are used in refining structure in industry but are essentially never used to de novo identify a drug. Many academic labs have used such strategies as well, but with few successes. HTS is still fairly common place in industry and is gaining greater traction in academic settings with more and more universities creating screening facilities. Serendipity is certainly an important part of drug discovery, especially in areas such as neurology, but no one would bet on winning the lottery to fund their lab." • "HTS has been an abject failure in terms of discovery, due in most cases to not thinking about transfer across membrane issues when trying to go from a hit to an active in cells/animals. If one uses phenotypic screening (a dirty term amongst screeners in Pharma!), then if you see a valid effect, you will be well ahead of any HTS assay in vitro." • "In Pharma, HTS is the buzzword. I know of screens where over 1 million synthetic compounds have produced nothing, many times. Natural products in phenotypic screens are between 10,000 and 100,000 depending upon what is known about potential mechanisms etc." The Second Key Theme Centers on the Lack of Organizational Support for, and Interest in, NP Drug Development Efforts Industry strategy over the last few decades has taken shape against an NP-centric approach and is unlikely to change. • "Don't fit company strategy." • "Executive management fiat. Senior and executive scientific management at most Big Pharma wrote off natural products in the late '80s and early '90s with the advent of HTS, believing that HTS would have all of the answers." • "Hostility; No support." • "Lack of will to study them." • "Natural product discovery tends to require a group to champion the approach. In my experience med chemists don't switch between synthetic chemistry and natural product chemistry. The latter requires an infrastructure and senior champions who believe in the potential of the approach. The novelty of the structures that result often go beyond anything that a med chemist might consider synthesizing as such this can take you to places you wouldn't have got to by any other route." • "Screening of synthetic chemicals in massive libraries is cheap and most often results in hits that can be optimized as leads effective against sign targets. This process discounts any deep understanding of the biological processes involved in a disease state, other than the role played by an individual target biomolecule (kinases, etc.).
And, the chemistry involved in elaborating these often simple structures is easy and high throughput—so from the chemists standpoint—why knock yourself out with NP modifications which are often more difficult? Regrettably in industry little credit is given for the extra effort and overall productivity will appear low." • "The major driver to natural product development in industry is to eliminate it, which is what most of the large pharma companies have in fact done." • "From my perspective, today, natural products make only sense as starting materials for further optimization. I am convinced that we will see less and less original natural products that make it to the market in human pharma (animal health may be a different story). Also TCM et al. may be a different story." • "NPs are currently not the "flavor of the month or decade" but now days, chemists are looking for structural leads that may well have activity, due to the failure of combi-chem as a discovery tool." It is interesting to note that such an open question in fact elicited only two key reasons why natural products are poorly represented in such drug discovery processes. Open questions are often used to elicit a wider set of views (Heinrich et al., 2009), and here a clear focus on two concerns emerged, indicating a very strong consensus that these are seen as the key issues. Perceived Viability of Natural Products among Current Drug Discovery Programmes Gaining insights into perceptions of the drivers and barriers of NP drug discovery is a helpful yet limited step in understanding the drug development process. These data do not convincingly indicate the "effectiveness" of NPs in drug discovery as compared to other commonly researched classes of compounds. Thus, we asked respondents to provide an approximate ratio of "success rates" for several classes of compounds in the following way: Based on your experience or on your assessment, approximately how many synthetic [or Biologics, Natural Products, Alkaloids] agents researched in commercial R&D facilities make it to market as pharmaceutical products? This resulted in four sets of answers, one for each of these groups. Interestingly, all perceived "hit rates" reported by industry respondents are higher than those reported by academic respondents (Table 3); industry respondents' "hit rates" were higher by a factor ranging between 1.8 and 3.1. This may reaffirm our other finding that, in general, senior stakeholders in industry typically do support NP-centric discovery strategies, and hence the higher perceived frequency of "hits." Conversely, this indicates that screening programmes related to academic efforts, particularly with respect to HTS, are not perceived as being as useful as more widespread industrial efforts. Does this mean that industry is more "productive" than academia in screening for natural products? Not necessarily, as this question does not attempt to equalize all screening methods but rather to gain a general indication of respondents' perceptions toward screening efforts. Additionally, it is surprising to note that the gap between "hit rates" reported for NPs and synthetics is larger for industry than for academic respondents: 8-times vs. 5-times, respectively.
This also reaffirms our previous observation that many working in industry, regardless of their role and their level of dissatisfaction with the strategic direction of their organization, still perceive strong relative potential in NP drug development as compared with currently prevalent synthetic-centric strategies. Table 3. Respondents' estimates of how many agents researched in commercial R&D facilities make it to market as pharmaceutical products (defined as the "hit rate"). Our goal in asking this question was two-fold: to gain a general indicator of perceived "success/hit rates" of NPs against other compound classes, and to compare these perceived "success rates" against previous claims published over the years by industry observers (Shen et al., 2003). There are two limitations to this question. The first is that each respondent may define "researched" in a completely different way. To one respondent a compound is not "researched" until it, say, enters an HTS program, while to another, a compound merely existing in a company compound library may count as being "researched." The second is the definition of the compound class (for example: where does a NP which has been structurally modified fit?). Of course, there are numerous variables in any screen (the compound library itself, target/ligands, parameters for defining a successful "hit," purpose of the screen, etc.) that make a particular screen entirely unique and incomparable to another. Potential Steps Forward Since 35 of the 52 respondents (35 of all 105 individual answers to this open question) listed HTS as a "top preferred current strategy," it is logical that this should be a focus of our analysis. Many publications have cited barriers to NP drug development. In 2004 Jean-Yves Ortholand, who at the time worked at Merck in France, listed six major drawbacks in programmes screening natural products: expense, time, novelty, tractability, scale-up, and intellectual property (Ortholand and Ganesan, 2004). Looking to our industry feedback, each of Ortholand's "drawbacks" is corroborated to some extent, with a particular focus on supply and cost/funding. It is noteworthy that these two most frequently cited barriers do not directly involve the actual screen itself but rather affect the feasibility of pre/post-screen efforts. The most frequently cited barriers are those which prevent a screen from happening in the first place (i.e., budget/cost or company strategy) or from moving from early-stage screening to pre-clinical development (i.e., supply, scale-up). Therefore, besides proposing the obvious, that costs should be reduced and/or funding increased for NP drug discovery efforts, are there potential cost-sensitive resolutions to the supply/scale-up barrier? Our previous research (Amirkia and Heinrich, 2014) looked at the problems of supply in the context of source-species abundance data for pharmaceutical alkaloids. We showed that the source species of pharmaceutical alkaloids are on average 4.3 times more "abundant" [as defined by the Global Biodiversity Information Facility (GBIF) species abundance dataset] than the source species of a randomly picked non-pharmaceutical alkaloid. Alkaloid-containing species yielding medicines are thus much more widely distributed than species which yield alkaloids not used pharmaceutically. This suggests that such a dataset is well-suited to modeling the supply constraints which are so often cited in NP-related literature.
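To make the proposed abundance-based pre-screen concrete, here is a minimal sketch of how occurrence counts can be pulled programmatically from the public GBIF web service; the species names and the cut-off of 1,000 records are illustrative assumptions, not values from our analysis:

```python
# Minimal sketch: query the public GBIF occurrence API for a crude
# "abundance" signal (total recorded occurrences) of a candidate source
# species. Species names and the threshold below are illustrative only.
import requests

GBIF_API = "https://api.gbif.org/v1/occurrence/search"

def occurrence_count(species_name: str) -> int:
    """Return the total number of GBIF occurrence records for a species."""
    resp = requests.get(GBIF_API, params={"scientificName": species_name, "limit": 0})
    resp.raise_for_status()
    return resp.json()["count"]  # GBIF reports the total match count

def passes_supply_filter(species_name: str, min_records: int = 1000) -> bool:
    """Crude pre-screen: flag candidate species with few recorded occurrences."""
    return occurrence_count(species_name) >= min_records

if __name__ == "__main__":
    for sp in ["Catharanthus roseus", "Galanthus nivalis"]:
        n = occurrence_count(sp)
        print(f"{sp}: {n} GBIF records -> passes filter: {n >= 1000}")
```

The same endpoint also supports faceting (e.g., by country or year), so, in principle, the key questions listed below could be answered from the same kind of query.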
Although this initial analysis was performed on alkaloids, it can be applied to any class of NPs. Taken together, our data show that such an analysis can be augmented with other metrics, such as the number of countries in which a source species naturally occurs, or the density of source-species occurrences (Amirkia and Heinrich, 2014). Here key questions include: • Is the source species widely spread across one region or densely found in one small area? • In how many countries does the species naturally grow? • How are occurrences of the species in the dataset distributed across time? Were instances discovered and recorded decades ago, or have records been relatively consistent? A systematic assessment of a species' abundance can thus play a constructive pre-screening or filtration role in NP drug discovery programmes, one which directly addresses a key concern of the stakeholders who contributed to this study. Costs for such analyses are minimal compared to the R&D budgets common to pharmaceutical companies today. Additionally, such analyses need not be seen as applicable exclusively to screening programmes for candidates or leads but may prove to be of value in other NP-related endeavors. For example, companies which are heavily invested or interested in TCM, Ayurveda, and other traditional-medicine-centric portfolios may use this approach to optimize procurement or investment processes. Compounds which originate from source species that are becoming increasingly abundant may hold more promise for long-term sustainability in production and marketability. Conclusions NPs are seen as important sources of new medicines by industry stakeholders; yet the industry is spending fewer and fewer resources on their discovery and development. In the last decade numerous voices have highlighted this concern, the vast majority of them originating from academic and industry observers (Niedergassel and Leker, 2009; Tralau-Stewart et al., 2009; Khanna, 2012). Two thirds of panel responses cited HTS as the preferred strategy for drug discovery in industry today, and NPs are seen as yielding higher "hit rates"; thus industry attention must not turn away from NPs if the industry seeks innovation. This gap must be explored if we are to move natural product drug discovery forward, and virtual non-ligand machine learning, at least for alkaloids, can serve as a starting point to guide multidisciplinary drug discovery efforts. While this study has some limitations, both in terms of the overall size of the sample and the (self-)selection of participants, the voices from within clearly highlight some key concerns, which can be overcome by implementing a modified strategy in NP-driven drug development. Minimizing one of the most commonly cited barriers (i.e., supply) by seeking to quantify it, and by developing strategies for incorporating solutions at an early stage of screening programs, is one approach which has demonstrated promise for alkaloids and can be applied to other NP drug discovery efforts. The work also strengthens the case that "weeds" are an important source of drugs (Stepp, 2004), but offers a quantifiable parameter to assess such "weediness." Continuing along the current path of large "numbers-driven" screens which boast millions of compounds has not only irritated many in the industry, at all levels, but more importantly has stifled the growth and development of the single most productive source of potential leads for new medicines to date: nature.
Conflict of Interest Statement The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Acknowledgments Much appreciation to all respondents who contributed to this research. A special thanks to those industry contacts who, despite their extremely busy schedules, found time to elaborate on their insights, candidly share their views about company- and industry-level developments, and help distribute the questionnaire. This project received no external funding. Supplementary Material The Supplementary Material for this article can be found online at: https://www.frontiersin.org/article/10.3389/fphar.2015.00237 Abbreviations GBIF, Global Biodiversity Information Facility; HTS, high throughput screening; NP, natural products; SAR, structure-activity relationships; SD, standard deviation; SME, small medium enterprise; VS, virtual screening. Footnotes 1. ^Respondents could select or list more than one response. All responses were added together, and a "response rate" was calculated as the percentage that a particular response represented of all responses. 2. ^Respondents selected one response. All responses were added together, and a "response rate" was calculated as the percentage that a particular response represented of all responses. References Amirkia, V., and Heinrich, M. (2014). Alkaloids as drug leads–A predictive structural and biodiversity-based analysis. Phytochem. Lett. 10, xlviii–liii. doi: 10.1016/j.phytol.2014.06.015 Baker, D. D., Chu, M., Oza, U., and Rajgarhia, V. (2007). The value of natural products to future pharmaceutical discovery. Nat. Prod. Rep. 24, 1225–1244. doi: 10.1039/b602241n Balunas, M. J., and Kinghorn, A. D. (2005). Drug discovery from medicinal plants. Life Sci. 78, 431–441. doi: 10.1016/j.lfs.2005.09.012 Bohlin, L., Göransson, U., Alsmark, C., Wedén, C., and Backlund, A. (2010). Natural products in modern life science. Phytochem. Rev. 9, 279–301. doi: 10.1007/s11101-009-9160-6 Butler, M. S. (2004). The role of natural product chemistry in drug discovery. J. Nat. Prod. 67, 2141–2153. doi: 10.1021/np040106y Chin, Y. W., Balunas, M. J., Chai, H. B., and Kinghorn, A. D. (2006). Drug discovery from natural sources. AAPS J. 8, E239–E253. doi: 10.1007/BF02854894 Corson, T. W., and Crews, C. M. (2007). Molecular understanding and modern application of traditional medicines: triumphs and trials. Cell 130, 769–774. doi: 10.1016/j.cell.2007.08.021 Cragg, G. M., and Newman, D. J. (2001). Natural product drug discovery in the next millennium. Pharm. Biol. 39(Suppl. 1), 8–17. doi: 10.1076/phbi.39.s1.8.0009 Cragg, G. M., Newman, D. J., and Rosenthal, J. (2012). The impact of the United Nations convention on biological diversity on natural products research. Nat. Prod. Rep. 29, 1407–1423. doi: 10.1039/c2np20091k Cragg, G. M., Newman, D. J., and Snader, K. M. (1997). Natural products in drug discovery and development. J. Nat. Prod. 60, 52–60. doi: 10.1021/np9604893 David, B., Wolfender, J. L., and Dias, D. A. (2014). The pharmaceutical industry and natural products: historical status and new trends. Phytochem. Rev. 14, 299–315. doi: 10.1007/s11101-014-9367-z Gullo, V. P., McAlpine, J., Lam, K. S., Baker, D., and Petersen, F. (2006). Drug discovery from natural products. J. Indust. Microbiol. Biotechnol. 33, 523–531. doi: 10.1007/s10295-006-0107-2 Harvey, A. L. (2008). Natural products in drug discovery. Drug Discov. Today 13, 894–901.
doi: 10.1016/j.drudis.2008.07.004 Harvey, A. L., Edrada-Ebel, R., and Quinn, R. J. (2015). The re-emergence of natural products for drug discovery in the genomics era. Nat. Rev. Drug Discov. 14, 111–129. doi: 10.1038/nrd4510 Heinrich, M. (2013). “Ethnopharmacology and drug discovery,” in Elsevier Reference Module in Chemistry, Molecular Sciences and Chemical Engineering, eds Reedijk, J. (Waltham, MA: Elsevier), 1–24. Heinrich, M., Edwards, S., Moerman, D. E., and Leonti, M. (2009). Ethnopharmacological field studies: a critical assessment of their conceptual basis and methods. J. Ethnopharmacol. 124, 1–17. doi: 10.1016/j.jep.2009.03.043 Heinrich, M., and Gibbons, S. (2001). Ethnopharmacology in drug discovery: an analysis of its role and potential contribution. J. Pharm. Pharmacol. 53, 425–432. doi: 10.1211/0022357011775712 IMS Health. (2014). Total Unaudited and Audited Global Pharmaceutical Market by Region. IMS Health Report. Jachak, S. M., and Saklani, A. (2007). Challenges and opportunities in drug discovery from plants. Curr. Sci. 92, 1251. Khanna, I. (2012). Drug discovery in pharmaceutical industry: productivity challenges and trends. Drug Discov. Today 17, 1088–1102. doi: 10.1016/j.drudis.2012.05.007 Kingston, D. G. (2010). Modern natural products drug discovery and its relevance to biodiversity conservation. J. Nat. Prod. 74, 496–511. doi: 10.1021/np100550t Koehn, F. E., and Carter, G. T. (2005). The evolving role of natural products in drug discovery. Nat. Rev. Drug Discov. 4, 206–220. doi: 10.1038/nrd1657 Lam, K. S. (2007). New aspects of natural products in drug discovery. Trends Microbiol. 15, 279–289. doi: 10.1016/j.tim.2007.04.001 Li, J. W. H., and Vederas, J. C. (2009). Drug discovery and natural products: end of an era or an endless frontier? Science 325, 161–165. doi: 10.1126/science.1168243 Macarron, R., Banks, M. N., Bojanic, D., Burns, D. J., Cirovic, D. A., Garyantes, T., et al. (2011). Impact of high-throughput screening in biomedical research. Nat. Rev. Drug Discov. 10, 188–195. doi: 10.1038/nrd3368 McChesney, J. D., Venkataraman, S. K., and Henri, J. T. (2007). Plant natural products: back to the future or into extinction? Phytochemistry 68, 2015–2022. doi: 10.1016/j.phytochem.2007.04.032 Mishra, K. P., Ganju, L., Sairam, M., Banerjee, P. K., and Sawhney, R. C. (2008). A review of high throughput technology for the screening of natural products. Biomed. Pharmacother. 62, 94–98. doi: 10.1016/j.biopha.2007.06.012 Niedergassel, B., and Leker, J. (2009). Open innovation: chances and challenges for the pharmaceutical industry. Future Med. Chem. 1, 1197–1200. doi: 10.4155/fmc.09.107 Ortholand, J. Y., and Ganesan, A. (2004). Natural products and combinatorial chemistry: back to the future. Curr. Opin. Chem. Biol. 8, 271–280. doi: 10.1016/j.cbpa.2004.04.011 Paterson, I., and Anderson, E. A. (2005). The renaissance of natural products as drug candidates. Science 310, 451. doi: 10.1126/science.1116364 Rishton, G. M. (2008). Natural products as a robust source of new drugs and drug leads: past successes and present day issues. Am. J. Cardiol. 101, S43–S49. doi: 10.1016/j.amjcard.2008.02.007 Shen, J., Xu, X., Cheng, F., Liu, H., Luo, X., Shen, J., et al. (2003). Virtual screening on natural products for discovering active compounds and target information. Curr. Med. Chem. 10, 2327–2342. doi: 10.2174/0929867033456729 Shu, Y. Z. (1998). Recent natural products based drug development: a pharmaceutical industry perspective. J. Nat. Prod. 61, 1053–1071. 
doi: 10.1021/np9800102 Stepp, J. R. (2004). The role of weeds as sources of pharmaceuticals. J. Ethnopharmacol. 92, 163–166. doi: 10.1016/j.jep.2004.03.002 Tralau-Stewart, C. J., Wyatt, C. A., Kleyn, D. E., and Ayad, A. (2009). Drug discovery: new models for industry–academic partnerships. Drug Discov. Today 14, 95–101. doi: 10.1016/j.drudis.2008.10.003 Vuorela, P., Leinonen, M., Saikku, P., Tammela, P., Rauha, J. P., Wennberg, T., et al. (2004). Natural products in the process of finding new drug candidates. Curr. Med. Chem. 11, 1375–1389. doi: 10.2174/0929867043365116 Keywords: natural products, drug discovery, academia-industry links, Big Pharma, HTS, strategy Citation: Amirkia V and Heinrich M (2015) Natural products and drug discovery: a survey of stakeholders in industry and academia. Front. Pharmacol. 6:237. doi: 10.3389/fphar.2015.00237 Received: 25 July 2015; Accepted: 02 October 2015; Published: 26 October 2015. Edited by: Chiranjib Chakraborty, Galgotias University, India Reviewed by: Shuowei Cai, University of Massachusetts Dartmouth, USA; Rahman M. Mizanur, US Army Medical Research Institute of Infectious Diseases, USA Copyright © 2015 Amirkia and Heinrich. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
https://www.frontiersin.org/articles/10.3389/fphar.2015.00237/full
London-headquartered clinical-stage AI-enabled drug discovery company BenevolentAI has begun trading on Euronext Amsterdam. The listing follows the completion of its business combination with Amsterdam-listed Odyssey, which agreed last year to buy BenevolentAI in a deal valuing the British-based medtech firm at up to €1.5 billion after the transaction. The merger represents the largest ever European SPAC merger and one of Amsterdam's biggest biotech listings, according to a statement by the companies. The completion of the business combination with Odyssey, and the proceeds from the transaction, will further enable BenevolentAI to accelerate the development of its clinical pipeline and continue investing in its technology platform. Joanna Shields, CEO, BenevolentAI, said: "Our AI platform is fully operational, scientifically validated, and supports an in-house pipeline of over 20 platform-generated drug programmes and successful commercial collaborations. Through this listing, we are well-placed to strengthen our leadership position, ensuring we can continue to accelerate investment in the development of our pipeline and technology." Through the combined capabilities of its AI platform, scientific expertise and wet-lab facilities, the company is well-positioned to deliver novel drug candidates with a higher probability of clinical success than those developed using traditional methods. The startup powers a growing in-house pipeline of over 20 drug programmes, spanning from target discovery to clinical studies.
https://tech.eu/2022/04/25/londons-benevolentai-lists-on-euronext-amsterdam-to-boost-drug-discovery-in-europe
Posted: Wed, 01/12/2022 - 10:00 Verge Genomics is a next-generation biopharmaceutical start-up using human genomics to accelerate development of life-saving treatments for neurodegenerative diseases. Our platform uses patient tissue profiling, human genetics, and machine learning to identify new therapeutic gene targets, predict effective drugs, and stratify patient subpopulations for increased clinical success. Verge's approach offers a breakthrough opportunity to identify drugs that dramatically improve patient outcomes and fundamentally lower the cost curve of pharmaceutical development. The newly formed Translational Genomics team is seeking a creative and motivated Data Scientist / Computational Biologist. The Data Scientist will work closely with PhD scientists to design and execute analyses that bring forth actionable insights for the drug discovery process. This position will make extensive use of large internal and external biological datasets, such as high throughput sequencing, proteomics, drug development, toxicity, and chemical tractability databases to aid in prioritization of novel drug targets and biomarkers for severe neurodegenerative diseases. Working with a diversity of biological data types and analyses will provide an exciting and dynamic experience with potential for rapid growth of responsibilities. The position is open to remote work with occasional travel to Verge in South San Francisco. What You'll Do - Interface with scientists across computational, translational genomics, and target validation/exploratory biology groups to understand unmet needs and help translate human -omics data insights into actionable experimental follow-up and drug target decisions - Develop scripts to interrogate large biological datasets, document analyses, perform basic statistical tests, distill sound conclusions, and communicate findings to the scientific team - Apply machine learning techniques to identify non-obvious connections between gene, protein, drug, or other feature types and probability of drug target success - Proactively identify new data types and analysis methods of value to drug target or biomarker discovery efforts for neurodegenerative diseases Job Requirements & Technical Competencies, Required: - MS or PhD degree in computational biology, computer science, data science, statistics, or similar discipline plus zero to two years post-graduate experience, preferably in biotech industry roles - Fluency with Python and/or R programming, preferably in cloud-based environments such as a Linux/Unix environment on Amazon Web Services - Familiarity with SQL or other relational database querying and theory - Experience working with high throughput genomics datasets of at least one type, such as DNA or RNA sequencing from Illumina platforms - Ability to communicate technical details to a diverse audience and collaborate across our organization Job Requirements & Technical Competencies, Nice to Have: - Fluency with basic concepts of molecular biology and multiple contemporary “-omics” methods is a plus - Experience with machine learning techniques and scikit-learn package is a plus - Familiarity with software version control (Git repositories) and experience documenting analyses or script development with Jupyter notebooks or similar is a plus - Experience working with additional large biological data types, such as proteomics, metabolomics, imaging data, gene ontology databases, etc. 
is a plus - Experience with neurological disease, drug discovery or related research is a plus Help us revolutionize the way drugs are discovered & developed: The startup nature of Verge Genomics provides multiple growth opportunities into other areas of the company. As one of the early employees at Verge, your work will have a direct impact on the foundation of a groundbreaking new drug development model. In addition to competitive compensation and benefits, we offer perks like unlimited vacation/sick days, on-site gym access, and free lunch.
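The posting above describes applying machine learning to link gene-, protein-, or drug-level features to the probability of drug target success. As a purely illustrative sketch of that kind of analysis (the features and data here are hypothetical and synthetic, not Verge's actual pipeline), a minimal scikit-learn example:

```python
# Illustrative sketch only: a simple classifier relating gene-level
# features to a historical "target success" label. Feature meanings and
# the synthetic data are hypothetical, not Verge Genomics' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features per candidate gene: expression fold-change,
# genetic association score, and a chemical-tractability score.
X = rng.normal(size=(500, 3))
# Hypothetical label: did a drug programme against this target succeed?
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")

clf.fit(X, y)
print("feature importances:", clf.feature_importances_)  # which features drive predictions
```

In practice the feature matrix would come from the internal and external -omics, toxicity, and tractability databases the posting mentions, and the label from historical drug-programme outcomes.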
https://cs.uoregon.edu/node/1484
PARAMUS, N.J.–(BUSINESS WIRE)–PsychoGenics Inc. announced the appointment of Stephen Morairty, Ph.D., as Vice President of Translational Neuroscience and Jean-Sebastien Valois, MSc, as Vice President of Engineering and AI Development. These key leadership roles support PsychoGenics as it grows its preclinical CRO and drug discovery operations, including novel technology platforms called SmartCube®, PhenoCube®, NeuroCube® and eCube™. Dr. Morairty brings over 30 years of research and drug development experience focusing on translational neuroscience and EEG biomarkers. For the last 22 years he was at SRI International, most recently as Senior Director, leading collaborative efforts with industry on preclinical drug development through their Center for Neuroscience. He has collaborated with over 30 companies across broad indications including neurodegeneration (Alzheimer’s disease and narcolepsy), neuropsychiatry (bipolar disorder and schizophrenia) and neurodevelopmental disorders (autism). “We are pleased to welcome Dr. Morairty to lead our expanding translational neuroscience team,” said Dr. Emer Leahy, President and CEO of PsychoGenics. “His experience, creativity and knowledge will be invaluable as we expand our EEG group and integrate eCube™, our AI-based EEG screening platform, into our partnered and internal drug discovery efforts. I am confident Stephen will make immediate contributions to the company’s success and add tremendous value to the PsychoGenics drug pipeline, our clients and collaborators, and the millions of patients in need of additional treatment options.” Jean-Sebastien Valois joins PsychoGenics with over 25 years of artificial intelligence experience collaborating with world-class researchers and scientists at Uber, NASA, Carnegie Mellon University (CMU) and Aurora Innovations. During his time at Uber and Aurora, he pioneered deep learning methods to further self-driving vehicle research. While at NASA he worked on the Space Vision System, an AI robotics-aid which provided precise payload localization data for astronauts’ docking modules on ISS and Shuttle missions. At CMU, he led teams of researchers on several successful Defense Advanced Research Projects Agency (DARPA) initiatives related to AI in robotics. Serendipitously, Mr. Valois was a member of the research collaboration between PsychoGenics and CMU to develop the original SmartCube® discovery platform from 2003 to 2005. “We are excited to welcome Jean-Sebastien back to PsychoGenics where he will be responsible for leading our talented data team to develop the next generation of AI software to support our phenotypic drug discovery platforms and programs,” said Dr. Daniela Brunner, Chief Innovation Officer. “His industrial and academic career on the cutting-edge of artificial intelligence furthers our mission to employ our drug discovery engine to find and validate novel treatments for patients.” About PsychoGenics’ AI–driven Drug Discovery PsychoGenics’ proprietary, high-throughput platforms combine behavioral and physiological measures with developments in artificial intelligence to phenotypically discover drug candidates with potential utility across the spectrum of CNS disease indications. Using these platforms and other capabilities, PsychoGenics collects and analyzes multidimensional preclinical data of novel compounds and disease mouse model phenotypes, and employs proprietary machine learning algorithms to find new treatments. 
PsychoGenics has screened and optimized diverse and targeted libraries of compounds, delivering thousands of hits from which numerous neuropsychiatric clinical drug candidates have emerged, the most advanced of which is SEP-363856 (Ulotaront), a novel treatment for schizophrenia now in Phase 3, discovered in partnership with Sunovion. PsychoGenics' phenotypic drug discovery approach can significantly reduce the time and cost of reaching approved Investigational New Drug status, potentially resulting in the identification of a viable drug candidate from a few hundred analogs tested in lead optimization in just over a year. This compares favorably to most target-driven programs, which typically synthesize thousands of analogs over many years. Due to its target-agnostic nature, the approach has increased the probability of successfully finding drug candidates with novel first-in-class mechanisms of action and improved side effect profiles that are suitable for treating the symptoms of neuropsychiatric disorders. About PsychoGenics PsychoGenics Inc. and its discovery arm, PGI Drug Discovery LLC (collectively known as PsychoGenics), have pioneered the translation of rodent behavioral and physiological responses into robust, high-throughput and high-content phenotyping. PsychoGenics' drug discovery platforms, SmartCube®, NeuroCube®, PhenoCube®, and eCube™, have been used in shared-risk partnerships with major pharma companies including Sunovion, Roche, and Karuna, resulting in the discovery of several novel compounds now in clinical trials or advanced preclinical development. PsychoGenics' capabilities also include standard behavioral testing, electrophysiology, translational EEG, molecular biology, microdialysis, and quantitative immunohistochemistry. In addition, the company offers a variety of in-licensed transgenic mouse models that support research in areas such as Huntington's disease, autism spectrum disorders, psychosis/schizophrenia, depression, PTSD, Alzheimer's disease, Parkinson's disease, muscular dystrophy, ALS, and seizure disorders. For more information on PsychoGenics Inc., visit www.psychogenics.com. Contacts Dr. Emer Leahy President & CEO (914) 406-8008 [email protected] Note/Warning: Autistic people have fought the inclusion of ABA in therapy for us since before Autism Speaks, and other non-Autistic-led autism organizations, started lobbying legislation to get it covered by insurances and Medicaid. ABA was originally sold to parents with the myth that it would keep their Autistic child out of an institution. Today, parents are told that with early intervention therapy their child will either be less Autistic or no longer Autistic by elementary school, and can be mainstreamed in typical education classes. ABA is very expensive to pay for out of pocket. Essentially, Autism Speaks has justified it by arguing that the big price tag up front will offset the overall burden on resources over an Autistic's lifetime. The recommendation for this therapy is 40 hours a week for children and toddlers. The original study that showed the success rate of ABA to be 50% has never been replicated. In fact, the study of ABA by the United States Department of Defense was denounced as a failure. Not just once, but multiple times. Simply stated: ABA doesn't work. In study after repeated study: ABA (conversion therapy) doesn't work. What more recent studies do show: Autistics who experienced ABA therapy are at high risk of developing PTSD and other lifelong trauma-related conditions.
Historically, the autism organizations promoting ABA as a cure or solution have silenced Autistic advocates’ opposition. ABA is also known as gay conversion therapy. The ‘cure’ for Autistics not born yet is the prevention of birth. The ‘cure’ is a choice to terminate a pregnancy based on ‘autism risk.’ The cure is abortion. This is the same ‘cure’ society has for Down Syndrome. This is eugenics 2021. Instead of killing Autistics and disabled children in gas chambers or ‘mercy killings’ like in Aktion T4, it’ll happen at the doctor’s office, quietly, one Autistic baby at a time. Different approaches yes, but still eugenics and the extinction of an entire minority group of people. Fact: You can’t cure Autistics from being Autistic. Fact: You can’t recover an Autistic from being Autistic. Fact: You can groom an Autistic to mask and hide their traits. Somewhat. … however, this comes at the expense of the Autistic child, promotes Autistic Burnout (this should not be confused with typical burnout, Autistic Burnout can kill Autistics), and places the Autistic child at high risk for PTSD and other lifelong trauma-related conditions. [Note: Autism is NOT a disease, but a neurodevelopmental difference and disability.] Fact: Vaccines Do Not Cause Autism.
https://internationalbadassactivists.org/2021/12/15/fyi-psychogenics-appoints-dr-stephen-morairty-as-vice-president-translational-neuroscience-and-jean-sebastien-valois-as-vice-president-engineering-and-ai-development-december-14-2021/
On Thursday, 1 cm of rain fell. How many liters of water fell on a rectangular garden with dimensions of 22 m x 35 m? Correct answer: 7700 l (a short worked computation is given after the related problems below). Related math problems and questions: - Rainfall How many liters of water fell in a garden 32 m long and 8 m wide, if 8 mm of rain fell? - Rainfall On a rectangular garden 25 m in length and 20 m in width, 4 mm of rain fell. Express as a fraction in basic form what part of a 60-hectolitre tank we could fill with this water. - Rain On a garden shaped as a rectangle measuring 15 m by 20 m, 3 mm of rain fell. How many liters of water rained on the garden? - Water level How high is the water in a swimming pool 37 m long and 15 m wide, if an inlet valve is opened for 10 hours, flowing 12 liters of water per second? - The cuboid The cuboid is full to the brim with water. The external dimensions are 95 cm, 120 cm, and 60 cm. The thickness of all walls and the bottom is 5 cm. How many liters of water fit into the cuboid? - Ice and water We want to cover a rectangular rink with dimensions of 55 m and 25 m with a 4 cm thick layer of ice. How many liters of water do we need if, after freezing, water increases its volume by 10%? - Water overflow A rectangular container that has a length of 30 cm, a width of 20 cm, and a height of 24 cm is filled with water to a depth of 15 cm. When an additional 6.5 liters of water are poured into the container, some water overflows. How many liters of water over - Cuboid aquarium A cuboid measures 25 by 30 cm. How long is the third side if the cuboid contains 30 liters of water? - Water in aquarium In a cuboid-shaped aquarium with internal bottom dimensions of 25 cm and 30 cm there are 9 liters of water. Calculate the areas which are wetted by the water. - Annual rainfall The average annual rainfall is 686 mm. How many liters will fall on a 1-hectare field? - Freezer The freezer has the shape of a cuboid with internal dimensions of 12 cm, 10 cm, 30 cm. A layer of ice 23 mm thick formed on the inner walls (and on the opening) of the freezer. How many liters of water will drain if we defrost the freezer? - Aquarium There are 15 liters of water in a block-shaped aquarium with internal bottom dimensions of 25 cm and 30 cm. Find the area of the water-wetted surfaces. Express the result in dm square. - Rainwater 6 mm of water rained on a garden with an area of 25 acres. How many 12-liter cans of water would we need to water this garden just as much? - Bricks wall There are 5000 bricks. How high a wall, 20 cm thick, around an area measuring 20 m by 15 m can be built from these bricks? Brick dimensions are 30 cm, 20 cm and 10 cm. - Snails How many liters of water will fit in an aquarium with bottom dimensions of 30 cm and 25 cm and a height of 60 cm, if we pour water up to a height of 58 cm? How many snails at most can we keep in the aquarium, if we know that snails need 600 cm³ of water fo - Aquarium The box-shaped aquarium is 40 cm high; the bottom has dimensions of 70 cm and 50 cm.
Simon wanted to create an exciting environment for the fish, so he fixed three pillars to the bottom. They all have the shape of a cuboid with a square base. The base edg - Pool coating How many tiles 25 cm × 15 cm need to coat the bottom and sidewalls of the pool with bottom dimensions 30 m × 5 m, if the pool can fit up to 271500 liters of water?
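A worked computation for the rainfall problem at the top of this page (area times rainfall depth):

$$V = A \cdot h = (22 \times 35)\,\mathrm{m}^2 \times 0.01\,\mathrm{m} = 7.7\,\mathrm{m}^3 = 7700\ \text{liters}.$$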
https://www.hackmath.net/en/math-problem/4663
Calculate the area of an isosceles trapezoid whose bases are in the ratio 5:3, whose arm is 6 cm long, and whose height is 4 cm (a worked solution is given after the list of related problems below). - Inner angles The inner angles of the triangle are 30°, 45° and 105° and its longest side is 10 cm. Calculate the length of the shortest side; write the result in cm to two decimal places. - Water channel The cross-section of the water channel is a trapezoid. The width of the bottom is 19.7 m, the width of the water surface is 28.5 m, and the side walls have slopes of 67°30' and 61°15'. Calculate how much water flows through the channel in 5 minutes if the water flo - Type of triangle How do I find the triangle type if the angle ratio is 2:3:7? - Trapezoid - RR Find the area of the right-angled trapezoid ABCD with the right angle at vertex A; a = 3 dm, b = 5 dm, c = 6 dm, d = 4 dm - Mirror How far must Paul place a mirror to see the top of a tower 12 m high? The height of Paul's eyes above the horizontal plane is 160 cm and Paul is 20 m distant from the tower. - Roof 7 The roof has the shape of a regular quadrangular pyramid with a base edge of 12 m and a height of 4 m. What percentage is folds and waste if 181.4 m² of plate was consumed in construction? - Circle and rectangle A rectangle with sides of 11.7 cm and 175 mm is circumscribed by a circle. What is its length? Calculate the area of the circle described around the rectangle. - Triangular prism Calculate the volume and surface of the triangular prism ABCDEF whose base is an isosceles triangle. The base's height is 16 cm, the leg 10 cm, the base height vc = 6 cm. The prism height is 9 cm. - Thales Thales is 1 m from the hole. His eyes are 150 cm above the ground and he looks into the hole with a diameter of 120 cm as shown. Calculate the depth of the hole. - Display case Place a glass shelf at a height of 1 m from the bottom of the display case in the cabinet. How long a platter can we place at this height? The display case is a right triangle with legs of 2 m and 2.5 m. - Square pyramid Calculate the volume of a pyramid with a square base and a side 5 cm long, where the side face makes an angle of 60 degrees with the base. - Hexagon cut pyramid Calculate the volume of a regular 6-sided cut pyramid if the bottom edge is 30 cm, the top edge is 12 cm, and the side edge length is 41 cm. - Juice box The juice box has a volume of 200 ml; its base is an isosceles triangle with sides a = 4.5 cm and a height of 3.4 cm. How tall is the box? - Angle of deviation The surface of the rotating cone is 30 cm² (including the circular base); its lateral surface area is 20 cm². Calculate the deviation of the side of this cone from the plane of the base. - Hexagonal prism The base of the prism is a regular hexagon consisting of six triangles with side a = 12 cm and height va = 10.4 cm. The prism height is 5 cm. Calculate the volume and surface of the prism! - Children pool The bottom of the children's pool is a regular hexagon with side a = 60 cm. The distance of opposing sides is 104 cm, and the height of the pool is 45 cm. a) How many liters of water can fit into the pool? b) The pool is made of a double layer of plastic film - Isosceles trapezoid What is the height of an isosceles trapezoid whose bases have lengths of 11 cm and 8 cm and whose legs measure 2.5 cm? - Triangular prism Calculate the surface of a triangular prism 10 cm high, the base of which is a triangle with sides 6 cm, 8 cm and 8 cm.
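A worked solution for the trapezoid problem at the top of this page. Write the bases as $a = 5k$ and $c = 3k$; in an isosceles trapezoid each leg's horizontal projection is $(a - c)/2 = k$, so by the Pythagorean theorem

$$k^2 + 4^2 = 6^2 \quad\Rightarrow\quad k = \sqrt{20} = 2\sqrt{5}\ \mathrm{cm},$$

and the area is

$$S = \frac{a + c}{2}\, h = 4k \cdot 4 = 16k = 32\sqrt{5} \approx 71.55\ \mathrm{cm}^2.$$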
https://www.hackmath.net/en/word-math-problems/triangle?page_num=13
MATH - HELP ME PLEASE!!!! A steel plate has the form of one-fourth of a circle with a radius of 42 centimeters. Two two-centimeter holes are to be drilled in the plate, positioned as shown in the figure on the website. Find the coordinates of the center of - Physics A ball and a thin plate are made from different materials and have the same initial temperature. The ball does not fit through a hole in the plate, because the diameter of the ball is slightly larger than the diameter of the hole. - math Calculate the volume of a steel bar which is 8 cm long and 3.5 cm in diameter - science A cylinder of diameter 1 cm at 30 degrees Celsius is to be slid into a steel plate. The hole has a diameter of 0.9997 cm at 30 degrees Celsius. To what temperature must the plate be heated? For steel - physics Comic-book superheroes are sometimes able to punch holes through steel walls. (a) If the ultimate shear strength of steel is taken to be 3.00 x 10^9 Pa, what force is required to punch through a steel plate 1.90 cm thick? Assume - PHYSICS CENTER OF MASS A thin rectangular plate of uniform areal density σ = 2.79 kg/m2 has a length of 37.0 cm and a width of 23.0 cm. The lower left hand corner is located at the origin, (x,y) = (0,0), and the length is along the x-axis. (a) There is a - math A paper cone has a base diameter of 8 cm and a height of 3 cm. a. Calculate the volume of the cone in terms of pi. b. Calculate the curved surface area of the cone - Chemistry (urgent) The gold foil Rutherford used in his scattering experiment had a thickness of approximately 4×10^−3 mm. If a single gold atom has a diameter of 2.9 x 10^-8 cm, how many atoms thick was Rutherford's foil? Express your answer - Physics A 20 mm diameter rivet is used to fasten two 25 mm thick plates. If the shearing stress of the rivet is 80 MPa, what tensile force is applied to each plate to shear the rivet? - Physics 4B To make a secure fit, rivets that are larger than the rivet hole are often used and the rivet is cooled (usually in dry ice) before it is placed in the hole. A steel rivet 1.873 cm in diameter is to be placed in a hole 1.871 cm in - Math A rectangular prism has a length of 8 cm, a width of 3 cm and a height of 5 cm. What are the dimensions of the horizontal cross section? A. 8 cm by 5 cm B. 8 cm by 3 cm C. 5 cm by 3 cm D. 8 cm by 8 cm - physics A projectile, of mass 20 g, traveling at 350 m/s, strikes a steel plate at an angle of 30 degrees with the plane of the plate. It ricochets off at the same angle, at a speed of 320 m/s. What is the magnitude of the impulse that the
https://www.jiskha.com/questions/929377/a-hole-3cm-in-diameter-is-to-be-punched-out-of-a-steel-plate-8cm-thick-the-shear-stress
A cube is inscribed in a cube. Determine its volume if the edge of the outer cube is 10 cm long. Related math problems and questions: - Cube into cylinder If we dip a wooden cube into a barrel with a 40 cm radius, the water rises by 10 cm. What is the size of the cube's edge? - Two rectangular boxes Two rectangular boxes with dimensions of 5 cm, 8 cm, 10 cm, and 5 cm, 12 cm, 1 dm are to be replaced by a single cube-shaped box of the same volume. Calculate its surface. - Pool The swimming pool is 10 m wide, 8 m long, and 153 cm deep. How many hectoliters of water are in it if the water is 30 cm below its upper edge? - For thinkings A glass cube is dipped into an aquarium, which has a length of 25 cm, a width of 20 cm and a height of 30 cm. The aquarium water rises by 2 cm. a) What is the volume of the cube? b) How many centimeters does its edge measure? - Cylinder in cube Into a paper box in the shape of a cube with an edge of 10 cm is placed a can in the shape of a cylinder with a height of 10 cm, touching all the walls of the cube. What % of the volume of the cube does the can take up? - Cube in a sphere A cube is inscribed in a sphere with a volume of 7253 cm³. Determine the length of the edges of the cube. - Cube in sphere A cube with an edge of 8 cm is inscribed in a sphere. Find the sphere's radius. - Three cubes Two cube-shaped boxes with edges a = 70 cm and b = 90 cm must be replaced by one cube-shaped box. What will be its edge? - Inscribed sphere How many % of the volume of a cube whose edge is 6 meters long is the volume of a sphere inscribed in that cube? - Cube surface2volume Calculate the volume of the cube if its surface is 150 cm². - Special body Above each wall of a cube with an edge a = 30 cm, a regular quadrilateral pyramid with a height of 15 cm is constructed. Find the volume of the resulting body. - Cuboids in cube How many cuboids with dimensions of 6 cm, 8 cm and 12 cm can fit into a cube with side 96 centimeters? - Cube walls Find the volume and the surface area of the cube if the area of one of its walls is 40 cm². - Inscribed sphere How many percent of the cube's volume does the sphere inscribed in it take? - Prism Calculate the surface area and volume of a prism with a body height h = 10 cm, whose base has the shape of a rhomboid with sides a = 5.8 cm and b = 3 cm, and the distance between its two longer sides is w = 2.4 cm. - Cube corners A small cube with an edge length of 2 cm was cut from each corner of a large cube with an edge length of 10 cm. How many cm³ of the body were left from the big cube after cutting off the small cubes? - Prism height What is the height of a prism with a base of a right triangle with legs of 6 cm and 9 cm? The hypotenuse is 10.8 cm long. The volume of the prism is 58 cm³. Calculate its surface.
https://www.hackmath.net/en/math-problem/6509
How many 12-centimeter cubes fit into a block (cuboid) measuring 6 dm, 8.4 dm and 4.8 dm? (A worked computation is given after the related problems below.) - Water flow 2 How many litres of water will flow in 7 minutes from a cylindrical pipe 1 cm in diameter, if the water flows at a speed of 30 km per hour? - Rectangular prism If I have a rectangular prism with a length of 1,000 cm, a width of 30 cm and a height of 50 cm, what is the volume? - Rain It rains at night. 60 liters of water fall on each 1 m² of a lake. By how many cm will the lake level rise? - A residential A residential colony has a population of 5400 and 60 litres of water is required per person per day. For the effective utilization of rain water, they constructed a water reservoir measuring 48 m × 27 m × 25 m to collect the rain water. For how many days, the - Surface of cubes Peter molded a cuboid of 2 cm, 4 cm, 9 cm of plasticine. Then he split the plasticine into two parts in a ratio of 1:8 and made a cube from each part. In what ratio are the surfaces of these cubes? - How much How much money will we pay for 20 planks 4 m long, 15 cm wide and 26 mm thick, when 1 m³ of wood costs 4500 Kč? - Cube edges If the edge length of the cube increases by 50%, how does the volume of this cube increase? - Cube in a sphere A cube is inscribed in a sphere with volume 5951 cm³. Determine the length of the edges of the cube. - Axial section The axial section of a cone is an equilateral triangle with area 208 dm². Calculate the volume of the cone. - Cuboid A cuboid with edge a = 23 cm and body diagonal u = 41 cm has volume V = 13248 cm³. Calculate the length of the other edges. - Cuboid The cuboid has a surface area of 7086 cm²; the lengths of its edges are in the ratio 4:2:1. Calculate the volume of the cuboid. - Pool The swimming pool is 10 m wide and 22 m long and 191 cm deep. How many hectoliters of water are in it, if the water is 9 cm below its upper edge? - Cone A2V The lateral surface of a cone unrolled into a plane is a circular sector with a central angle of 126° and an area of 415 dm². Calculate the volume of the cone. - Cubes One cube is inscribed in a sphere and the other is circumscribed about it. Calculate the difference of the volumes of the cubes, if the difference of their surfaces is 254 cm². - Transforming cuboid A cuboid with dimensions 10 cm, 17 cm and 17 cm is converted into a cube with the same volume. What is its edge length? - Tereza The cube has a base area of 64 mm². Calculate the edge length, volume and surface area of the cube. - Sphere The surface of the sphere is 2820 cm², and its weight is 71 kg. What is its density? - Cylinders The lateral surfaces of two cylinders unroll to the same rectangle of 12 mm × 19 mm. Which cylinder has the larger volume, and by how much? - Tanks A fire tank has a cuboid shape with a rectangular floor measuring 13.7 m × 9.8 m. The water depth is 2.4 m. Water was pumped from the tank into barrels with a capacity of 2.7 hl. How many barrels were used, if the water level in the tank fell by 5 cm?
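A worked computation for the headline problem: 12 cm = 1.2 dm divides each dimension evenly, so

$$\frac{6}{1.2} \times \frac{8.4}{1.2} \times \frac{4.8}{1.2} = 5 \times 7 \times 4 = 140\ \text{cubes}.$$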
https://www.hackmath.net/en/math-problems/body-volume
A 68-centimetre-long rope is used to make a rhombus on the ground. The distance between one pair of opposite corners is 16 centimetres; what is the distance between the other two corners? (A worked solution is given after the related problems below.) - Hexagon cut pyramid Calculate the volume of a regular 6-sided cut pyramid if the bottom edge is 30 cm, the top edge is 12 cm, and the side edge length is 41 cm. - Right triangle from axes A line segment has its ends on the coordinate axes and forms with them a triangle of area equal to 36 sq. units. The segment passes through the point (5, 2). What is the slope of the line segment? - ABCD AC = 40 cm, angle DAB = 38°, angle DCB = 58°, angle DBC = 90°, DB is perpendicular to AC; find BD and AD - Solid cuboid A solid cuboid has a volume of 40 cm³. The cuboid has a total surface area of 100 cm squared. One edge of the cuboid has length 2 cm. Find the length of a diagonal of the cuboid. Give your answer correct to 3 sig. fig. - Area of iso-trap Find the area of an isosceles trapezoid, if the lengths of its bases are 16 cm and 30 cm, and the diagonals are perpendicular to each other. - Diagonals A diagonal of a rhombus is 20 cm long. If its side is 26 cm, find the length of the other diagonal. - Diagonal The rectangular trapezoid ABCD, whose arm AD is perpendicular to the bases AB and CD, has an area of 15 cm². The bases have lengths AB = 6 cm, CD = 4 cm. Calculate the length of the diagonal AC. - Embankment The perpendicular cross-section of the embankment around the lake has the shape of an isosceles trapezoid. Calculate the perpendicular cross-section, where the bank is 4 m high, the upper width is 7 m, and the legs are 10 m long. - How far From the top of a lighthouse 145 ft above sea level, the angle of depression of a boat is 29°. How far is the boat from the lighthouse? - Equation of circle 2 Find the equation of a circle which touches the axis of y at a distance 4 from the origin and cuts off an intercept of length 6 on the axis x. - Diamond diagonals Calculate the lengths of the diamond's diagonals if the diamond's area is 156 cm square and the side length is 13 cm. - Rhombus The rhombus has diagonal lengths of 4.2 cm and 3.4 cm. Calculate the length of the sides of the rhombus and its height - Lighthouse A man, 180 cm tall, walks along the seafront directly toward the lighthouse. The man's shadow caused by the beacon light is initially 5.4 meters long. When the man approaches the lighthouse by 90 meters, his shadow is shorter by 3 meters. How tall is the lighthouse? - A bridge A bridge over a river is in the shape of the arc of a circle with each base of the bridge at the river's edge. At the center of the river, the bridge is 10 feet above the water. At 27 feet from the edge of the river, the bridge is 9 feet above the water. H - Hypotenuse - RT A triangle has a hypotenuse of 55 and an altitude to the hypotenuse of 33. What is the area of the triangle? - Road The angle of a straight road is approximately 12 degrees. Determine the percentage grade of this road. - Surface area of the top A cylinder is three times as high as it is wide. The length of the cylinder's diagonal is 20 cm. Find the surface area of the top of the cylinder. - Two forces The two forces F1 = 580 N and F2 = 630 N enclose an angle of 59 degrees. Calculate their resultant force F. - Windbreak A tree broke in a windbreak at a height of 3 meters. Its peak fell 4.5 m from the tree. How tall was the tree?
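A worked solution for the rhombus problem at the top of this page. Each side is $68/4 = 17$ cm; the diagonals of a rhombus bisect each other at right angles, so with half-diagonals $e/2 = 8$ cm and $f/2$,

$$\left(\frac{f}{2}\right)^2 = 17^2 - 8^2 = 225 \quad\Rightarrow\quad \frac{f}{2} = 15\ \mathrm{cm},\qquad f = 30\ \mathrm{cm}.$$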
https://www.hackmath.net/en/math-problems/right-triangle?page_num=25
What was the original edge length of the cube if, after cutting off 39 small cubes with edge length 2 dm, 200 dm3 of it remained?
To solve this example you need the following knowledge from mathematics:
Next similar examples:
- For thinkings A glass cube is dipped into an aquarium which has a length of 25 cm, width of 20 cm and height of 30 cm. The aquarium water rises by 2 cm. a) What is the volume of the cube? b) How many centimeters does its edge measure?
- Granite cube What is the weight in kg of a granite cube with an edge of 0.5 m if 1 dm3 of granite weighs 2600 g?
- Cube The sum of the lengths of the cube's edges is 69 cm. What are its surface area and volume?
- Cube basics How long is the edge of a cube with volume 23 m3?
- Cube and water How many liters of water can fit into a cube with an edge length of 0.11 m?
- Cube edges If the edge length of a cube increases by 50%, how does the volume of the cube increase?
- Two boxes-cubes Two cube-shaped boxes with edges a = 38 cm and b = 81 cm are to be replaced by one cube-shaped box (of the same overall volume). How long will its edge be?
- Cylinder - area The diameter of a cylinder is one-third of the height of the cylinder. Calculate the surface of the cylinder if its volume is 2 m3.
- Cube The cube weighs 30 kg. How much does a cube of the same material weigh if its dimensions are 2 times smaller?
- Cuboids in cube How many cuboids with dimensions of 6 cm, 8 cm and 12 cm can fit into a cube with side 96 centimeters?
- Hollow sphere A hollow steel sphere floats on the water, submerged to half its volume. Determine the outer radius of the sphere and the wall thickness, if you know that the weight of the sphere is 0.5 kg and the density of steel is 7850 kg/m3.
- Equilateral cylinder An equilateral cylinder (height = base diameter; h = 2r) has a volume of V = 168 cm3. Calculate the surface area of the cylinder.
- Iron sphere An iron sphere has weight 100 kg and density ρ = 7600 kg/m3. Calculate the volume, surface and diameter of the sphere.
- Surface of the cylinder Calculate the surface area of a cylinder when its volume is 45 l and the perimeter of its base is three times its height.
- Cube in ball A cube is inscribed in a sphere of radius 241 cm. What percentage of the volume of the sphere is the volume of the cube?
- Sphere The surface of a sphere is 2820 cm2, its weight is 71 kg. What is its density?
- Hollow sphere Calculate the weight of a hollow tungsten sphere (density 19.3 g/cm3) if the inner diameter is 14 cm and the wall thickness is 3 mm.
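A worked check of the lead cube problem (illustrative, assuming the 39 small cubes are removed entirely):

```python
remaining = 200              # dm^3 of the cube left after cutting
n_small, small_edge = 39, 2  # 39 small cubes of edge 2 dm were cut away

removed = n_small * small_edge**3         # 39 * 8 = 312 dm^3
original_volume = remaining + removed     # 512 dm^3
edge = round(original_volume ** (1 / 3))  # cube root -> 8 dm

print(f"original edge = {edge} dm")
```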
https://www.hackmath.net/en/example/2306
Civil Engineering Interview Questions and Answers
Q.1 What are the steps involved in building construction?
Ans: There are different steps involved in building construction, such as:
- Concreting
- Masonry work
- Plastering work
- Flooring work
- Formwork
- Steel cutting and bending
Q.2 How do you measure the volume of concrete?
Ans: The volume of concrete is calculated by multiplying its length, width, and thickness together. For example: 1 m × 1 m × 1 m = 1 m³ of concrete.
Q.3 Why is concrete cover provided to reinforcement?
Ans: Concrete cover is required to protect the rebar against corrosion and to provide resistance against fire.
Q.4 How do you check levels on a construction site?
Ans: I will check levels on the construction site with a spirit level, a dumpy level, or a leveling pipe.
Q.5 What is the accuracy of the dumpy level, or the minimum reading we can take?
Ans: With the help of a dumpy level we can take readings accurate to 5 mm.
Q.6 How do you calculate the weight of a 12 m long, 10 mm dia. steel bar on site?
Ans: It's simple: by multiplying the length of the steel bar by its unit weight (unit wt of 10 mm = 10²/162 ≈ 0.62 kg/m). Weight of steel = 0.62 × 12 ≈ 7.4 kg.
Q.7 Which equation is used for calculating the unit weight of a steel bar?
Ans: (D²/162), giving kg/m with D in mm.
Q.8 What is the size of a concrete cube?
Ans: 15 cm × 15 cm × 15 cm
Q.9 What do you do if a concrete cube fails the 28-day compressive strength test?
Ans: If a concrete cube fails the strength test, I will conduct a core test on the concrete and send a report to higher authorities.
Q.10 What is the mix ratio for M-20 grade of concrete?
Ans: 1 : 1.5 : 3
Q.11 What is the unit weight of 12 mm steel bars?
Ans: 0.89 kg/m
Q.12 What is the density of steel?
Ans: 7850 kg/m³
Q.13 In Fe-415 steel grade, 415 indicates the ___________ of steel.
Ans: Yield strength (415 N/mm²)
Q.14 What is the volume of a 50 kg bag of cement?
Ans: 0.035 m³
Q.15 In a residential building, what is the average value of stair width?
Ans: 900 mm
Q.16 The slope of a stair should not exceed:
Ans: 40º
Q.17 Minimum diameter of steel in a column:
Ans: 12 mm
Q.18 Standard size of a brick?
Ans: 19 cm × 9 cm × 9 cm
Q.19 What is the unit weight of RCC?
Ans: 2500 kg/m³
Q.20 One acre = ____________ sq. ft.
Ans: 43560 sq. ft.
Q.21 What is the full form of UTM?
Ans: Universal Testing Machine
Q.22 Cement expires after?
Ans: 3 months
Q.23 One square meter = _________ sq. ft?
Ans: 10.76 sq. ft
Q.24 What is the unit weight of 25 mm steel bars?
Ans: 3.85 kg/m (25²/162 ≈ 3.86 kg/m)
Q.25 One hectare = _______ acres
Ans: 2.47 acres
Q.26 One gallon = ________ liters
Ans: 3.78 liters
Q.27 One kilonewton is equal to _________ kilograms
Ans: 101.97 kg (kilogram-force)
Q.28 One tonne is equal to _________ kilograms
Ans: 1000 kg
Q.29 The maximum free fall of concrete allowed is ______?
Ans: 1.5 m
Q.30 Instrument used for level work on a construction site?
Ans: Dumpy level
Q.31 Minimum bars in a circular column should be _______
Ans: 6 nos.
Q.32 What is the full form of AAC?
Ans: Autoclaved Aerated Concrete
Q.33 What is the full form of NDT?
Ans: Non-Destructive Test
Q.34 What is the full form of JCB?
Ans: Joseph Cyril Bamford
Q.35 Which test is conducted to determine the bearing capacity of soil?
Ans: Plate Load Test
Q.36 The ring and ball test is conducted on which construction material?
Ans: Bitumen
Q.37 Minimum hook length as per IS Code?
Ans: 75 mm
Q.38 What is the extra length in bent-up bars?
Ans: 0.45 X
Q.39 What is the least count of a dumpy level?
Ans: 5 mm
Q.40 What is the full form of EGL?
Ans: Existing Ground Level
Q.41 A first-class brick should not absorb water more than?
Ans: 20 %
Q.42 Number of bricks used in 1 cubic meter of brickwork?
Ans: 500 nos.
Q.43 The normal consistency of Portland cement?
Ans: 25 %
Q.44 The expansion in Portland cement is tested by…
Ans: Soundness Test
Q.45 According to IS Code, the full strength of concrete is achieved after?
Ans: 28 days
Q.46 What is the volume of 1 bag of cement?
Ans: 0.035 m³
Q.47 Minimum grade of concrete used for RCC?
Ans: M-20
Q.48 Cement expires after?
Ans: 3 months
Q.49 What is the full form of DPR?
Ans: Detailed Project Report
Q.50 What are the initial and final setting times for cement?
Ans: Initial setting time: not less than 30 minutes; final setting time: not more than 600 minutes.
Learn More
- Concrete tests: slump test, compression test, split tensile test, soundness test, etc.
- Soil tests: core cutter test, compaction test, sand replacement test, triaxial test, consolidation test, etc.
- Bitumen tests: ductility test, softening point test, gravity test, penetration test, etc.
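As a cross-check of Q.6, Q.7, Q.11 and Q.24, a small Python sketch of the D²/162 thumb rule (illustrative only; site practice commonly rounds the results):

```python
def unit_weight(d_mm: float) -> float:
    """Unit weight of a steel bar in kg/m from the D^2/162 rule (Q.7)."""
    return d_mm**2 / 162

for d in (10, 12, 25):
    print(f"{d} mm bar: {unit_weight(d):.2f} kg/m")  # 0.62, 0.89, 3.86 kg/m

# Q.6: weight of a 12 m long, 10 mm dia. bar
print(f"12 m of 10 mm bar: {unit_weight(10) * 12:.1f} kg")  # ~7.4 kg
```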
https://rajajunaidiqbal.com/civil-engineering-interview-questions-and-answers/
Oak timber is rectangular shaped with dimensions of 2 m, 30 cm and 15 cm. Its weight is 70 kg. Calculate the weight of 1 dm³ of the timber.
Tips to related online calculators
Do you know the volume and unit volume, and want to convert volume units? Tip: Our Density units converter will help you with the conversion of density units. Do you want to convert mass units?
You need to know the following knowledge to solve this word math problem:
Related math problems and questions:
- The body The body has dimensions of 2 m, 2 dm and 10 cm. It weighs 28 kg. What is its density?
- Brick wall What is the weight of a solid brick wall that is 30 cm wide, 4 m long and 2 m high? The density of the brick is 1500 kg per cubic meter.
- Cuboid 5 Calculate the mass of a cuboid with dimensions of 12 cm, 0.8 dm and 100 mm, made from spruce wood (density = 550 kg/m3).
- Wood lumber Wooden lumber is 4 m long and has a square cross-section with side 15 cm. Calculate: a) the volume of the lumber b) the weight of the lumber if 1 m3 weighs 790 kg
- Wood cuboid What is the weight of a wooden cuboid 15 cm, 20 cm, 3 m if 1 m3 of the wood weighs 800 kg?
- Density of the concrete Find the density of the concrete of a cuboid-shaped column with dimensions of 20 cm × 20 cm × 2 m, if the weight of the column is 200 kg.
- Canister Gasoline is stored in a cuboid canister with dimensions 44.5 cm, 30 cm, 16 cm. What is the total weight of a full canister if one cubic meter of gasoline weighs 710 kg and the empty canister weighs 1.5 kg?
- Paper box Calculate whether 11 dm² of paper is sufficient for gluing a box without a lid with bottom dimensions of 2 dm and 15 cm, and 12 cm high. Write the result as: 0 = No, 1 = Yes
- The glass 1 m3 of glass weighs 2600 kg. Calculate the weight of a glass glazing panel with dimensions of 2.5 m and 3.8 m if the thickness of the glass is 0.8 cm.
- Glass door What is the weight of a glass door panel 5 mm thick, 2.1 meters high and 65 cm wide, if 1 cubic dm of glass weighs 2.5 kg?
- Air mass What is the weight of the air in a classroom with dimensions 10 m × 10 m × 2.7 m? The air density is 1.293 kg/m3.
- Oak trunk Calculate in tonnes the approximate weight of a cylindrical oak trunk with a diameter of 66 cm and a length of 4 m, knowing that the density of the wood was 800 kg/m³.
- Bricks Openings in perforated bricks occupy 10% and a brick has dimensions 30 cm, 15 cm and 7.5 cm. Calculate a) the weight of a perforated brick, if you know that the density of the full brick material is ρ = 1800 kg/m3 (1.8 kg/dm3) b) the number of perforated
- Liter of gold What weight does 1 dm3 of gold have? The density of gold is 19,300 kg/m3.
- Copper wire What is the weight in kg of a copper wire 200 m long with a diameter of 0.6 cm, if the density of copper is 8.8 g/cm³?
- Steel tube The steel tube has an inner diameter of 4 cm and an outer diameter of 4.8 cm. The density of the steel is 7800 kg/m3. Calculate its length if it weighs 15 kg.
- Iron pole What is the mass of a pole shaped like a regular quadrilateral prism with a length of 1 m and a cross-sectional side length of a = 4.5 cm, made from iron with density ρ = 7800 kg/m³?
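A worked check of the oak timber problem (dimensions converted to decimetres so the volume comes out directly in dm³):

```python
length, width, height = 20, 3, 1.5  # 2 m, 30 cm, 15 cm expressed in dm
mass = 70                           # kg, weight of the whole timber

volume = length * width * height    # 90 dm^3
per_dm3 = mass / volume             # ~0.778 kg per dm^3

print(f"volume = {volume} dm^3, 1 dm^3 of timber weighs {per_dm3:.3f} kg")
```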
https://www.hackmath.net/en/math-problem/4889
Calculate the volume and surface area of a regular tetrahedral pyramid whose height is b cm and whose base edge length is 6 cm.
- Parallelogram The perimeter of the parallelogram is 417 cm. The length of one side is 1.7 times the length of the shorter side. What are the lengths of the sides of the parallelogram?
- Copper plate A copper plate 3.2 m long and 50 cm wide weighs 55.42 kg. How thick is the plate, if 1 m3 of copper weighs about 8700 kg?
- Two diggers Two diggers should dig a ditch. If each of them worked just one-third of the time that the other digger needs, they would dig up 13/18 of the ditch together. Find the ratio of the performances of these two diggers.
- Hurry - rush At an average speed of 7 km/h I get from the school to the bus stop in 30 minutes. How fast do I need to go if I need to get to the bus stop in 21 minutes?
- Bomber A bomber flies at an altitude of 10 km at 600 km/h. At what horizontal distance from the target must the pilot drop the bomb to hit the target? Ignore air resistance and take the gravitational acceleration g = 9.81 m/s2.
- A particle A particle moves in a straight line so that its velocity (m/s) at time t seconds is given by v(t) = 3t² − 4t − 4, t > 0. Initially the particle is 8 meters to the right of a fixed origin. After how many seconds is the particle at the origin?
- Lateral surface area The ratio of the area of the base of a rotary cone to its lateral surface area is 3 : 5. Calculate the surface area and volume of the cone, if its height v = 4 cm.
- Digging companies Company A would dig a pit in 12 days, company B in 15 days, company C in 20 days and company D in 24 days. Companies C and D began the work together, but after three days the other two companies joined them. How long did it take to dig the pit?
- Glass Peter broke a window glass with dimensions 110 cm and 90 cm. 1 square meter of glass costs 11 USD. How much money is needed to pay for the new glass?
- Working alone Tom and Chandri are doing household chores. Chandri can do the work twice as fast as Tom. If they work together, they can finish the work in 5 hours. How long does it take Tom working alone to do the same work?
- Reservoir + water A reservoir completely filled with water weighs 12 kg. After pouring off three-quarters of the amount of water, it weighs 3 kg. Calculate the weight and volume of the reservoir.
- Vertical rod A vertical one-meter-long rod casts a shadow 150 cm long. Calculate the height of a column whose shadow is 36 m long at the same time.
- Water level Water flows into a cuboid-shaped pool with bottom dimensions 2 m and 3.5 m at a rate of 50 liters per minute. How long will it take for the water to reach a level of 50 cm?
- Bonbons Create a mixture of 50 kg of candy at a price of 700 Kč. The candies have prices: 820 Kč, 660 Kč and 580 Kč. Use the cross rule.
- In and out An empty tank is filled in 12 minutes and emptied in 16 minutes. How long does it take to fill if we forgot to close the drain? The tank holds 1000 l.
- Apartments An apartment on the first floor was 10% more expensive than the same apartment on the second floor. The difference was 105 Kč annually. Calculate the annual rent of the apartment on the first floor and of the apartment on the second floor.
- Cuboid The volume of a cuboid is 245 cm3. Each edge length of the cuboid can be expressed by an integer greater than 1 cm. What is the surface area of the cuboid?
- Aquarium Find how many dm2 of glass we need to make a block-shaped aquarium (the top is not covered) if the dimensions of the aquarium are to be: width 50 cm, length 120 cm, and height 8.5 dm.
- Map On a tourist map at a scale of 1 : 50000, the distance between two points along a straight road is 3.7 cm. How long does it take to travel this distance on a bike at 30 km/h? Express the time in minutes.
Do you have an interesting mathematical word problem that you can't solve? Submit a math problem, and we can try to solve it.
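A worked sketch of the map problem that closes the list (illustrative Python, not from the original page):

```python
map_cm, scale = 3.7, 50_000   # 3.7 cm on a 1:50000 map
speed_kmh = 30                # cycling speed

real_km = map_cm * scale / 100_000   # cm -> km: 1.85 km in reality
time_min = real_km / speed_kmh * 60  # 3.7 minutes

print(f"distance = {real_km} km, time = {time_min:.1f} min")
```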
https://www.hackmath.net/en/word-math-problems/units?page_num=114
A circle has a diameter of 17 cm, an upper chord |CD| = 10.2 cm and a bottom chord |EF| = 7.5 cm. H and G are the midpoints of the chords, so |EH| = 1/2 |EF| and |CG| = 1/2 |CD|. Determine the distance between G and H, given that CD ∥ EF (parallel).
To solve this example you need the following knowledge from mathematics:
Next similar examples:
- Common chord Two circles with radii 17 cm and 20 cm intersect at two points. Their common chord is 27 cm long. What is the distance between the centers of these circles?
- Regular octagon Draw the regular octagon ABCDEFGH inscribed in the circle k (S; r = 2.5 cm). Select point S' so that |SS'| = 4.5 cm. Draw S(S'): ABCDEFGH → A'B'C'D'E'F'G'H'.
- Chord Given a circle k(r = 6 cm) and points A, B on k such that |AB| = 8 cm. Calculate the distance of the center S of the circle from the midpoint C of the segment AB.
- The chord Calculate the length of a chord whose distance from the center of the circle (S, 6 cm) equals 3 cm.
- Chord 5 Given the circle k(S; 5 cm). Its chord MN is 3 cm away from the center of the circle. Calculate its length.
- Chord 3 What is the radius of a circle in which a chord 10 cm long lies 2/3 of the radius away from the center?
- Chord distance In the circle k(S, 6 cm), calculate the distance of a chord from the center S when the length of the chord is t = 10 cm.
- Ace The length of segment AB is 24 cm and points M and N divide it into thirds. Calculate the circumference and area of this shape.
- Circular lawn Around a circular lawn is a 2 m wide sidewalk. At the outer edge of the sidewalk is a curb whose width is 2 m. The curb and the inner side of the sidewalk form concentric circles. Calculate the area of the circular lawn and round the result to 1
- Slope Find the slope of the line: x = t and y = 1 + t.
- Equations Solve the following system of equations: 6(x+7)+4(y-5)=12; 2(x+y)-3(-2x+4y)=-44
- The ball The ball was discounted by 10 percent and then again by 30 percent. What percentage of the original price is it now?
- Percentages 52 is what percent of 93?
- Three brothers The three brothers are 42 years old in total. Jan is five years younger than Peter, and Peter is 2 years younger than Michael. How old is each of them?
- Line Is it true that lines that do not intersect are parallel?
- Mushrooms Eva and Jane collected 114 mushrooms together. Eva found twice as many as Jane. How many mushrooms did each of them find?
- Liters of milk A cylinder-shaped container contains 80 liters of milk. The milk level is 45 cm. How much milk will be in the container if the level rises to a height of 72 cm?
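A worked sketch of the chord problem above; it assumes the "upper" and "bottom" chords lie on opposite sides of the centre, as the wording suggests:

```python
import math

r = 17 / 2          # radius from the 17 cm diameter
cd, ef = 10.2, 7.5  # chord lengths

# distance of each chord from the centre (perpendicular through its midpoint)
d_cd = math.sqrt(r**2 - (cd / 2)**2)  # 6.8 cm
d_ef = math.sqrt(r**2 - (ef / 2)**2)  # ~7.63 cm

# chords on opposite sides of the centre: midpoint distance is the sum
gh = d_cd + d_ef
print(f"|GH| = {gh:.2f} cm")          # ~14.43 cm
```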
https://www.hackmath.net/en/example/2628
Meteorologists study climate processes, measure and predict weather patterns, and provide consultancy services to a variety of weather information users. They work out models for weather forecasting, develop instruments to collect meteorological data, and compile statistics and databases.
Other titles
The following job titles also refer to meteorologist:
- atmospheric research scientist
- meteorologists
- atmospheric science researcher
- atmospheric researcher
- atmospheric analyst
- meteorology research analyst
- atmospheric scientist
- marine meteorologist
- meteorology research scientist
- meteorology analyst
- meteorology researcher
- atmospheric research analyst
- meteorology science researcher
- meteorology scientist
Minimum qualifications
A bachelor's degree is generally required to work as a meteorologist. However, this requirement may differ in some countries.
ISCO skill level
ISCO skill level is defined as a function of the complexity and range of tasks and duties to be performed in an occupation. It is measured on a scale from 1 to 4, with 1 the lowest level and 4 the highest, by considering:
- the nature of the work performed in an occupation in relation to the characteristic tasks and duties
- the level of formal education required for competent performance of the tasks and duties involved and
- the amount of informal on-the-job training and/or previous experience in a related occupation required for competent performance of these tasks and duties.
Meteorologist is a Skill level 4 occupation.
Meteorologist career path
Similar occupations
These occupations, although different, require much of the knowledge and many of the skills of a meteorologist:
- climatologist
- oceanographer
- weather forecaster
- seismologist
- geographer
Long term prospects
These occupations require some of the skills and knowledge of a meteorologist. They also require other skills and knowledge, but at a higher ISCO skill level, meaning these occupations are accessible from a position of meteorologist with significant experience and/or extensive training.
Essential knowledge and skills
Essential knowledge
This knowledge should be acquired through learning to fulfill the role of meteorologist.
Mathematics: Mathematics is the study of topics such as quantity, structure, space, and change. It involves the identification of patterns and formulating new conjectures based on them. Mathematicians strive to prove the truth or falsity of these conjectures. There are many fields of mathematics, some of which are widely used for practical applications.
Meteorology: The scientific field of study that examines the atmosphere, atmospheric phenomena, and atmospheric effects on our weather.
Climatology: The scientific field of study that deals with researching average weather conditions over a specified period of time and how they affected nature on Earth.
Essential skills and competences
These skills are necessary for the role of meteorologist.
Use specialised computer models for weather forecasting: Make short-term and long-term weather forecasts applying physical and mathematical formulae; understand specialised computer modelling applications.
Use meteorological tools to forecast meteorological conditions: Use meteorological data and tools such as weather facsimile machines, weather charts and computer terminals, to anticipate weather conditions.
Execute analytical mathematical calculations: Apply mathematical methods and make use of calculation technologies in order to perform analyses and devise solutions to specific problems.
Apply statistical analysis techniques: Use models (descriptive or inferential statistics) and techniques (data mining or machine learning) for statistical analysis and ICT tools to analyse data, uncover correlations and forecast trends.
Perform scientific research: Gain, correct or improve knowledge about phenomena by using scientific methods and techniques, based on empirical or measurable observations.
Review meteorological forecast data: Revise estimated meteorological parameters; solve gaps between real-time conditions and estimated conditions.
Apply scientific methods: Apply scientific methods and techniques to investigate phenomena, by acquiring new knowledge or correcting and integrating previous knowledge.
Carry out meteorological research: Participate in research activities on weather-related conditions and phenomena; study the physical and chemical characteristics and processes of the atmosphere; present research results in scientific journals.
Optional knowledge and skills
Optional knowledge
This knowledge is sometimes, but not always, required for the role of meteorologist. However, mastering this knowledge allows you to have more opportunities for career development.
Geographic information systems: The tools involved in geographical mapping and positioning, such as GPS (global positioning systems), GIS (geographical information systems), and RS (remote sensing).
Oceanography: The scientific discipline that studies oceanic phenomena such as marine organisms, plate tectonics, and the geology of the ocean bottom.
Statistics: The study of statistical theory, methods and practices such as collection, organisation, analysis, interpretation and presentation of data. It deals with all aspects of data including the planning of data collection in terms of the design of surveys and experiments in order to forecast and plan work-related activities.
Scientific research methodology: The theoretical methodology used in scientific research involving doing background research, constructing a hypothesis, testing it, analysing data and concluding the results.
Optional skills and competences
These skills and competences are sometimes, but not always, required for the role of meteorologist. However, mastering these skills and competences allows you to have more opportunities for career development.
Operate remote sensing equipment: Set up and operate remote sensing equipment such as radars, telescopes, and aerial cameras in order to obtain information about Earth's surface and atmosphere.
Collect weather-related data: Gather data from satellites, radars, remote sensors, and weather stations in order to obtain information about weather conditions and phenomena.
Use geographic information systems: Work with computer data systems such as Geographic Information Systems (GIS).
Conduct research on climate processes: Conduct research on the characteristic events occurring in the atmosphere during the interactions and transformations of various atmospheric components and conditions.
Operate meteorological instruments: Operate equipment for measuring weather conditions, such as thermometers, anemometers, and rain gauges.
Develop models for weather forecast: Develop mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions.
Manage meteorological database: Develop and maintain meteorological databases. Add information after each new observation.
Study aerial photos: Use aerial photos to study phenomena on Earth's surface.
Calibrate electronic instruments: Correct and adjust the reliability of an electronic instrument by measuring output and comparing results with the data of a reference device or a set of standardised results. This is done at regular intervals which are set by the manufacturer, using calibration devices.
Write scientific papers: Present the hypothesis, findings, and conclusions of your scientific research in your field of expertise in a professional publication.
Assist scientific research: Assist engineers or scientists with conducting experiments, performing analysis, developing new products or processes, constructing theory, and quality control.
Create weather maps: Make graphic weather maps for specific areas containing information such as temperature, air pressure, and rain belts.
Write weather briefing: Present various information such as air pressure, temperature and humidity to customers in the form of a weather brief.
Design graphics: Apply a variety of visual techniques in order to design graphic material. Combine graphical elements to communicate concepts and ideas.
Design scientific equipment: Design new equipment or adapt existing equipment to aid scientists in gathering and analysing data and samples.
Present during live broadcasts: Present live on political, economic, cultural, social, international or sport events, or host a live broadcast program.
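As a toy illustration of the "execute analytical mathematical calculations" skill listed above, here is a hedged Python sketch of the Magnus dew-point approximation (the coefficients vary slightly between sources):

```python
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Approximate dew point in deg C via the Magnus formula."""
    a, b = 17.27, 237.7  # one common choice of Magnus coefficients
    gamma = math.log(rh_percent / 100) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

print(f"{dew_point_c(25, 60):.1f} deg C")  # ~16.7 deg C at 25 deg C / 60% RH
```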
https://jinn.careers/wiki/meteorologist/
In our everyday life we interact with various information media, which present us with facts and opinions, supported with some evidence and usually based on condensed information extracted from data. It is common to communicate such condensed information in a visual form – a static or animated, preferably interactive, visualisation. For example, when we watch familiar weather programs on TV, landscapes with cloud, rain and sun icons and numbers next to them quickly allow us to build a picture of the predicted weather pattern in a region. Playing sequences of such visualisations easily communicates the dynamics of the weather pattern, based on the large amount of data collected by many thousands of climate sensors and monitors scattered across the globe and on weather satellites. These pictures are fine when one watches the weather on Friday to plan what to do on Sunday – after all, if the patterns are wrong there are always alternative ways of enjoying a holiday. Professional decision making is a rather different scenario. It requires weather forecasts at a high level of granularity and precision, and in real time. Such requirements translate into requirements for high-volume data collection, processing, mining, modelling and communicating the models quickly to the decision makers. Further, the requirements translate into high-performance computing with integrated, efficient, interactive visualisation. From a practical point of view, if a weather pattern cannot be depicted fast enough, then it has no value. Recognising the power of the human visual perception system and pattern recognition skills adds another twist to the requirements – data manipulations need to be completed at least an order of magnitude faster than real time in order to combine them with a variety of highly interactive visualisations, allowing easy remapping of data attributes to the features of the visual metaphor used to present the data. In these few steps in the weather domain, we have specified some requirements for a visual data mining system.
Keywords: Data Mining, Association Rule, Chronic Fatigue Syndrome, Visual Representation, Visualisation Technique
https://rd.springer.com/chapter/10.1007/978-3-540-71080-6_1
Defect-free metal additive manufacturing
Additive manufacturing (AM) of metal products is achieved by selective laser melting (SLM) at the surface of a metal powder bed. SLM locally fuses the powder to construct thin object layers (~30 μm). The object then sinks in the bed and a wiper recoats the top of the object with an equally thin layer of fresh powder. The laser then adds another layer and the process is repeated until the object is fully formed. To fulfill the need for quality assurance without tedious post-process quality control, a prediction model that uses process monitoring data to uncover defects is developed in this use case. Even though the process is fast, sensing is the easy step of in-process monitoring. The difficulty lies in the development of algorithms to detect, predict, and ultimately prevent defects. Handling massive amounts of data in combination with real-time processing forms a technical challenge, as does the correct interpretation of measured data in predicting the presence of defects and their location. The ambition of CoE RAISE for the AM use case is to upgrade machine learning models that predict defects from shallow to deep models, and to upgrade training data to multiple modalities (sensor fusion) and by at least an order of magnitude in volume. Thereby, the error rate in keyhole porosity prediction is reduced by a factor of two or more and the robustness of the model for different manufacturing parameters is increased. This enables quality certification for porosity count based on build-time measurements only, eliminating a key barrier to increasing AM industrial output of high-value products. The data and model are analyzed to investigate the manufacturing parameter space more thoroughly, and to distill the model into a smaller, real-time capable model. The distilled model supports real-time control of the process, considerably reducing the defect rate.
Palabos
The Palabos library is a framework for general-purpose computational fluid dynamics (CFD), with a kernel based on the lattice Boltzmann (LB) method. It is used both as a research and an engineering tool: its programming interface is straightforward and makes it possible to set up fluid flow simulations with relative ease, or, if you are knowledgeable of the lattice Boltzmann method, to extend the library with your own models. Palabos stands for Parallel Lattice Boltzmann Solver. The library's native programming interface is written in C++. It has practically no external dependencies (only Posix and MPI) and is therefore extremely easy to deploy on various platforms. Additional programming interfaces are available for the Python and Java programming languages, which make it easier to rapidly prototype and develop CFD applications. There exists currently no graphical user interface; some amount of programming is therefore necessary in order to get an application running. (A generic illustration of the lattice Boltzmann method appears at the end of this section.)
Virtual Assay
A user-friendly software package to perform in silico drug trials in populations of human cardiac models, contributing to the uptake of in silico modelling and simulations in industry and regulatory paradigms, and demonstrating accurate and mechanistic predictions of drug-induced cardiac pro-arrhythmic toxicity.
ACUTE Lab
Sound engineering, more specifically acoustic and tactile engineering (ACUTE), is being driven forward by the Simulation and Data Lab (SDL) ACUTE in Iceland in collaboration with FZJ in RAISE.
Individual 3D spatial auditory displays for immersive virtual environments are an essential element of ACUTE. 3D sound technologies can provide accurate information about the relationship between a sound source and the surrounding environment, including the listener herself/himself. This information cannot be substituted by any other modality (e.g. visual or tactile). Nevertheless, today's spatial representation of audio tends to be simplistic and to have poor interaction capabilities, as multimodal systems are primarily focused on graphics processing and integrated with only basic audio solutions. This use case in RAISE aims to convey environmental information via acoustics using binaural (3D) sounds. Typically, binaural audio technologies rely on head-related transfer functions (HRTFs), specific digital filters that capture the acoustic effects of the human head. Obtaining personal HRTF data is only possible with expensive equipment and invasive recording procedures.
Urban Air Pollution Pilot Use Case
The vision of HiDALGO's urban air pollution application is to create cleaner air in cities by using high performance computing (HPC) and mathematical technologies. To this end, the project will provide policy makers and society with an easy-to-use computational tool as a service that accurately and quickly forecasts air pollution in cities at very high resolution. Furthermore, a traffic control system will be developed as well to minimize air pollution while considering traffic flow constraints. The main part of the project is an HPC framework for simulating the air flow in cities by taking into account real 3D geographical information of the city, applying highly accurate computational fluid dynamics (CFD) simulation on a highly resolved mesh (1-2 m resolution at street level) and using weather forecasts and reanalysis data as boundary conditions. Emission is computed from the weakly coupled traffic simulations and general emission data of other sources. For the demonstration area, the city of Győr, Hungary, a traffic monitoring sensor network with a plate recognition camera system will be developed. The monitoring system will be completed with affordable air quality sensors as well.
Migration Pilot Use Case
In the last few years, a huge number of people were forced to leave their homes. One of the major issues is forecasting where these displaced people will arrive, which would allow decision makers and NGOs to allocate humanitarian resources accordingly. To predict possible destinations of refugees coming from conflict regions, we have developed a simulation framework. This framework relies on agent-based simulations and makes use of real-world data from UNHCR, ACLED, and Bing Maps. Applying this simulation framework to three major African conflict regions, we obtain results which consistently predict more than 75% of the refugee destinations correctly after 12 days. In HiDALGO the main goal is to improve our agent-based simulation framework in terms of accuracy, resolution, clarity, and performance, and to incorporate a range of relevant phenomena in its computations. For instance, we are developing models that incorporate precipitation data from ECMWF, and plan to exploit telecommunications data from MOONSTAR to help validate our simulations. In addition, we are establishing new techniques to speed up the construction of our simulations (e.g., by automatically extracting and converting geographical data) and to establish better ways to visually explore our simulation output.
Our final goal in HiDALGO is to enable simulations on a large scale to accurately forecast where displaced people, coming from various conflict regions of the world, will eventually arrive to find safety. Our approach could assist in case of a global crisis in a number of crucial ways. Firstly, it could forecast refugee movements when a conflict erupts. Secondly, it could acquire approximate refugee population estimates in regions where existing data is missing or incomplete. Finally, it could investigate how border closures and other policy decisions are likely to affect the movements and destinations of refugees.
HYPERstreamHS
HYPERstreamHS inherits the core features of the HYPERstream routing scheme recently presented in the work of Piccolroaz et al. (2016), while improving on it by means of a dual-layer MPI framework and the inclusion of explicit modelling of streamflow alterations due to Human Systems (hence the HS suffix in the model's name). HYPERstream is a multi-scale streamflow routing method based on the Width Function Instantaneous Unit Hydrograph (WFIUH) approach; this approach has been specifically designed for reliably simulating the relevant horizontal hydrological fluxes, preserving the geomorphological dispersion of fluxes and thus performing well at different scales, from a single catchment to the meso-scale.
SHEMAT-Suite
SHEMAT-Suite is a finite-difference open-source code for simulating coupled flow, heat and species transport in porous media. The code, written in Fortran-95, originates from geoscientific research in the fields of geothermics and hydrogeology. It comprises: (1) a versatile handling of input and output, (2) a modular framework for subsurface parameter modeling, (3) a multi-level OpenMP parallelization, (4) parameter estimation and data assimilation by stochastic approaches (Monte Carlo, Ensemble Kalman filter) and by deterministic Bayesian approaches based on automatic differentiation for calculating exact (truncation error-free) derivatives of the forward code.
ParFlow
ParFlow is a numerical model that simulates the hydrologic cycle from the bedrock to the top of the plant canopy. The original codebase provides an embedded Domain-Specific Language (eDSL) for generic numerical implementations with support for supercomputer environments (distributed memory parallelism), on top of which the hydrologic numerical core has been built. In ParFlow, the newly developed optional GPU acceleration is built directly into the eDSL headers such that, ideally, parallelizing all loops in a single source file requires only a new header file.
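Returning to the Palabos entry above: its C++ API is not reproduced here, but the lattice Boltzmann idea it builds on can be sketched generically. The following minimal D2Q9 BGK collide-and-stream loop on a periodic grid is an illustrative toy in Python, not Palabos code:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (generic LBM, not the Palabos API)
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.6  # BGK relaxation time, sets the viscosity

def equilibrium(rho, u):
    cu = np.einsum('id,xyd->ixy', c, u)   # c_i . u at every node
    usq = np.einsum('xyd,xyd->xy', u, u)  # |u|^2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    rho = f.sum(axis=0)                                  # density moment
    u = np.einsum('id,ixy->xyd', c, f) / rho[..., None]  # velocity moment
    f += (equilibrium(rho, u) - f) / tau                 # BGK collision
    for i, (cx, cy) in enumerate(c):                     # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))  # uniform initial state
for _ in range(100):
    f = step(f)
```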
https://www.hpccoe.eu/category/software/
We respect your right to personal privacy. We do not rent, sell or share personal information about you to anyone. Period. We collect and use information gathered through our Web site solely for the purposes of communicating with you, if you so choose, and improving the content of our Web site. Your information is never shared with other organizations for commercial purposes. As is true of most Web sites, we gather certain information automatically and store it in log files. This information includes, but is not limited to, Internet Protocol (IP) addresses, browser type, Internet service provider (ISP), referring/exit pages, operating system, date/time stamp, and clickstream data. We use this information, which does not identify individual users, to analyze trends, to administer the site, to track users' movements around the site and to gather demographic information about our user base as a whole. We do not link this automatically collected data to personally identifiable information. The security of your personal information is important to us. We follow generally accepted industry standards to protect the personal information submitted to us, both during transmission and once we receive it. Legal Disclaimer: We reserve the right to disclose your personally identifiable information as required by law and when we believe that disclosure is necessary to protect our rights and/or to comply with a judicial proceeding, court order, or legal process served on our Web site.
https://artoftheprank.com/2007/03/29/privacy-policy/
Hazard: Space Weather
Geoscience Australia's Community Safety team supports Australia's ability to manage the impact of space weather and helps inform decisions about risk. We contribute to each stage of the emergency management cycle to help improve preparedness, response and recovery, with our focus on contributing towards community safety.
Our role
We maintain a network of geomagnetic observatories in the Australian and Australian Antarctic region which forms part of a global observatory network. The network monitors changes in the Earth's magnetic field due to geophysical processes beneath the Earth's surface, activity in the upper atmosphere, and the Earth-Sun space environment.
Our magnetic field and geomagnetically induced current modelling are used for navigation, resource exploration and to mitigate against potential hazards generated by magnetic storms. Our research into crustal conductivity and the associated geo-electric hazard helps the power distribution industry to better understand the impact of geomagnetic activity on electrical distribution networks.
To help understand what could be at threat, we provide exposure information about buildings, demographics, community infrastructure and agricultural commodities. We also provide around-the-clock access to data about people, property and infrastructure potentially exposed during an event. Our information aids targeted preparedness, response and recovery efforts by providing an understanding of the situation.
Our products, tools and data provide a better understanding of geomagnetic hazard vulnerabilities to plan, prepare and reduce exposure to natural hazards, improving preparedness now and into the future.
Emerging capabilities
We engage with different sectors to provide assessments and address new challenges as they emerge.
- We are expanding our geomagnetic monitoring capabilities to establish baseline measurements and to better predict geomagnetically induced currents, which will enable us to produce more reliable space weather hazard assessments across Australia.
- Our researchers are committed to growing our knowledge base to prepare for the challenges of the future.
- We have a strong attendance at conferences and events to maintain awareness of the challenges faced by the community safety sector and keep up to date with the latest science.
- Our open-source data is accessed by developers interested in solving the operational challenges of reducing vulnerability and exposure to hazards.
Top products
Geomagnetic Data
Access to plots of real-time data from magnetic observatories in the Australian observatory network.
Australian Geomagnetic Reference Field (AGRF) Values
Calculate geomagnetic field values in Australia.
Minute Values Request Form
Request minute values from one observatory in the Australian Geomagnetic Observatory Network for a specified time period. Outputs are available as a plot or as a data file.
Case study
Analysing historic geomagnetic data to identify space weather hazards
How historic data recorded by Geoscience Australia's geomagnetic observatories is influencing Australian electricity infrastructure decision making. Australian magnetic-field monitoring is providing an invaluable dataset for safeguarding Australian infrastructure and industry.
https://www.community-safety.ga.gov.au/hazards/space-weather
Data Science vs Statistics – What Makes Them Different?
Data Science vs Statistics is a topic which demands all our attention. Why? Because data is the biggest resource and source of revenue in the current business ecosystem. By collecting data, companies in every industry have demonstrated the ability to analyse it and generate business insights which help them improve their overall performance and growth.
So how is it done? The answer is Data Science. It is a branch of science that uses data to predict future business trends. It collects, analyses and visualises data with the help of advanced mathematics and statistics. In this article, we will talk about the interesting concepts of data science and statistics.
Data Science vs Statistics: How are They Different?
Before we dive deep into the differences between data science and statistics, let us understand what each of them is.
What is Data Science?
Data science is a detailed process; it encapsulates four different steps:
- Data Architecture
- Acquisition
- Analysis and
- Archival
In the whole process of deriving results, data science uses advanced techniques such as mathematics and statistics to model data for deep analysis. Due to this level of process personalisation, it is capable of effortlessly solving real-time issues with complete efficiency. Read more about how data science can solve real-time problems in our blog.
A Brief Look at the Definition of Statistics
Data science heavily relies on a few methods to derive results, and statistics is one of them. Statistics deals with the study of data and its applications. Using it, data scientists are able to gather data, analyse it and interpret results to predict future trends. By taking numerical and categorical data as inputs, it processes the information and interprets the data for scientists to assist them in decision making.
Statistics in data science is further divided into two types – descriptive statistics and inferential statistics. While both of these use similar statistical measures, the way they operate and the goals they are used to achieve are entirely different. Read our blog on Descriptive Statistics vs Inferential Statistics to learn more about them in detail.
Data Science vs Statistics: The Key Differences
Let's now understand the key differences between data science and statistics.
- How is it defined? Data science is an interdisciplinary study that collects inputs from various kinds of data, i.e. structured and unstructured, to analyse and predict future trends based on them. It is ideally used for understanding a real-time problem or scenario with the help of data. Statistics, on the other hand, is a branch of mathematics which provides a collection of methods to collect, analyse, interpret and represent data.
- What does it do? Data science focuses on solving data-related problems and supports decision-making processes. It also models big data for analysis to understand the trends, behaviours and patterns in data, which help in improving overall business performance. Statistics, by contrast, is used to design and represent real-time data in the form of tables, charts, etc., in order to understand the techniques of analysis and support the process of decision making.
- How is it done?
Data science applies scientific methods of problem-solving to the collected data and identifies the data requirements and techniques needed to obtain the desired results. Statistics, on the other hand, uses mathematical formulas and models to estimate values for various data attributes, and helps showcase data behaviours in a pictorial manner.
- What does it solve? Data science uses advanced scientific computing techniques like machine learning, advanced mathematics and statistics to derive results and trends from the data. It employs programming and an understanding of business models and trends. These data science skills are used to provide accurate predictions. Statistics, in contrast, is a process used within data science to measure and estimate a data attribute by applying statistical functions and algorithms to the datasets.
- What are the application areas? Data science is now used in a variety of industries for the purpose of market analysis and trend prediction. It is also employed in healthcare systems, fraud and intrusion detection systems, the manufacturing value chain and finance. Statistics is mainly used in commerce and trade. However, it is not limited to those areas and is also used in economics, psychology, biology and astronomy as a way to conduct detailed data visualisation operations and studies.
As we learn more about data science vs statistics, we see how the two are interlinked. Statistics is a key process involved in data science. It helps in the visualisation aspect of the data analysis process and also helps data scientists understand data trends and patterns (a short illustrative sketch follows at the end of this section). The mathematics involved in statistics gives a deeper insight into the structure of the data, which helps us identify and apply the right data science technique to derive the optimal results.
Data science, on the other hand, is a larger process that manages to isolate, collect, identify, analyse and predict a lot of information from the various datasets obtained. The introduction of big data analysis changes the approach of data science phenomenally. It raises the question of how data science skills are going to develop further and change the way data is perceived and used for decision-making.
Since you're here…
Curious about a career in data science? Experiment with our free data science learning path, or join our Data Science Bootcamp, where you'll only pay tuition after getting a job in the field. We're confident because our courses work – check out our student success stories to get inspired.
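To make the descriptive/inferential distinction discussed above concrete, a small illustrative Python sketch (the sample data is made up, and scipy is assumed to be available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=102, scale=15, size=40)  # hypothetical measurements

# Descriptive statistics: summarise the data at hand
print(f"mean = {sample.mean():.1f}, std = {sample.std(ddof=1):.1f}")

# Inferential statistics: test a claim about the population (H0: mu = 100)
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```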
https://www.springboard.com/blog/data-science/data-science-vs-statistics/
Real-time traffic information provides significant convenience to road users and city traffic managers. However, the causes of traffic congestion are rarely fully explored. This paper presents a model to harness heterogeneous data from different sources, explain the potential causes of traffic phenomena and predict future traffic flow. A prototype system has been built which works with real-time, heterogeneous data streams including basic traffic data, planned road works, dynamic events, inclement weather and unplanned street closures. Open data sources and social media are captured to diagnose road congestion and explore the underlying causes of traffic congestion. The test result illustrates the feasibility of using the proposed model in urban traffic control.
Keywords: Semantic web, OWL, Smart city
DOI: 10.12783/dtcse/iteee2019/28759
http://dpi-proceedings.com/index.php/dtcse/article/view/28759
Summary Report for: 19-2021.00 - Atmospheric and Space Scientists
Investigate atmospheric phenomena and interpret meteorological data, gathered by surface and air stations, satellites, and radar to prepare reports and forecasts for public and other uses. Includes weather analysts and forecasters whose functions require the detailed knowledge of meteorology.
Sample of reported job titles: Broadcast Meteorologist, Chief Meteorologist, Forecaster, General Forecaster, Hydrometeorological Technician, Meteorologist, Meteorologist-in-Charge, Science and Operations Officer (SOO), Warning Coordination Meteorologist, Weather Forecaster
Tasks
- Broadcast weather conditions, forecasts, or severe weather warnings to the public via television, radio, or the Internet or provide this information to the news media.
- Prepare weather reports or maps for analysis, distribution, or use in weather broadcasts, using computer graphics.
- Interpret data, reports, maps, photographs, or charts to predict long- or short-range weather conditions, using computer models and knowledge of climate theory, physics, and mathematics.
- Develop or use mathematical or computer models for weather forecasting.
- Gather data from sources such as surface or upper air stations, satellites, weather bureaus, or radar for use in meteorological reports or forecasts.
- Prepare forecasts or briefings to meet the needs of industry, business, government, or other groups.
- Measure wind, temperature, and humidity in the upper atmosphere, using weather balloons.
- Conduct numerical simulations of climate conditions to understand and predict global or regional weather patterns.
- Direct forecasting services at weather stations or at radio or television broadcasting facilities.
- Formulate predictions by interpreting environmental data, such as meteorological, atmospheric, oceanic, paleoclimate, climate, or related information.
- Prepare scientific atmospheric or climate reports, articles, or texts.
- Perform managerial duties, such as creating work schedules, creating or implementing staff training, matching staff expertise to situations, or analyzing performance of offices.
- Consult with other offices, agencies, professionals, or researchers regarding the use and interpretation of climatological information for weather predictions and warnings.
- Conduct meteorological research into the processes or determinants of atmospheric phenomena, weather, or climate.
- Analyze historical climate information, such as precipitation or temperature records, to help predict future weather or climate trends.
- Analyze climate data sets, using techniques such as geophysical fluid dynamics, data assimilation, or numerical modeling.
- Design or develop new equipment or methods for meteorological data collection, remote sensing, or related applications.
- Apply meteorological knowledge to issues such as global warming, pollution control, or ozone depletion.
- Research the impact of industrial projects or pollution on climate, air quality, or weather phenomena.
- Teach college-level courses on topics such as atmospheric and space science, meteorology, or global climate change.
Technology Skills
- Analytical or scientific software — IBM SPSS Statistics; PC Weather Products HURRTRAK; Systat Software SigmaStat; WSI TrueView Professional (see all 33 examples)
- Data base user interface and query software — Microsoft Access
- Desktop publishing software — QuarkXPress
- Development environment software — Formula translation/translator FORTRAN
- Electronic mail software — Microsoft Outlook
- Graphics or photo imaging software — AccuWeather Galileo; Adobe Systems Adobe Photoshop; Advanced Visual Systems AVS/Express; Microsoft Paint (see all 5 examples)
- Map creation software — ESRI ArcInfo; ESRI ArcView; ITT Visual Information Solutions ENVI
- Object or component oriented development software — C++; Practical extraction and reporting language Perl; Python; R
- Operating system software — Cisco IOS; Linux
- Presentation software — Microsoft PowerPoint
- Spreadsheet software — Microsoft Excel
- Word processing software — Microsoft Word
Hot Technology — a technology requirement frequently included in employer job postings.
Tools Used
- Air samplers or collectors — Air quality samplers
- Anemometers — Analog anemometers; Digital anemometers
- Barometers — Mercury barometers
- Desktop computers
- Hygrometers — Whirling hygrometers
- Light trucks or sport utility vehicles — Storm chase vehicles
- Lightmeters — Light meters
- Meteorology instrument accessories — Weather balloons
- Notebook computers — Laptop computers
- Personal computers
- Psychrometers
- Radar-based surveillance systems — Doppler radar equipment; Next Generation Weather Radar NEXRAD
- Radiosonde apparatus — Radiosonde launchers
- Rainfall recorders — Rain gauges; Tipping bucket rain gauges
- Solar radiation surface observing apparatus — Solarimeters
- Surface thermometers — Surface temperature probes
- Tablet computers — Graphic tablets
- Temperature or humidity surface observing apparatus — Air temperature thermometers; Relative humidity gauges; Temperature and humidity data loggers
- Temperature transmitters — Soil temperature probes
- Two-way radios
- Weather stations — Weather observation stations
Knowledge
- Physics — Knowledge and prediction of physical principles, laws, their interrelationships, and applications to understanding fluid, material, and atmospheric dynamics, and mechanical, electrical, atomic and sub-atomic structures and processes.
- Mathematics — Knowledge of arithmetic, algebra, geometry, calculus, statistics, and their applications.
- Geography — Knowledge of principles and methods for describing the features of land, sea, and air masses, including their physical characteristics, locations, interrelationships, and distribution of plant, animal, and human life.
- Computers and Electronics — Knowledge of circuit boards, processors, chips, electronic equipment, and computer hardware and software, including applications and programming.
- English Language — Knowledge of the structure and content of the English language including the meaning and spelling of words, rules of composition, and grammar.
- Communications and Media — Knowledge of media production, communication, and dissemination techniques and methods. This includes alternative ways to inform and entertain via written, oral, and visual media.
- Customer and Personal Service — Knowledge of principles and processes for providing customer and personal services. This includes customer needs assessment, meeting quality standards for services, and evaluation of customer satisfaction.
- Education and Training — Knowledge of principles and methods for curriculum and training design, teaching and instruction for individuals and groups, and the measurement of training effects.
- Public Safety and Security — Knowledge of relevant equipment, policies, procedures, and strategies to promote effective local, state, or national security operations for the protection of people, data, property, and institutions.

Skills
- Reading Comprehension — Understanding written sentences and paragraphs in work related documents.
- Science — Using scientific rules and methods to solve problems.
- Active Listening — Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
- Critical Thinking — Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
- Speaking — Talking to others to convey information effectively.
- Active Learning — Understanding the implications of new information for both current and future problem-solving and decision-making.
- Complex Problem Solving — Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
- Judgment and Decision Making — Considering the relative costs and benefits of potential actions to choose the most appropriate one.
- Writing — Communicating effectively in writing as appropriate for the needs of the audience.
- Instructing — Teaching others how to do something.
- Monitoring — Monitoring/Assessing performance of yourself, other individuals, or organizations to make improvements or take corrective action.
- Systems Analysis — Determining how a system should work and how changes in conditions, operations, and the environment will affect outcomes.
- Learning Strategies — Selecting and using training/instructional methods and procedures appropriate for the situation when learning or teaching new things.
- Social Perceptiveness — Being aware of others' reactions and understanding why they react as they do.
- Time Management — Managing one's own time and the time of others.
- Service Orientation — Actively looking for ways to help people.
- Coordination — Adjusting actions in relation to others' actions.
- Persuasion — Persuading others to change their minds or behavior.
- Systems Evaluation — Identifying measures or indicators of system performance and the actions needed to improve or correct performance, relative to the goals of the system.

Abilities
- Oral Expression — The ability to communicate information and ideas in speaking so others will understand.
- Written Comprehension — The ability to read and understand information and ideas presented in writing.
- Written Expression — The ability to communicate information and ideas in writing so others will understand.
- Oral Comprehension — The ability to listen to and understand information and ideas presented through spoken words and sentences.
- Speech Clarity — The ability to speak clearly so others can understand you.
- Deductive Reasoning — The ability to apply general rules to specific problems to produce answers that make sense.
- Inductive Reasoning — The ability to combine pieces of information to form general rules or conclusions (includes finding a relationship among seemingly unrelated events).
- Problem Sensitivity — The ability to tell when something is wrong or is likely to go wrong.
It does not involve solving the problem, only recognizing there is a problem.
- Information Ordering — The ability to arrange things or actions in a certain order or pattern according to a specific rule or set of rules (e.g., patterns of numbers, letters, words, pictures, mathematical operations).
- Near Vision — The ability to see details at close range (within a few feet of the observer).
- Speech Recognition — The ability to identify and understand the speech of another person.
- Flexibility of Closure — The ability to identify or detect a known pattern (a figure, object, word, or sound) that is hidden in other distracting material.
- Mathematical Reasoning — The ability to choose the right mathematical methods or formulas to solve a problem.
- Category Flexibility — The ability to generate or use different sets of rules for combining or grouping things in different ways.
- Fluency of Ideas — The ability to come up with a number of ideas about a topic (the number of ideas is important, not their quality, correctness, or creativity).
- Far Vision — The ability to see details at a distance.
- Number Facility — The ability to add, subtract, multiply, or divide quickly and correctly.
- Originality — The ability to come up with unusual or clever ideas about a given topic or situation, or to develop creative ways to solve a problem.
- Selective Attention — The ability to concentrate on a task over a period of time without being distracted.
- Visual Color Discrimination — The ability to match or detect differences between colors, including shades of color and brightness.

Work Activities
- Getting Information — Observing, receiving, and otherwise obtaining information from all relevant sources.
- Interacting With Computers — Using computers and computer systems (including hardware and software) to program, write software, set up functions, enter data, or process information.
- Updating and Using Relevant Knowledge — Keeping up-to-date technically and applying new knowledge to your job.
- Interpreting the Meaning of Information for Others — Translating or explaining what information means and how it can be used.
- Analyzing Data or Information — Identifying the underlying principles, reasons, or facts of information by breaking down information or data into separate parts.
- Making Decisions and Solving Problems — Analyzing information and evaluating results to choose the best solution and solve problems.
- Processing Information — Compiling, coding, categorizing, calculating, tabulating, auditing, or verifying information or data.
- Communicating with Persons Outside Organization — Communicating with people outside the organization, representing the organization to customers, the public, government, and other external sources. This information can be exchanged in person, in writing, or by telephone or e-mail.
- Communicating with Supervisors, Peers, or Subordinates — Providing information to supervisors, co-workers, and subordinates by telephone, in written form, e-mail, or in person.
- Identifying Objects, Actions, and Events — Identifying information by categorizing, estimating, recognizing differences or similarities, and detecting changes in circumstances or events.
- Organizing, Planning, and Prioritizing Work — Developing specific goals and plans to prioritize, organize, and accomplish your work.
- Thinking Creatively — Developing, designing, or creating new applications, ideas, relationships, systems, or products, including artistic contributions.
- Documenting/Recording Information — Entering, transcribing, recording, storing, or maintaining information in written or electronic/magnetic form.
- Establishing and Maintaining Interpersonal Relationships — Developing constructive and cooperative working relationships with others, and maintaining them over time.
- Performing for or Working Directly with the Public — Performing for people or dealing directly with the public. This includes serving customers in restaurants and stores, and receiving clients or guests.
- Training and Teaching Others — Identifying the educational needs of others, developing formal educational or training programs or classes, and teaching or instructing others.
- Monitor Processes, Materials, or Surroundings — Monitoring and reviewing information from materials, events, or the environment, to detect or assess problems.
- Estimating the Quantifiable Characteristics of Products, Events, or Information — Estimating sizes, distances, and quantities; or determining time, costs, resources, or materials needed to perform a work activity.
- Coordinating the Work and Activities of Others — Getting members of a group to work together to accomplish tasks.
- Coaching and Developing Others — Identifying the developmental needs of others and coaching, mentoring, or otherwise helping others to improve their knowledge or skills.
- Provide Consultation and Advice to Others — Providing guidance and expert advice to management or other groups on technical, systems-, or process-related topics.

Detailed Work Activities
- Provide technical information or assistance to public.
- Interpret research or operational data.
- Collect environmental data or samples.
- Develop theories or models of physical phenomena.
- Prepare scientific or technical reports or presentations.
- Measure environmental characteristics.
- Develop mathematical models of environmental conditions.
- Direct technical activities or operations.
- Prepare research or technical reports on environmental issues.
- Conduct climatological research.
- Collaborate on research activities with scientists or technical specialists.
- Develop environmental research methods.
- Instruct college students in physical or life sciences.
- Apply knowledge or research findings to address environmental problems.
- Research environmental impact of industrial or development activities.
- Create images or other visual displays.
Work Context
- Electronic Mail — 93% responded “Every day.”
- Time Pressure — 82% responded “Every day.”
- Face-to-Face Discussions — 79% responded “Every day.”
- Telephone — 74% responded “Every day.”
- Freedom to Make Decisions — 59% responded “A lot of freedom.”
- Indoors, Environmentally Controlled — 89% responded “Every day.”
- Importance of Being Exact or Accurate — 57% responded “Extremely important.”
- Spend Time Sitting — 50% responded “Continually or almost continually.”
- Deal With External Customers — 57% responded “Extremely important.”
- Work With Work Group or Team — 50% responded “Extremely important.”
- Impact of Decisions on Co-workers or Company Results — 52% responded “Very important results.”
- Structured versus Unstructured Work — 61% responded “Some freedom.”
- Frequency of Decision Making — 54% responded “Every day.”
- Contact With Others — 46% responded “Constant contact with others.”
- Level of Competition — 36% responded “Highly competitive.”
- Duration of Typical Work Week — 71% responded “40 hours.”
- Coordinate or Lead Others — 44% responded “Important.”
- Public Speaking — 36% responded “Once a month or more but not every week.”
- Importance of Repeating Same Tasks — 29% responded “Very important.”
- Responsibility for Outcomes and Results — 54% responded “Moderate responsibility.”
- Letters and Memos — 50% responded “Once a month or more but not every week.”
- Physical Proximity — 64% responded “Slightly close (e.g., shared office).”

Job Zone
|Title||Job Zone Four: Considerable Preparation Needed|
|Education||Most of these occupations require a four-year bachelor's degree, but some do not.|
|Related Experience||A considerable amount of work-related skill, knowledge, or experience is needed for these occupations. For example, an accountant must complete four years of college and work for several years in accounting to be considered qualified.|
|Job Training||Employees in these occupations usually need several years of work-related experience, on-the-job training, and/or vocational training.|
|Job Zone Examples||Many of these occupations involve coordinating, supervising, managing, or training others. Examples include accountants, sales managers, database administrators, graphic designers, chemists, art directors, and cost estimators.|
|SVP Range||(7.0 to < 8.0)|

Interests
Interest code: IR
- Investigative — Investigative occupations frequently involve working with ideas, and require an extensive amount of thinking. These occupations can involve searching for facts and figuring out problems mentally.
- Realistic — Realistic occupations frequently involve work activities that include practical, hands-on problems and solutions. They often deal with plants, animals, and real-world materials like wood, tools, and machinery. Many of the occupations require working outside, and do not involve a lot of paperwork or working closely with others.

Work Styles
- Dependability — Job requires being reliable, responsible, and dependable, and fulfilling obligations.
- Analytical Thinking — Job requires analyzing information and using logic to address work-related issues and problems.
- Attention to Detail — Job requires being careful about detail and thorough in completing work tasks.
- Stress Tolerance — Job requires accepting criticism and dealing calmly and effectively with high stress situations.
- Cooperation — Job requires being pleasant with others on the job and displaying a good-natured, cooperative attitude.
- Adaptability/Flexibility — Job requires being open to change (positive or negative) and to considerable variety in the workplace.
- Initiative — Job requires a willingness to take on responsibilities and challenges.
- Persistence — Job requires persistence in the face of obstacles.
- Achievement/Effort — Job requires establishing and maintaining personally challenging achievement goals and exerting effort toward mastering tasks.
- Integrity — Job requires being honest and ethical.
- Self Control — Job requires maintaining composure, keeping emotions in check, controlling anger, and avoiding aggressive behavior, even in very difficult situations.
- Independence — Job requires developing one's own ways of doing things, guiding oneself with little or no supervision, and depending on oneself to get things done.
- Leadership — Job requires a willingness to lead, take charge, and offer opinions and direction.
- Innovation — Job requires creativity and alternative thinking to develop new ideas for and answers to work-related problems.
- Concern for Others — Job requires being sensitive to others' needs and feelings and being understanding and helpful on the job.

Work Values
- Achievement — Occupations that satisfy this work value are results oriented and allow employees to use their strongest abilities, giving them a feeling of accomplishment. Corresponding needs are Ability Utilization and Achievement.
- Independence — Occupations that satisfy this work value allow employees to work on their own and make decisions. Corresponding needs are Creativity, Responsibility and Autonomy.
- Relationships — Occupations that satisfy this work value allow employees to provide service to others and work with co-workers in a friendly non-competitive environment. Corresponding needs are Co-workers, Moral Values and Social Service.

Wages & Employment Trends
|Median wages (2018)||$45.25 hourly, $94,110 annual|
|Employment (2016)||10,000 employees|
|Projected growth (2016-2026)||Faster than average (10% to 14%)|
|Projected job openings (2016-2026)||900|

Source: Bureau of Labor Statistics 2018 wage data and 2016-2026 employment projections. "Projected growth" represents the estimated change in total employment over the projections period (2016-2026). "Projected job openings" represent openings due to growth and replacement.
https://www.onetonline.org/link/summary/19-2021.00
Digital twin technology offers many industries exciting possibilities: to know what is happening in real time, to plan strategies and to predict problems. Among them are data centers looking for new ways to improve their operations and reduce their energy consumption and carbon footprint, but that are not yet ready to adopt other next-generation technologies, such as artificial intelligence. In recent years, the data center industry has been improving its facilities' monitoring systems and implementing new technologies to optimize operations and reduce consumption. Many operators are considering using artificial intelligence to manage energy consumption and carbon emissions, but there are numerous challenges related to data quality and availability. In many cases, the information that can be obtained is limited to homogeneous environments and does not represent the current complexity and diversity in the facilities. Therefore, the most important challenge for most data centers is improving the monitoring of their operations at multiple levels. This problem could be solved with new technologies such as digital twins. These are fed with all the data related to operations and allow the creation of a highly detailed virtual representation of the physical environment that can serve different uses. On the one hand, there is real-time or near real-time monitoring of what is happening in the data center. On the other hand, they allow simulations to be carried out to discover what would happen if problems of a different nature occur or new systems are implemented. This proposal is advancing thanks to the work of certain specialized companies, such as the Singapore firm Red Dot Analytics (RDA), which has created an artificial intelligence-driven digital twin platform specifically focused on data centers. Its creators claim that it allows operators to simulate their operations in great detail, helping to manage their carbon footprint and reduce energy consumption. This company was born at the Nanyang Technological University (NTU). Since its creation, it has focused on the simulation of actions aimed at improving operational and energy efficiency and helping operators better understand the costs and risks of each step in this path. Additionally, this solution can be used to implement machine learning models that would help improve data center operations at different levels. Speaking to Computer Weekly, RDA chief scientist and NTU professor Wen Yonggang explains that "different people have different interpretations of what a digital twin should be, and that has been a big challenge when we work with partners from the industry, customers and stakeholders." He explains that most digital twins are "just" virtual representations of the physical infrastructure, but he sees this as only the first layer of the platform. He highlights that many other capabilities can be integrated into it, such as the overlay of operational data to carry out statistical analysis and diagnostics and achieve predictive and prescriptive capabilities that drive decision-making for data center operators. He claims his digital twin can help operators reduce facility power consumption by 40% without having to modify the hardware. Another approach the company has taken is to help operators better understand their carbon emissions and identify ways to reduce them. They are also working with other customers on asset management to minimize downtime in data centers.
In any case, the objective of this digital twin platform is to serve as decision-making support that converts industry best practices into a more scientific way of carrying out operations. The challenge many data centers face is that they do not have enough sources of information, since they rely on obsolete data collection systems and on multiple sources that they are unable to unify. But Wen explains that digital twins offer the ability to start with a small amount of data to build the foundation of the virtual representation, and then feed in more data to scale and refine the model.
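To make the "start small, then refine" idea concrete, here is a minimal sketch of a digital twin that ingests telemetry and answers a simple what-if question. It is purely illustrative: the class, fields and numbers are invented, and a real platform such as RDA's models far more detail.

```python
# Minimal digital-twin sketch: a virtual representation fed with
# operational telemetry that refines a simple model as data arrives.
# All names and figures are hypothetical.
from dataclasses import dataclass, field
from statistics import fmean

@dataclass
class DataCenterTwin:
    it_load_kw: list = field(default_factory=list)    # observed IT load
    facility_kw: list = field(default_factory=list)   # observed total draw

    def ingest(self, it_load: float, facility: float) -> None:
        """Feed one telemetry sample into the twin."""
        self.it_load_kw.append(it_load)
        self.facility_kw.append(facility)

    def pue(self) -> float:
        """Power Usage Effectiveness estimated from all samples so far."""
        return fmean(self.facility_kw) / fmean(self.it_load_kw)

    def simulate(self, it_load: float) -> float:
        """What-if: facility draw at a hypothetical IT load, assuming the
        historical PUE holds (a deliberately crude first-layer model)."""
        return it_load * self.pue()

twin = DataCenterTwin()
for it_kw, fac_kw in [(800, 1280), (850, 1330), (900, 1400)]:
    twin.ingest(it_kw, fac_kw)
print(f"PUE so far: {twin.pue():.2f}")
print(f"Predicted draw at 1 MW IT load: {twin.simulate(1000):.0f} kW")
```

Each new telemetry sample refines the estimate, mirroring the scaling path Wen describes: a coarse virtual representation first, then progressively richer layers of operational data and analytics on top.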
https://www.webupdatesdaily.com/digital-twins-to-improve-data-center-operations/amp/
If humanity is to feed itself and live in safe and sustainable environments, it is imperative that we have the tools and information to make smarter and better-informed decisions. We need "smart communities": smart cities, smart rural areas, and smart nations. The Internet of Things (IoT), in which millions of sensors and computers are to be connected together and communicating with each other, will be a key platform for making this possible. CSL researchers are working to make smart communities a reality by enhancing our ability to network physical objects embedded with electronic devices, to collect and exchange data, to create cyber-physical systems, and to deliver better decisions through data science technologies.

Changing Health IT

Whether at the site of a natural disaster, on the battlefield, or at a multi-vehicle accident site, getting reliable, accurate information to and from first responders can save the lives of our families, neighbors, defenders, and other fellow citizens. One day, first responders in an earthquake zone where networks have been disrupted may utilize CSL-developed algorithms to deploy drone-based network repeaters to share patient data in real time with remotely located doctors. CSL researchers are pushing the boundaries of distributed network and communication systems to ensure trustworthy augmented data communications under various scenarios.

Transportation and smart mobility

Sensors, computing systems, and trustworthy networks will be central to future solutions to a wide range of societal challenges, whether we’re seeking better ways to transport agricultural products from our farms, improving traffic monitoring and management in clogged urban streets, or working to enable an age of self-charging autonomous and semi-autonomous electrical vehicles. For example, cell phones and other devices carried by individuals and in vehicles can be used to monitor traffic movements and ensure safety in extreme situations where masses of people might otherwise be driven to panic. CSL researchers and their partners have been exploring fundamental properties of such future technologies.

In both rural and urban areas, getting more accurate and complete information before events occur (such as flooding) could reduce disruption to the lives of the people affected. Whether it be in our most crowded cities or in rural areas of the world’s poorest nations, treating sewage to produce safe drinking water, or at least industrially usable water, can make the difference between a vibrant community and disaster. Developing better sensors and control systems could also translate into significant energy savings, as providing drinkable water to dense urban environments requires vast amounts of energy. CSL researchers and their partners are making strides to solve related information science and technology challenges.

The modern world depends on the availability of vast amounts of safe, reliable energy. To meet future needs, we must more efficiently use the energy to which we already have access while also bringing new sources of power on-line. To succeed, we have to secure the “supervisory control and data acquisition” (SCADA) systems that control oil and gas facilities, nuclear plants, and other elements of our electric power distribution system. Developing trustworthy means to gather and process data from smart meters will allow us to automate energy conservation measures.
We will broaden our options as we find new ways to reliably interface highly unreliable energy sources, such as wind and solar, with the power grid on a scale far beyond what is now in place. CSL researchers are pursuing different dimensions of these challenges.
https://csl.illinois.edu/research/impact-areas/internet-things
The WC-130 Hercules is a modified version of the C-130 transport configured with computerized weather instrumentation for penetration of severe storms to obtain data on storm movements, dimensions and intensity. The WC-130B became operational in 1959, the WC-130E in 1962, the WC-130H in 1964, followed by the WC-130J in 1999.

The WC-130 provides vital tropical cyclone forecasting information. It penetrates tropical cyclones and hurricanes at altitudes ranging from 500 to 10,000 feet (152-3,048m) above the ocean surface to collect meteorological data in the vortex, or eye, of the storm. The aircraft normally flies a radius of about 100 miles (161km) from the vortex to collect detailed data about the structure of the tropical cyclone. The information collected makes possible advance warning of hurricanes and typhoons, and increases the accuracy of hurricane predictions and warnings by 30 percent. Collected data are relayed directly to the National Hurricane Center in Miami, Florida.

The WC-130 is capable of staying aloft almost 18 hours at an optimum cruise speed of more than 300 miles per hour. An average weather reconnaissance mission might last 11 hours and cover almost 3,500 miles (5,633km). The crew collects and reports weather data every 30 seconds. From the flight deck, the aerial reconnaissance weather officer operates the computerized weather reconnaissance equipment to measure outside air temperature, dew point (humidity), altitude of the aircraft and barometric pressure at that height. The weather officer also evaluates other meteorological conditions such as turbulence, icing, visibility, cloud types and amounts, and ocean surface winds.

Other special equipment on board the WC-130 includes the dropsonde. This is a cylindrically shaped instrument about 16 inches (40.6cm) long and 3.25 inches (8.3cm) in diameter. The dropsonde is equipped with a high frequency radio and other sensing devices and is released from the rear of the aircraft about every 400 miles (644km) and on each pass through the eye. As the instrument descends to the ocean surface, it measures and relays to the aircraft a vertical atmospheric profile of temperature, humidity, barometric pressure and wind data. The dropsonde is slowed and stabilized by a small parachute. The Dropsonde System Operator receives, analyzes and encodes the data for transmission by satellite.

The WC-130 is flown exclusively from Keesler Air Force Base, MS, by Air Force Reserve organizations known as Hurricane Hunters. The hurricane reconnaissance area includes the Atlantic Ocean, Caribbean Sea, Gulf of Mexico and central Pacific Ocean areas.

On 12 October 1999, the U.S. Air Force took delivery of its first WC-130J aircraft. Nine others are scheduled for delivery by late 2000. In September 1998, the C-130J Development System Office (DSO) at Wright-Patterson AFB, OH, signed a contract with Lockheed Martin Aeronautical Systems, Marietta, GA, to modify six C-130Js to the “W”, or weather, configuration. This involved installing and integrating special avionics and weather sensors, as well as making structural modifications. The DSO later exercised contract options to modify an additional four C-130J aircraft. The WC-130Js will replace the existing fleet of ten WC-130H-model aircraft. The “J-models” are based on the familiar C-130 platform that the Air Force has flown for more than 40 years, but with many improvements, including new engines and avionics, as well as the addition of two mission computers and two head-up displays.
Sensors mounted on the outside of WC-130Js provide real-time temperature, humidity, barometric pressure, radar-measured altitude, and wind speed and direction. These are used to calculate a complete weather observation every 30 seconds. These aircraft also deploy dropsondes, instruments ejected from the aircraft and lowered by parachute through the storm to the sea. During descent, they gather real-time weather data and relay it back to the aircraft. This information is transmitted by satellite directly to the National Hurricane Center for input into the national weather data networks. Forecasters use the data to better predict the path of a storm or hurricane.
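The 30-second observation cycle described above maps naturally onto a simple record type. The sketch below is illustrative only: the field names are my own invention, not the actual aerial-reconnaissance message format used by the Hurricane Hunters.

```python
# Sketch of one 30-second flight-level weather observation, with fields
# matching the quantities listed above. Field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class FlightLevelObservation:
    time_utc: datetime
    lat_deg: float
    lon_deg: float
    radar_altitude_m: float      # radar-measured height above the sea
    air_temperature_c: float
    dew_point_c: float           # stands in for humidity
    pressure_hpa: float          # barometric pressure at flight level
    wind_dir_deg: float
    wind_speed_kt: float

obs = FlightLevelObservation(
    time_utc=datetime(1999, 10, 12, 18, 30, tzinfo=timezone.utc),
    lat_deg=25.1, lon_deg=-77.4,
    radar_altitude_m=3000.0,
    air_temperature_c=12.5, dew_point_c=11.0,
    pressure_hpa=700.2,
    wind_dir_deg=90.0, wind_speed_kt=85.0,
)
```

A mission's stream of such records, plus the dropsonde profiles, is what gets relayed by satellite to the National Hurricane Center.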
https://www.theaviationzone.com/lockheed-wc-130/
Billions of birds migrate across the globe each year, and, in our modern environment, many collide with human-made structures and vehicles. The ability to predict peak timing and locations of migratory events could greatly improve our ability to reduce such collisions. Van Doren and Horton used radar and atmospheric-condition data to predict the peaks and flows of migrating birds across North America. Their models predicted, with high accuracy, patterns of bird migration at altitudes between 0 and 3000 meters and as far as 7 days in advance, a time span that will allow for planning and preparation around these important events.

Science, this issue p. 1115

Abstract

Billions of animals cross the globe each year during seasonal migrations, but efforts to monitor them are hampered by the unpredictability of their movements. We developed a bird migration forecast system at a continental scale by leveraging 23 years of spring observations to identify associations between atmospheric conditions and bird migration intensity. Our models explained up to 81% of variation in migration intensity across the United States at altitudes of 0 to 3000 meters, and performance remained high in forecasting events 1 to 7 days in advance (62 to 76% of variation was explained). Avian migratory movements across the United States likely exceed 500 million individuals per night during peak passage. Bird migration forecasts will reduce collisions with buildings, airplanes, and wind turbines; inform a variety of monitoring efforts; and engage the public.

Billions of birds migrate between distant breeding and wintering sites each year, through landscapes and airspaces increasingly transformed by humans. Hundreds of millions die annually from collisions with buildings, automobiles, and energy installations (1), and light pollution exacerbates these effects (2). Pulses of intense migration interspersed with periods of low activity characterize birds’ movements aloft (3, 4), and efforts to reduce negative effects on migrants (e.g., turning off lights and wind turbines at strategic times) (5) would be most effective if they targeted the few nights with intense migratory pulses. However, bird movements are challenging to predict days or even hours in advance. For decades, scientists have studied the drivers of avian migration. Winds, temperature, barometric pressure, and precipitation play key roles (6–8). However, such general relationships have not produced migration forecasts accurate at both broad continental extents and fine spatial and temporal resolutions (9, 10). Local topography, regional geography, and time of season modify relationships between conditions and migration intensity, and hundreds of species with diverse behaviors frequently pass over a single location during migration. The complex interactions between environmental conditions and animal behavior make predicting bird migration at the assemblage level a challenge. One major difficulty has been amassing behavioral data that appropriately characterize bird migration at a continental scale. Radar, used globally as a tool to study animal migration (3, 11–14), offers a realistic solution to monitor hundreds of species (15). In the continental United States, the Next Generation Weather Radar (NEXRAD) network comprises 143 weather surveillance radars (16) and an archive with more than two decades of data. Although designed for meteorological applications, these radars measure energy reflected by a diversity of aerial targets, including birds.
Only recently have advances in computational methods [e.g., (17)] facilitated the use of the entire radar archive for longitudinal studies of bird migration at continental scales. Using the NEXRAD archive, we quantified 23 years (1995 to 2017) of spring nocturnal bird migration across the United States (Fig. 1). We developed a classifier to eliminate radar scans contaminated with precipitation. We then trained gradient-boosted trees (18) to predict bird migration intensity from atmospheric conditions reported by the North American Regional Reanalysis (19). Our model used 12 predictors, including winds, air temperature, barometric pressure, and relative humidity (fig. S1), which we used to predict a cube-root-transformed index of migration intensity (expressed in square centimeters per cubic kilometer). The cube-root transform reduces skewness but is less extreme than a log transformation, which would have given considerable weight to biologically unimportant differences between small values. We measured migration intensity in 100-m altitude bins up to 3 km to model the three-dimensional distribution of migrating birds over the continent. To express migration intensities in numbers of birds, we assumed a radar cross section per bird of 11 cm2. The radar cross section is a measure of reflected energy; this value is typical of medium-sized songbirds and representative of migratory species (12). Our migration forecast model explained 78.9% of variation in migration intensity over the United States (Figs. 2 and 3A). Performance was consistent across years (mean yearly coefficient of determination R2 = 0.781 ± 0.010 SD). We quantified the importance of each predictor by calculating gain, a measure of how much predictions improve by adding a given variable. Air temperature was most important, with an average gain more than three times that of the second-ranked predictor, date (fig. S2). High temperatures coincided with large migration pulses (Fig. 4 and figs. S3 and S4). As a predictor of bird migration, temperature likely plays a dual role as an index of spring phenology and a short-term signal for movement, as favorable southerly winds usually accompany warmer air masses. Other important predictors included altitude, longitude, surface pressure, latitude, and wind (fig. S2). The model provides informative predictions several days in advance. We evaluated its utility as a true forecast system with archived weather forecasts from the North American Mesoscale Forecast System (NAM) and Global Forecast System (GFS). NAM has higher spatial resolution but is a shorter-range forecast (12-km grid, 3-day range) than GFS (0.5° grid, >7-day range). We made predictions up to 3 days in advance with NAM and up to 7 days in advance with GFS, expecting performance to degrade with time because of the decreasing accuracy of longer-range weather forecasts. Predictions on the basis of 24-hour NAM forecasts explained 75% of variation in migration intensity, 3-day NAM forecasts explained 71%, and 7-day GFS forecasts explained 62% (fig. S5). The model captures patterns of bird migration across the United States with high spatial accuracy, particularly in the central and eastern regions (fig. S6). We evaluated spatial accuracy over areas without radar coverage by iteratively removing the data from each radar station, retraining the model on the remaining data, and testing performance on the withheld station. Median R2 for withheld stations was 0.72, and R2 was 0.60 or higher for 75% of stations (fig. S7). 
Spatial variation in performance likely stems from local influences on migratory behavior (e.g., topography), which our model did not explicitly incorporate. Previous research suggests that migration behavior and weather conditions in the days immediately preceding a migration event can predict its intensity [e.g., (10)]. We found that including atmospheric data from the preceding night and 24-hour changes in conditions did improve performance, but not markedly. A model that included atmospheric conditions 24 hours before an event explained 80.1% of variation in migration intensity, and further including observed migration intensity from the previous night increased R2 to 81.3%. Finally, we used model predictions to estimate the total number of birds actively migrating each night across the United States. Summing predictions countrywide, we infer that nightly movements frequently exceed 200 million birds (Fig. 3B). Peak passage occurred in the first half of May, when the median predicted movement size was 422 million birds per night. Although our model tended to underpredict the largest observed movements (Fig. 3A), a conservative forecast system decreases the risk of taking unneeded mitigation action. More accurately predicting the largest migration events may require explicit modeling of migrant flow across the continent, including responses to topographical features (20). Migration forecasts will further ecological research while aiding monitoring and mortality mitigation efforts. Accurate predictions can inform decisions to temporarily shut down lights and wind turbines, halt gas flares, choose airplane flight paths, and take other actions to prevent human and avian mortality (10, 21). Global health workers monitoring avian-borne diseases can use migration forecasts to anticipate bird movements. Further integration of large citizen science datasets with radar observations will provide the means to study species-specific patterns of behavior at a large scale (22), and studying local variation in migratory behavior will lead to more accurate models of atmospheric bird distributions (23). Migration forecast systems have great potential to aid environmental monitoring and conservation efforts; fully realizing this potential will require the cooperation not just of scientists but also of governments and agencies that produce and disseminate radar products (21).

Supplementary Materials (Materials and Methods; figs. S1 to S10): www.sciencemag.org/content/361/6407/1115/suppl/DC1
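For readers who want a concrete sense of the modeling recipe described above (gradient-boosted trees on atmospheric predictors, a cube-root-transformed intensity target, and an assumed 11 cm² radar cross section per bird), here is a minimal sketch. It is not the authors' pipeline: it uses scikit-learn's GradientBoostingRegressor and synthetic placeholder data in place of the NEXRAD and reanalysis inputs.

```python
# Sketch of the paper's modeling recipe (not the authors' code):
# gradient-boosted trees predicting cube-root-transformed migration
# intensity (cm^2 per km^3) from atmospheric predictors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
# Placeholder predictors standing in for the paper's 12 (winds, air
# temperature, pressure, humidity, date, altitude, latitude, longitude...).
X = rng.normal(size=(n, 12))
intensity = np.exp(1.5 * X[:, 0] + 0.5 * X[:, 1]) * 50.0  # synthetic target

model = GradientBoostingRegressor(n_estimators=300, max_depth=4)
model.fit(X, np.cbrt(intensity))  # cube-root transform tames the skew

pred_intensity = model.predict(X) ** 3        # back-transform
birds_per_km3 = pred_intensity / 11.0         # assume 11 cm^2 per bird
print(f"median predicted density: {np.median(birds_per_km3):.1f} birds/km^3")
```

The cube-root line is the key detail: it reduces skew in the target, as the paper notes, without the heavy compression of a log transform.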
https://science.sciencemag.org/content/361/6407/1115
Using short-pulse laser technology, we are attempting to venture deeper and deeper into the microscopic world. As far back as the early 1990s, when I was working on my doctorate at TU Wien, I was particularly fascinated by the idea of venturing into ever smaller dimensions of space and time using the extremely short pulses of light that new lasers were making possible in those days – even before I knew what practical use the resulting knowledge might have. The field was then and still is largely unresearched, and it has enormous potential. Back in 1999, Ahmed H. Zewail was awarded the Nobel Prize in Chemistry for his pioneering work on femtosecond spectroscopy, and the 2018 Nobel Prize in Physics honours the contributions of Donna Strickland and Gérard Mourou for the generation of ultrashort, high-intensity laser pulses. We are building on this work. Today, we are interested in gaining a better understanding of microscopic processes involving electrons, atoms and molecules. We also want to find out how these processes affect macroscopic phenomena so that we can then put this knowledge into practice. In the early 2000s, as part of this work, we generated and measured the world’s shortest pulses, in the attosecond region. This allowed us to make real-time observations of electron movements on the atomic scale for the first time.

Possible applications in electronics for cancer diagnosis

These movements play an important role in many macroscopic processes: they can cause molecules to disintegrate or form new bonds and can therefore impart new functions to biologically relevant molecules or cause them to lose their biological effect. As a result, these processes play a fundamental role in determining how living organisms function and how diseases develop. If we know more about them, we might be able to develop new medications to combat serious diseases such as cancer. Electrons are also the central protagonists of modern electronics. Accordingly, if we want to continue improving the efficiency and performance of electronic devices such as smartphones or computers, for example, we will need to improve our understanding of – and ability to control – electron movements on ever smaller scales. Until recently, attosecond physics was purely an area of basic research. However, two or three years ago, a completely new opportunity emerged for practical applications: we want to use femtosecond and attosecond measuring technology to analyse blood samples and to detect minute changes in their composition. We are investigating whether these changes are specific enough to allow diseases to be diagnosed unambiguously – ideally in their initial stages. For us, this heralds a new era of attosecond physics, because it would be the first practical application to emerge from our relatively young field of research – and one that could have a direct impact on people’s everyday lives.

Max Planck Schools bring together expertise from numerous different disciplines

Outstanding doctoral students can make key contributions to all of our active fields of research. Although we use our experience, as senior scientists, to define the direction, strategy and aims, it is ultimately the doctoral students’ creativity and fresh ideas that bring the research to life. The Max Planck Schools create new opportunities by allowing their students to spend a certain amount of time at several different locations. This can be very helpful in our field of research, since we are reliant on expertise from many different disciplines.
I think that working at different locations gives doctoral students the chance to acquire abilities that are vital to our research and then to incorporate those skills into our work.
https://www.maxplanckschools.de/93791/ferenc-krausz
Hort@ provides highly specialised services, on a national and international level, in the field of crop production. The aim is to increase the competitiveness of agricultural and agroalimentary companies.

OUR SERVICES

Each DSS (decision support system) is able to convert complex weather and crop phenomena into easy, clear operative choices in the field. DSSs automatically gather, organise, interpret and integrate information coming from real-time monitoring of the “farming environment”.
https://www.horta-srl.it/en/
As we enter the fourth wave of the industrial revolution and the technologies used in everyday life develop significantly, it is becoming increasingly important to interact with machines and electronic devices in the most effective way possible. Here, the Human-Machine Interface (HMI) comes in as a helpful solution that allows the user to easily communicate with a device. In this publication, created in close collaboration with our expert Przemysław Nogaj, we focus on what HMI is and where it can be used. We also tackle the impact of new technologies on the trends and future of that area.

HMI meaning

HMI can be defined as a user interface or dashboard that allows interaction and communication between a human and a machine, device, program, or system. Many people use the term primarily to refer to screens, touchscreens, and keyboards in industrial settings, but in fact, it can be applied to all interfaces and interactions that allow communication with any type of device. Historically, the term was created with industrial machines in mind, but now, with the growing use of wearable technologies and more digitally advanced home electronics, HMI applies to all devices operated by the user in day-to-day life. Therefore, the modern understanding of the term HMI concerns all of us, whether we work in an industrial area or not.

The evolution of HMI runs from the historical use of solutions like Batch Processing and Command-Line Interfaces to the now commonly known Graphical User Interfaces and touchscreens. However, with more and more advanced technologies and commonly used everyday devices, it has to develop further. Therefore, the use of interfaces based not only on touch but also on gestures, voice interaction, and augmented reality is becoming more popular.

"Regarding the HMI, I would call it more of a Human-Machine Interaction rather than just an Interface. I think that now, interaction should be the more dominant aspect of this technology. HMI is no longer just a simple interface between the operator and an industrial machine. In a world where electronic devices are everywhere, and we use them daily, it is crucial to understand the relevance of the HMI and its proper design. Devices nowadays must deal with many of the tasks of everyday life, so interacting with them should be quick and intuitive so that they can improve the user's quality of life." - Przemysław Nogaj, Spyrosoft’s Head of HMI Technology

The use of HMI

HMI is used across several industries, in many devices and wearables, to ensure the most effective interaction with machines. This solution can help to optimise the gathering and processing of information, which results in making better and quicker decisions. Industrial use of HMI occurs in sectors such as energy, wastewater, transportation, and healthcare, as well as in manufacturing various types of goods, from cars and pharmaceuticals to food and beverages. The HMI can be particularly useful for controlling, monitoring and managing the manufacturing line, diagnosing problems, and collecting and visualising data. In this case, it is utilised by professional operators and engineers. Besides industrial usage, there are also HMI solutions incorporated in everyday devices. These days everyone has some sort of wearable, smart TV, fridge or washing machine. All such devices require the presence of an HMI to meet the daily needs of users, and should be designed to fit into their bodies, lives, and cognitive abilities.
Some illustrative examples of the use of HMI can be seen in sectors like automotive and healthcare. The first can apply this solution to boost the functionality and comfort of the vehicle, or even to distinguish the car from its competitors in the market. It can help build brand loyalty and a bigger attachment to the car. Of course, the creation of a suitable HMI is left up to the carmakers. It is up to them to decide whether to use solutions similar to those of other companies, or to stand out and create their own system and a unique user experience. As for the healthcare sector, HMI can be applied to both typical ambulatory machines and personal use devices. In this type of equipment, a properly created interface can improve the diagnostic process and the way of receiving information vital to our health, which can ultimately result in an elevated quality of life for the user or patient.

Trends and future of HMI

Constantly evolving technology and the new ways and habits in which we use it can encourage companies to develop more innovative solutions for Human-Machine Interaction. According to our expert Przemysław Nogaj, some of the biggest trends in this area are:

Voice interaction

Communication with the machine through voice and voice recognition is a trend that is already being implemented in many tools. Due to changing consumer habits, especially among young people, and the growing number of users who take voice interaction with the machine as a natural step, this trend will only grow in the future. For many, especially young people, this solution seems easier and less time-consuming, since they don't have to press any buttons or type any words, only say what they actually want.

Gesture interaction

Another trend, one that might seem slightly less natural and comfortable, is operating the device with gestures. This solution may look effective but can be difficult and inconvenient to use in some cases. For example, in the automotive industry, where gesture interaction has been incorporated into cars, it has proven difficult to maintain the precision of movements, especially when driving. Overall, this technology still has some room for improvement, but with the right approach and modifications, it can be managed effectively.

At Spyrosoft we have our own concept about gestures in HMIs (a minimal sketch of this idea appears at the end of this article). Below you will find some thoughts about it from our expert, Przemysław Nogaj:

"We still think that touching, and feeling something under your hand, is rather a natural way of communicating between humans and machines. What is needed in this solution is more stability during the interaction. Therefore, our idea is based on gestures made on a touchscreen or touch-sensitive device. This would mean that the device doesn't need to have a visual interface, but still allows a range of operations and commands to be performed using gestures that the user can quickly master. That solution could be applied, e.g., to cars, where settings like air conditioning could be changed with a simple three-finger swipe to the left or right. This way, gestures are easier to perform, even without looking at the screen during the ride."

AR and XR tools

An important direction for HMI is the use of technologies like Extended and Augmented Reality. These solutions follow the growing interest of customers. Over the last few years, AR and XR tools have matured and devices such as Magic Leap have started to appear on the market.
They are designed to eliminate the flaws and inconveniences of earlier versions of the technologies and to offer improved solutions. Augmented Reality tools also have great potential in the manufacturing area, where they can connect operators and enable them to interact with machines and data in real time from different locations. Such a solution supports the mobility of employees and increases their efficiency and productivity. To learn more about AR, check our Augmented reality 101 article.

AI and Machine Learning

The primary goal of HMI is to make the interaction between user and device easier. Therefore, the use of AI-powered HMI, which makes devices and the processes of communicating with them smarter, seems to be a natural next step. In time, this solution can help to gather large amounts of data that will lead to a better human-machine understanding. The system will be able to learn the user's behaviour and its context, and then adapt that knowledge and make suggestions, which will improve the decision-making process and the efficiency of the machine.

Gamification

An interesting trend in HMI is also the use of solutions that allow the gamification of interactions with devices. As humans, we like to be rewarded through prizes, new experiences, or new achievements, such as the acknowledgement of making our 100th coffee in the coffee machine. All this makes us want to interact further and see what happens next. Thanks to that, we are more involved and motivated, which can contribute to the further use of a device, or to a general way of working.

Robotics

Another trend comes from the ever-evolving field of robotics and the need to determine what Human-Machine Interfaces should look like in that area. There is some fear that machines will become too human. Therefore, manufacturers must not overdo the humanoid appearance of their devices, as this may cause some resistance among consumers. However, people still like to see some small human elements in machines, like emotions. This could be easily solved with a display that shows emojis as a reaction to interaction with a person. This allows us to build a relationship with the machine and also to create some brand loyalty.

Neurological connection

To look even further into the future, there are already ideas, like Neuralink, that aim to create a neurological connection between people and machines. This technology aims to make it possible to control a device with your thoughts and to receive information through sensory feedback. This concept may still sound like an abstraction, but it seems to be the next step in improving communication with machines. It would certainly help eliminate some time-consuming steps in the interaction and thus make the entire process faster and more productive.

Benefits of using HMI

These days, accurate HMI is especially important. In a world of another industrial revolution, and of societies more often described as augmented societies, most people can't imagine living without machines and electronic devices. Therefore, the ability to properly connect with machines seems to be crucial for the quality of our everyday life. The use of HMI in various machines, wearables, and devices has many benefits. In day-to-day life, it can help us to get any information we need much quicker, and to conduct daily tasks more conveniently and comfortably. These two are extremely important issues considering how fast we live today, and the information load we are facing.
Additionally, HMI combined with the newest technologies can help expand our reality and explore the world in ways we have not known before. For example, interfaces with AR and XR can entirely change our way of travelling. We no longer need paper maps, travel guides or informational stands; all we need is an adequate HMI that gives us information about any museum or gallery in real time, directly to us.

Good HMI can also bring many advantages to industrial businesses. Some of these benefits are:
- Simplification of manufacturing processes and operations.
- Ability to get easy access to critical information and real-time data.
- Providing quick, real-time feedback and remote access to the machines.
- Possibility of customising and personalising the user interface for maximum ease of use and functionality.
- Reducing downtime and improving efficiency and productivity, thus decreasing costs and waste.

Over to you

The future of HMI is shaped by new and constantly evolving technologies. Things like cloud computing, voice recognition, AR, machine learning, IoT and cognitive computing are all likely to have a great impact on the development of future Human-Machine Interfaces. If you would like to know how to apply the newest HMI solutions and what advantages they can bring to your organisation, get in touch with our team.
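As a concrete illustration of the three-finger-swipe concept quoted earlier, here is a minimal sketch of mapping multi-finger touch gestures to in-car commands. All names here are hypothetical and invented for illustration; this is not Spyrosoft's implementation nor any real automotive API.

```python
# Hypothetical gesture-to-command mapping in the spirit of the
# three-finger-swipe idea above; not a real automotive API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Gesture:
    fingers: int
    direction: str  # "left", "right", "up", or "down"

GESTURE_COMMANDS = {
    Gesture(3, "right"): "ac_temperature_up",
    Gesture(3, "left"): "ac_temperature_down",
    Gesture(2, "up"): "volume_up",
    Gesture(2, "down"): "volume_down",
}

def handle(gesture: Gesture) -> str:
    """Resolve a recognized gesture to a command; ignore unmapped ones."""
    return GESTURE_COMMANDS.get(gesture, "no_op")

assert handle(Gesture(3, "right")) == "ac_temperature_up"
assert handle(Gesture(1, "left")) == "no_op"  # unmapped gesture is ignored
```

Because the mapping is a plain lookup table, the same gestures work whether or not the surface has a visual interface, which is the point of the touch-based concept.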
https://spyro-soft.com/blog/using-hmi-101-meaning-trends-and-profits
A genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover.

Fitness Function:- A fitness function is a particular type of objective function used to summarise, as a single figure of merit, how close a given design solution is to achieving its aims. Moreover, the fitness function must not only correlate closely with the goal set by the designers, it must also be computed quickly. Speed of execution is very important, as a typical genetic algorithm must be iterated many times in order to produce a usable result for a non-trivial problem. Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable. (A minimal GA sketch appears at the end of this section.)

Introduction to 2.5G and 3G Networks:- The second generation cellular systems are also called 2G. The 2G phase began in the 1990s. 2G systems are based on radio technologies which include frequency, code and time division multiple access. Examples of 2G systems are GSM (Europe), PDC (Japan), and IS-95 (USA). Circuit-switched data links are used by 2G systems, with transmission speeds of 10-20 kbps on the up-link and down-link. The demand for higher data rates, instant availability, data volume-based charging and the lack of radio spectrum allocated for 2G led to the introduction of 2.5G and 3G. Examples of 2.5G systems are GPRS and PDC-P. Examples of 3G systems are wideband CDMA and CDMA2000. 3G systems provide both packet-switched and circuit-switched connectivity.
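As promised above, here is a minimal, self-contained genetic algorithm sketch. The fitness function is a toy (count the 1-bits in a bit string), and selection, crossover and mutation are in their simplest textbook forms, so this illustrates the mechanism rather than any production GA library.

```python
# Minimal genetic algorithm: evolve bit strings toward all ones.
# Fitness = number of 1-bits, a toy objective that is fast to compute.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 60, 0.02

def fitness(genome):
    return sum(genome)

def select(population):
    """Tournament selection: the better of two random individuals."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """Single-point crossover (inheritance of genes from two parents)."""
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: "
      f"{fitness(best)}/{GENOME_LEN}")
```

Note how the speed of the fitness function matters here: it is called twice per selection, for every offspring, in every generation.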
https://blog.oureducation.in/introduction-of-genetic-algorithm-and-2-5-g-and-3-g-networks/
Because the squared differences make it easier to derive a regression line. Indeed, to find that line we need to compute the first derivative of the Cost function, and it is much harder to compute the derivative of absolute values than of squared values.

Minimizing the Cost Function

The goal of any Machine Learning algorithm is to minimize the Cost Function. This is because a lower error between the actual and the predicted values signifies that the algorithm has done a good job in learning. Since we want the lowest error value, we want those 'm' and 'b' values which give the smallest possible error.

How do we actually minimize any function? If we look carefully, our Cost function is of the form Y = X². In a Cartesian coordinate system, this is the equation of a parabola. [Figure: a parabola, with its minimum marked by a red dot.] To minimise the function above, we need to find the value of X that produces the lowest value of Y, which is the red dot. It is quite easy to locate the minima here since it is a 2D graph, but this may not always be the case, especially with higher dimensions. For those cases, we need to devise an algorithm to locate the minima, and that algorithm is called Gradient Descent.

Gradient Descent

Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. It is an iterative optimisation algorithm used to find the minimum value of a function.

Intuition

Consider that you are walking along the graph below, and you are currently at the 'green' dot. Your aim is to reach the minimum, i.e. the 'red' dot, but from your position you are unable to view it. [Figure 2: a curve with a green starting point, a blue intermediate point, and a red minimum.] Possible actions would be: you might go upward or downward, and once you decide which way to go, you might take a bigger step or a smaller step to reach your destination. Essentially, there are two things you need to know to reach the minima: which way to go and how big a step to take.

The Gradient Descent algorithm helps us make these decisions efficiently and effectively with the use of derivatives. A derivative is a term that comes from calculus and is calculated as the slope of the graph at a particular point. The slope is described by drawing a tangent line to the graph at that point. So, if we are able to compute this tangent line, we may be able to compute the desired direction to reach the minima. We will talk about this in more detail later in the article.

The Minimum Value

In the same figure, if we draw a tangent at the green point, we know that if we are moving upwards, we are moving away from the minima, and vice versa. The tangent also gives us a sense of the steepness of the slope. The slope at the blue point is less steep than that at the green point, which means it will take much smaller steps to reach the minimum from the blue point than from the green point.

Mathematical Interpretation of the Cost Function

Let us now put all these learnings into a mathematical formula. In the equation y = mX + b, 'm' and 'b' are its parameters. During the training process, there will be a small change in their values. Let that small change be denoted by δ. The parameters will be updated as m = m − δm and b = b − δb respectively. Our aim here is to find those values of m and b in y = mx + b for which the error is minimum, i.e. the values which minimize the cost function. Rewriting the cost function for N data points:

Cost = (1/N) · Σᵢ (yᵢ − (m·xᵢ + b))²

The idea is that, by being able to compute the derivative/slope of this function, we can find its minimum.
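As a quick illustration of this formula (a toy example of my own, not from the original article), here is the cost function in code:

```python
# Mean squared error cost for a candidate line y = m*x + b.
def cost(m, b, xs, ys):
    n = len(xs)
    return sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys)) / n

xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]   # toy data lying exactly on y = 2x + 1
print(cost(2.0, 1.0, xs, ys))  # 0.0: the perfect fit has zero cost
print(cost(1.0, 0.0, xs, ys))  # 13.5: a worse line has a higher cost
```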
The Learning Rate
The size of the steps taken to reach the minimum, or bottom, is called the learning rate. We can cover more ground with larger steps/a higher learning rate, but we are at the risk of overshooting the minimum. On the other hand, small steps/a smaller learning rate will consume a lot of time to reach the lowest point. The visualizations below give an idea of the learning rate concept. See how in the third figure we reach the minimum point with the minimum number of steps: this is the optimum learning rate for this problem. We saw that when the learning rate is too small, it takes a lot of steps to converge. On the other hand, when the learning rate is too high, Gradient Descent fails to reach the minimum, as can be seen in the visualization below. You can experiment with different learning rates in the "Optimizing Learning Rate" exercise of Google's Machine Learning Crash Course (developers.google.com).

Derivatives
Machine learning uses derivatives in optimization problems. Optimization algorithms like gradient descent use derivatives to decide whether to increase or decrease the weights in order to increase or decrease an objective function. If we are able to compute the derivative of a function, we know in which direction to proceed to minimize it. Primarily we shall be dealing with two concepts from calculus.

Power Rule
The power rule calculates the derivative of a variable raised to a power: d/dx(xⁿ) = n·xⁿ⁻¹.

Chain Rule
The chain rule is used for calculating the derivative of composite functions. If a variable z depends on the variable y, which itself depends on the variable x, so that y and z are dependent variables, then z, via the intermediate variable y, depends on x as well. This is called the chain rule and, in Leibniz's notation, is written as dz/dx = (dz/dy)·(dy/dx). Let us understand it through an example. Using the power and chain rules for derivatives, let's calculate how the cost function changes relative to m and b. This deals with the concept of partial derivatives, which says that if there is a function of two variables, then to find the partial derivative of that function with respect to one variable, treat the other variable as constant.

Calculating Gradient Descent
Let us now apply these rules of calculus to our original equation and find the derivative of the cost function with respect to both 'm' and 'b'. Revising the cost function equation, E = (y − (mx + b))², for simplicity, let us get rid of the summation sign. The summation part is important, especially with the concept of stochastic gradient descent (SGD) vs. batch gradient descent. In batch gradient descent we look at the error of all the training examples at once, while in SGD we look at each error one at a time. However, just to keep things simple, we will assume that we are looking at each error one at a time. Now let's calculate the gradient of the error with respect to both m and b, writing ŷ = mx + b for the prediction:

∂E/∂m = 2(ŷ − y)·x
∂E/∂b = 2(ŷ − y)

Plugging these values back into the update rule and multiplying by the learning rate η: the 2 in these equations isn't that significant, since it just says that we have a learning rate twice as big or half as big, so let's get rid of it too. So, ultimately, this entire article boils down to two simple equations which represent gradient descent:

m¹ = m⁰ − η·x·(ŷ − y)
b¹ = b⁰ − η·(ŷ − y)
Here m¹, b¹ are the next position parameters and m⁰, b⁰ are the current position parameters. Hence, to solve for the gradient, we iterate through our data points using our new m and b values and compute the partial derivatives. This new gradient tells us the slope of our cost function at our current position and the direction we should move in to update our parameters. The size of our update is controlled by the learning rate.

Conclusion
The point of this article was to demonstrate the concept of gradient descent. We used gradient descent as our optimization strategy for linear regression, drawing the line of best fit to measure the relationship between student heights and weights. However, it is important to note that the linear regression example has been chosen for simplicity; gradient descent can be used with other machine learning techniques too.
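To tie the two update equations together, here is a minimal batch gradient descent sketch in Python (not the article's own code; the learning rate, epoch count, and toy data are illustrative assumptions):

import numpy as np

def gradient_descent(x, y, lr=0.01, epochs=1000):
    """Fit y ~ m*x + b by batch gradient descent on the squared error."""
    m, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        y_hat = m * x + b
        # Partial derivatives of the mean squared error; the factor of 2
        # is folded into the learning rate, as in the article.
        dm = np.sum((y_hat - y) * x) / n
        db = np.sum(y_hat - y) / n
        m -= lr * dm
        b -= lr * db
    return m, b

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])
print(gradient_descent(x, y))   # approaches m ≈ 2, b ≈ 0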
http://news.datascience.org.ua/2019/03/18/understanding-the-mathematics-behind-gradient-descent/
Chapter 2. Fitness Functions

An evolutionary architecture supports guided, incremental change across multiple dimensions.
our definition

As noted, the word guided indicates that some objective exists that the architecture should move toward or exhibit. The authors borrow a concept from evolutionary computing called "fitness functions," used in genetic algorithm design to define success. Evolutionary computing includes a number of mechanisms that allow a solution to gradually emerge via small changes in each generation of the software. At each generation of the solution, the engineer assesses the current state: is it closer to or further away from the ultimate goal? For example, when using a genetic algorithm to optimize wing design, the fitness function assesses wind resistance, weight, airflow, and other characteristics desirable in a good wing design. Architects define a fitness function to explain what better is and to help measure when the goal is met. In software, fitness functions check that developers preserve important architectural characteristics. We use this concept to define architectural fitness functions:

An architectural fitness function provides an objective integrity assessment of some architectural characteristic(s).
our definition

The fitness function protects the various architectural characteristics required for the system. The specific architectural requirements differ greatly across systems and organizations, based on business drivers, technical capabilities, ...
https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/ch02.html
Optimization of a Two-Variable Function

The function to be optimized is given by:

f(x, y) = (1 − x)²·e^(−x² − (y+1)²) − (x − x³ − y³)·e^(−x² − y²)

The maximum value of this two-variable function is desired; however, Matlab's gatool finds the minimum of fitness functions, and so, as in the previous example, the function must be altered by negating it:

g(x, y) = −f(x, y)

Now we must enter this function, as before, into a Matlab function file. Start Matlab and change the working directory to your Knowledge Based Systems folder (i.e. U:\Current Class\KBS\). Create an m-file by either typing "edit fitness2" at the command prompt or clicking the new-file icon on the toolbar. The Matlab Genetic Algorithm accepts multiple-variable functions, but these variables must be contained in an array. Therefore every "x" in the above equation is replaced with "x(1)" and every "y" with "x(2)". Once the m-file editor is open, enter the following code:

function y = fitness2(x)
y = -((1-x(1))^2*exp(-x(1)^2-(x(2)+1)^2) - (x(1) - x(1)^3 - x(2)^3)*exp(-x(1)^2-x(2)^2));
end

Save the file. (Note: the file must be saved under the same name as the function, i.e. fitness2 as shown above.) In the Matlab command window type "gatool". This will open the genetic algorithm tool as shown in Figure 1. Enter the name of your fitness function in the Fitness function text box, preceded by an @ symbol, as shown in Figure 1. Enter 2 for the number of variables and select Best fitness as the plot option. Run the solver with all the default settings and observe results similar to Figure 2. The optimum value of x can be seen after the simulation in the gatool window in the Final point field. Record this number. Next, modify the number of generations in the "Stopping Criteria" drop box: change the number of generations to 25 and re-run the solution. Your results should resemble those of Figure 3.
https://www.studymode.com/essays/Matlab-Gatool-916622.html
The genetic algorithm is a method for moving from one population of chromosomes (encoded solutions) to a new population by using a kind of natural selection together with the genetics-inspired operators of crossover, mutation, and inversion. The genetic algorithm should provide for a chromosomal representation of solutions to the problem, the creation of an initial population of solutions, and an evaluation function.

There are algorithms that create selection pressure in other ways, and you can do whatever works for you. But in the canonical version of a GA, you do selection with replacement. That said, many people find that other selection schemes, like tournament selection, perform better than roulette wheel across a pretty wide range of problems anyway.

Genetic Algorithms, also referred to simply as "GAs", are algorithms inspired by Charles Darwin's theory of natural selection that aim to find optimal solutions for problems we don't know much about. For example: how do you find a given function's maximum or minimum when you cannot differentiate it? The approach is based on three concepts: selection, reproduction, and mutation. We generate a random set of candidate solutions.

In genetic algorithms, the roulette wheel selection operator has the essence of exploitation, while rank selection is influenced by exploration. In this paper, a blend of these two selection operators is proposed that is a perfect mix of both, i.e. exploration and exploitation. The blended selection operator is more exploratory in nature in initial iterations and becomes more exploitative with the passage of time.

Need help with roulette wheel selection in a genetic algorithm. It selects the indices of an array using the values as weights. No cumulative weights, due to the mathematical properties. This could be improved using Kahan summation, or by reading through the doubles as an iterable if the array was too big to initialize at once. I wanted the same and so created this myself.

A set of selection techniques including roulette wheel selection (RWS), linear rank selection (LRS), tournament selection (TS), stochastic remainder selection (SRS), and stairwise selection (SWS) were considered, and their performance was evaluated through ten well-known benchmark functions with 10 to 100 dimensions. These benchmark functions cover various characteristics, including convexity.

This tutorial covers the canonical genetic algorithm as well as more experimental forms of genetic algorithms, including parallel island models and parallel cellular genetic algorithms. The tutorial also illustrates genetic search by hyperplane sampling. The theoretical foundations of genetic algorithms are reviewed, including the schema theorem as well as recently developed exact models of the canonical genetic algorithm.

Roulette wheel selection: selection of the fittest. The basic part of the selection process is to stochastically select individuals from one generation to create the basis of the next generation. The requirement is that the fittest individuals have a greater chance of survival than weaker ones. This replicates nature, in that fitter individuals will tend to have a better probability of survival and will go forward to form the mating pool for the next generation.

Thus fitness-proportionate selection is used, which is also known as roulette wheel selection: a genetic operator used in genetic algorithms for selecting potentially useful solutions for recombination.

4. Reproduction. Generation of offspring happens in two ways: crossover and mutation. a) Crossover. Crossover is the most vital stage in the genetic algorithm. During crossover, a random crossover point is selected.

Roulette Wheel Selection. Parents are selected according to their fitness.
The better the chromosomes are, the more chances they have to be selected. Imagine a roulette wheel where all the chromosomes in the population are placed; the size of each section of the roulette wheel is proportional to the value of the fitness function of every chromosome.

Perform roulette wheel selection. A wheel is a fitness-proportional roulette wheel as returned by the makeRouletteWheel function. The parameter s is not required, though not disallowed, at the time of calling by the evolutionary algorithm. If it is not supplied, it will be set to a random float between 0 and 1. This function returns the individual that bet on the corresponding section of the roulette wheel.

Roulette-wheel selection is a frequently used method in genetic and evolutionary algorithms and in the modeling of complex networks. Existing routines select one of N individuals using search algorithms of O(N) or O(log(N)) complexity. We present a simple roulette-wheel selection algorithm, which typically has O(1) complexity and is based on stochastic acceptance instead of searching.

Fitness-proportionate selection, also known as roulette wheel selection, is a genetic operator used in genetic algorithms for selecting potentially useful solutions for recombination. A genetic operator is an operator used in genetic algorithms to guide the algorithm towards a solution to a given problem.

January 2007: Genetic Algorithms (GAs) Introduction. The term Genetic Algorithm (or GA) describes a set of methods which can be used to optimise complex problems. As the name suggests, the processes employed by GAs are inspired by natural selection and genetic variation. To achieve this, a GA uses a population of possible solutions to a problem and applies a series of processes to them.

The function of operators in an evolutionary algorithm (EA) is very crucial, as the operators have a strong effect on the performance of the EA. In this paper, a new selection operator is introduced for a real-valued encoding problem, which specifically exists in a shrimp diet formulation problem. This newly developed selection operator is a hybrid between two well-known, established selection operators.

In this series of video tutorials, we are going to learn about Genetic Algorithms, from theory to implementation. After a brief review of the theory behind EAs and GAs, two main versions of genetic algorithms, namely the Binary Genetic Algorithm and the Real-coded Genetic Algorithm, are implemented from scratch and line-by-line, using both Python and MATLAB. This course is instructed by Dr.

Hello everyone. So I tried implementing a simple genetic algorithm to solve the switch box problem. However, I'm not really sure if my implementation of roulette wheel selection is correct, as new generations tend to have individuals with the same fitness value (I know that members with better fitness have a better chance to be chosen, but if I had a population of 10, 8 of them will be the same).
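As a concrete illustration of the scheme described above, here is a short Python sketch of roulette wheel selection (the function name and toy population are illustrative assumptions, not code from any of the sources quoted here):

import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness.
    Assumes all fitness values are non-negative."""
    total = sum(fitnesses)
    spin = random.uniform(0, total)       # where the 'ball' lands on the wheel
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit                 # each section's size is its fitness
        if spin <= cumulative:
            return individual
    return population[-1]                 # guard against floating-point rounding

pop = ["A", "B", "C", "D"]
fit = [1.0, 2.0, 3.0, 4.0]                # "D" is selected ~40% of the time
parents = [roulette_select(pop, fit) for _ in range(10)]
print(parents)

In practice, Python's built-in random.choices(pop, weights=fit) implements the same O(N) scheme; the O(1) stochastic-acceptance variant mentioned above instead picks a uniformly random individual and accepts it with probability fitness/max_fitness, retrying on rejection.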
http://colligan.vpvipcasinov.xyz/play/Roulette-wheel-selection-genetic-algorithm.html
Problem: Given a cost function f: R^n → R, find an n-tuple that minimizes the value of f. Note that minimizing the value of a function is algorithmically equivalent to maximizing it (since we can simply negate the cost function and minimize −f). Many of you with a background in calculus/analysis are likely familiar with simple optimization for single-variable functions. For instance, the function f(x) = x^2 + 2x can be optimized by setting the first derivative equal to zero, obtaining the solution x = −1 and yielding the minimum value f(−1) = −1. This technique suffices for simple functions with few variables. However, it is often the case that researchers are interested in optimizing functions of several variables, in which case the solution can only be obtained computationally. One excellent example of a difficult optimization task is the chip floor planning problem. Imagine you're working at Intel and you're tasked with designing the layout for an integrated circuit. You have a set of modules of different shapes/sizes and a fixed area on which the modules can be placed. There are a number of objectives you want to achieve: maximizing the ability for wires to connect components, minimizing net area, minimizing chip cost, etc. With these in mind, you create a cost function, taking all, say, 1000 variable configurations and returning a single real value representing the 'cost' of the input configuration. We call this the objective function, since the goal is to minimize its value. A naive algorithm would be a complete space search: we search all possible configurations until we find the minimum. This may suffice for functions of few variables, but the problem we have in mind would require such a brute-force algorithm to run in O(n!). Due to the computational intractability of problems like these, and other NP-hard problems, many optimization heuristics have been developed in an attempt to yield a good, albeit potentially suboptimal, value. In our case, we don't necessarily need to find a strictly optimal value; finding a near-optimal value would satisfy our goal. One widely used technique is simulated annealing, by which we introduce a degree of stochasticity, potentially shifting from a better solution to a worse one, in an attempt to escape local minima and converge to a value closer to the global optimum. Simulated annealing is based on metallurgical practices by which a material is heated to a high temperature and cooled. At high temperatures, atoms may shift unpredictably, often eliminating impurities as the material cools into a pure crystal. This is replicated via the simulated annealing optimization algorithm, with the energy state corresponding to the current solution. In this algorithm, we define an initial temperature, often set to 1, and a minimum temperature, on the order of 10^-4. The current temperature is multiplied by some fraction alpha and thus decreased until it reaches the minimum temperature. For each distinct temperature value, we run the core optimization routine a fixed number of times. The optimization routine consists of finding a neighboring solution and accepting it with probability e^((f(c) − f(n))/T), where c is the current solution, n is the neighboring solution, and T is the current temperature. A neighboring solution is found by applying a slight perturbation to the current solution. This randomness is useful to escape the common pitfall of optimization heuristics: getting trapped in local minima.
By potentially accepting a less optimal solution than we currently have, and accepting it with probability inversely related to the increase in cost, the algorithm is more likely to converge near the global optimum. Designing a neighbor function is quite tricky and must be done on a case-by-case basis, but below are some ideas for finding neighbors in locational optimization problems.
- Move all points 0 or 1 units in a random direction
- Shift input elements randomly
- Swap random elements in the input sequence
- Permute the input sequence
- Partition the input sequence into a random number of segments and permute the segments
One caveat is that we need to provide an initial solution so the algorithm knows where to start. This can be done in two ways: (1) using prior knowledge about the problem to input a good starting point, and (2) generating a random solution. Although generating a random solution is worse and can occasionally inhibit the success of the algorithm, it is the only option for problems where we know nothing about the landscape. There are many other optimization techniques, although simulated annealing is a useful, stochastic optimization heuristic for large, discrete search spaces in which optimality is prioritized over time. Below, I've included a basic framework for locational-based simulated annealing (perhaps the most applicable flavor of optimization for simulated annealing). Of course, the cost function, candidate generation function, and neighbor function must be defined based on the specific problem at hand, although the core optimization routine has already been implemented. [The code listing did not survive extraction; only its output remains.]

Output: -1.0
[X, -, X, X, X]
[-, X, X, X, X]
[-, X, X, X, X]
[-, X, X, X, X]
[-, X, X, X, X]
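Since the original listing is missing, here is a minimal, generic sketch of such a simulated annealing loop, following the temperature schedule and acceptance rule described above. It is not the author's framework: the toy cost and neighbor functions are illustrative assumptions, reusing the single-variable example f(x) = x² + 2x from earlier rather than the locational problem.

import math
import random

def simulated_annealing(initial, cost, neighbor,
                        t_init=1.0, t_min=1e-4, alpha=0.9, iters=100):
    """Generic simulated annealing: cool from t_init to t_min by factor alpha."""
    current = initial
    t = t_init
    while t > t_min:
        for _ in range(iters):            # fixed number of tries per temperature
            candidate = neighbor(current)
            delta = cost(current) - cost(candidate)
            # Always accept improvements; accept worse solutions with
            # probability exp(delta / t) (the Metropolis criterion).
            if delta > 0 or random.random() < math.exp(delta / t):
                current = candidate
        t *= alpha
    return current

# Toy usage: minimize f(x) = x^2 + 2x, perturbing x by small random steps.
best = simulated_annealing(
    initial=5.0,
    cost=lambda x: x**2 + 2*x,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
)
print(best)   # close to the analytic minimizer x = -1, where f(-1) = -1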
https://tutorialspoint.dev/algorithm/mathematical-algorithms/simulated-annealing
Abstract: Motivation: Analysing potential organic molecules for inhibiting HIV-1 protease against its drug resistance, by predicting their fitness using a Genetic Algorithm, will enhance research into identifying potential leads for inhibiting the aspartyl protease of HIV type I. Methods: Drug resistance is predicted for all FDA-approved HIV-1 protease inhibitors and for organic leads synthesized by Dr. Deeb and Dr. Godzari, with wild-type and mutant strains of subtype B. Initially, the structural features of the HIV-1 protease-inhibitor complexes were analysed on the basis of binding energies. Finally, the fitness function in the Genetic Algorithm was used for optimizing the inhibition of a specific organic lead, with three-fold cross-validation. Results: Structural data mining performed by the fitness function in the Genetic Algorithm gave pattern identities between HIV-1 protease (wild type and mutants) of subtype B against organic leads and FDA-approved inhibitors of HIV-1 protease. The Genetic Algorithm gives 80% accuracy for wild-type inhibition and 75% accuracy for mutant inhibition in the final optimization by the fitness function. Conclusion: Organic leads have greater affinity than the FDA-approved inhibitors (specifically Mol-23, which has good correlation with pIC50 and H-bonding descriptors). The I84V mutant still remains resistant to both FDA-approved inhibitors and organic molecules. In future work, the dynamics of the molecules will be analysed for all FDA-approved protease inhibitors and potential organic leads with the wild-type and mutant proteases of HIV type I.
https://www.amrita.edu/publication/analysis-of-drug-resistance-to-hiv-1-protease-using-fitness-function-in-genetic-algorithm/
Line search methods

Authors: Lihe Cao, Zhengyi Sui, Jiaqi Zhang, Yuqing Yan, and Yuhui Gu (6800 Fall 2021).

Introduction
The line search method is an iterative approach to finding a local minimum of a multidimensional nonlinear function using the function's gradients. It computes a search direction and then finds an acceptable step length that satisfies certain standard conditions. Line search methods can be categorized into exact and inexact methods. The exact method, as the name suggests, aims to find the exact minimizer at each iteration, while the inexact method computes step lengths to satisfy conditions including the Wolfe and Goldstein conditions. Line search and trust-region methods are two fundamental strategies for locating the new iterate given the current point. With the ability to solve unconstrained optimization problems, line search is widely used in many fields, including machine learning and game theory.

Generic Line Search Method

Basic Algorithm
- Pick a starting point x_0.
- Repeat the following steps until x_k converges to a local minimum x*:
  - Choose a descent direction p_k starting at x_k, i.e. a direction with ∇f(x_k)^T p_k < 0 for ∇f(x_k) ≠ 0.
  - Find a step length α_k > 0 so that f(x_k + α_k p_k) < f(x_k).
  - Set x_{k+1} = x_k + α_k p_k.

Search Direction for Line Search
The direction of the line search should be chosen to make f decrease moving from point x_k to x_{k+1}, and it is usually related to the gradient ∇f(x_k). The most obvious direction is −∇f(x_k), because it is the one that makes f decrease most rapidly. This claim can be verified by Taylor's theorem: f(x_k + αp) = f(x_k) + α ∇f(x_k)^T p + O(α²), where α > 0. The rate of change in f along the direction p at x_k is the coefficient of α, namely ∇f(x_k)^T p. Therefore, the unit direction of most rapid decrease is the solution to min_p ∇f(x_k)^T p subject to ‖p‖ = 1; p = −∇f(x_k)/‖∇f(x_k)‖ is the solution, and this direction is orthogonal to the contours of the function. In the following sections, this will be used as the default direction of the line search. However, the steepest descent direction is not the most efficient, as the steepest descent method does not pass the Rosenbrock test (see Figure 1). Carefully designed descent directions deviating from the steepest direction can be used in practice to produce faster convergence.

Step Length
The step length α_k is a non-negative value such that f(x_k + α_k p_k) < f(x_k). When choosing the step length, there is a trade-off between achieving a substantial reduction of f and not spending too much time finding the solution. If α_k is too large, the step will overshoot, while if it is too small, finding the convergent point is time-consuming. Exact and inexact line searches are used to find the value of α_k; more detail about these approaches is given in the next section.

Convergence
For a line search algorithm to be reliable, it should be globally convergent, that is, the gradient norms ‖∇f(x_k)‖ should converge to zero as the iterations proceed, i.e. lim_{k→∞} ‖∇f(x_k)‖ = 0. It can be shown from Zoutendijk's theorem that if the line search algorithm satisfies the (weak) Wolfe conditions (similar results also hold for the strong Wolfe and Goldstein conditions) and has a search direction that makes an angle with the steepest descent direction that is bounded away from 90°, the algorithm is globally convergent. Zoutendijk's theorem states that, given an iteration x_{k+1} = x_k + α_k p_k, where p_k is a descent direction and α_k is a step length satisfying the (weak) Wolfe conditions, if the objective f is bounded below on R^n and continuously differentiable in an open set N containing the level set L = {x : f(x) ≤ f(x_0)}, where x_0 is the starting point of the iteration, and the gradient ∇f is Lipschitz continuous on N, then Σ_{k≥0} cos²θ_k ‖∇f(x_k)‖² < ∞, where θ_k is the angle between p_k and the steepest descent direction −∇f(x_k).
The Zoutendijk condition above implies that cos²θ_k ‖∇f(x_k)‖² → 0 as k → ∞, by the n-th term divergence test. Hence, if the algorithm chooses a search direction that is bounded away from 90° relative to the gradient, i.e. there is a δ > 0 with cos θ_k ≥ δ for all k, it follows that lim_{k→∞} ‖∇f(x_k)‖ = 0. However, the Zoutendijk condition doesn't guarantee convergence to a local minimum, only to stationary points. Hence, additional conditions on the search direction are necessary, such as finding a direction of negative curvature whenever possible, to avoid converging to a non-minimizing stationary point.

Exact Search

Steepest Descent Method
Given the intuition that the negative gradient can be an effective search direction, steepest descent follows the idea and establishes a systematic method for minimizing the objective function. Setting p_k = −∇f(x_k) as the direction, steepest descent computes the step length α_k by minimizing a single-variable objective function. More specifically, the steps of the Steepest Descent Method are as follows.

Steepest Descent Algorithm
- Set a starting point x_0.
- Set a convergence criterion ε > 0.
- Set k = 0.
- Set the maximum number of iterations M.
- While k < M:
  - If ‖∇f(x_k)‖ ≤ ε: break.
  - Set p_k = −∇f(x_k) and α_k = argmin_{α>0} f(x_k + α p_k).
  - Set x_{k+1} = x_k + α_k p_k and k = k + 1.
- Return x_k, f(x_k).

One advantage of the steepest descent method is its convergence: the method converges to a local minimum from any starting point.

Theorem: Global Convergence of Steepest Descent. Let the gradient of f be uniformly Lipschitz continuous on R^n. Then, for the iterates with steepest-descent search directions, one of the following situations occurs:
- ∇f(x_k) = 0 for some finite k,
- lim_{k→∞} f(x_k) = −∞, or
- lim_{k→∞} ‖∇f(x_k)‖ = 0.

However, steepest descent has the disadvantage that convergence is often slow and may fail numerically (see Figure 1). The steepest descent method is a special case of gradient descent in which the step length is analytically defined. However, step lengths cannot always be computed analytically; in this case, inexact methods can be used to optimize α_k at each iteration.

Inexact Search
While minimizing an objective function using numerical methods, in each iteration the updated objective is φ(α) = f(x_k + α p_k), which is a function of α after fixing the search direction. The goal is to minimize this objective with respect to α. However, solving for the exact minimum in each iteration can be computationally expensive, making the algorithm time-consuming. Therefore, in practice, it is easier to solve the subproblem numerically and find a reasonable step length instead, one which decreases the objective function; that is, α_k satisfies f(x_k + α_k p_k) < f(x_k). However, convergence to the function's minimum cannot be guaranteed by descent alone, so the Wolfe or Goldstein conditions need to be applied when searching for an acceptable step length.

Wolfe Conditions
These conditions were proposed by Philip Wolfe in 1969. They provide an efficient way of choosing a step length that decreases the objective function sufficiently, and consist of two parts: the Armijo (sufficient decrease) condition and the curvature condition.

Armijo (Sufficient Decrease) Condition
f(x_k + α_k p_k) ≤ f(x_k) + c_1 α_k ∇f(x_k)^T p_k,
where 0 < c_1 < 1 and c_1 is often chosen to be of a small order of magnitude around 10^-4. This condition ensures that the computed step length sufficiently decreases the objective function f. Using this condition alone, however, it cannot be guaranteed that x_k will converge in a reasonable number of iterations, since the Armijo condition is always satisfied for small enough step lengths. Therefore, the second condition below needs to be paired with the sufficient decrease condition to keep α_k from being too short.

Curvature Condition
∇f(x_k + α_k p_k)^T p_k ≥ c_2 ∇f(x_k)^T p_k,
where c_2 ∈ (c_1, 1) is much greater than c_1 and is typically on the order of 0.9.
This condition ensures a sufficient increase of the gradient. The left-hand side of the curvature condition is simply the derivative of φ(α) = f(x_k + α p_k), thus ensuring that α_k lies in the vicinity of a stationary point of φ.

Strong Wolfe Curvature Condition
The (weak) Wolfe conditions can result in an α value that is not close to the minimizer of φ. They can be tightened by using the following condition, called the strong Wolfe curvature condition, which writes the curvature condition in absolute values:
|∇f(x_k + α_k p_k)^T p_k| ≤ c_2 |∇f(x_k)^T p_k|.
The strong Wolfe curvature condition restricts the slope of φ from becoming too positive, hence excluding points far away from the stationary points of φ.

Goldstein Conditions
Another set of conditions for finding an appropriate step length is the Goldstein conditions:
f(x_k) + (1 − c) α_k ∇f(x_k)^T p_k ≤ f(x_k + α_k p_k) ≤ f(x_k) + c α_k ∇f(x_k)^T p_k,
where 0 < c < 1/2. The Goldstein conditions are quite similar to the Wolfe conditions in that the second inequality ensures that the step length decreases the objective function sufficiently, and the first inequality keeps α_k from being too short. In comparison with the Wolfe conditions, one disadvantage of the Goldstein conditions is that the first inequality might exclude all minimizers of φ. However, this is usually not a fatal problem as long as the objective decreases in the direction of convergence. As a short conclusion, the Goldstein and Wolfe conditions have quite similar convergence theories. Compared to the Wolfe conditions, the Goldstein conditions are often used in Newton-type methods but are not well suited for quasi-Newton methods that maintain a positive definite Hessian approximation.

Backtracking Line Search
The backtracking method is often used to find an appropriate step length and terminate the line search. It starts with a relatively large initial step length (e.g., 1 for Newton's method), then iteratively shrinks it by a contraction factor until the Armijo (sufficient decrease) condition is satisfied. The advantage of this approach is that the curvature condition need not be considered, and the step length found at each line search iterate is short enough to satisfy sufficient decrease but large enough to still allow the algorithm to make reasonable progress towards convergence. The backtracking algorithm involves control parameters ρ ∈ (0, 1) and c ∈ (0, 1), and it is roughly as follows:
- Choose an initial step length ᾱ > 0 and set α = ᾱ.
- While f(x_k + α p_k) > f(x_k) + c α ∇f(x_k)^T p_k: set α = ρα.
- Return α_k = α.
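A minimal Python sketch of the backtracking procedure just described (the quadratic test function, its gradient, and the parameter values are illustrative assumptions, not part of the original page):

import numpy as np

def backtracking(f, grad, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo sufficient-decrease condition holds."""
    alpha = alpha0
    slope = grad(x) @ p                   # directional derivative, < 0 for descent
    while f(x + alpha * p) > f(x) + c * alpha * slope:
        alpha *= rho                      # contract the step by factor rho
    return alpha

# Toy usage: one steepest-descent step on f(x) = x1^2 + 10*x2^2.
f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
x0 = np.array([1.0, 1.0])
p = -grad(x0)                             # steepest-descent direction
a = backtracking(f, grad, x0, p)
print(a, f(x0 + a * p))                   # accepted step; f decreases from f(x0)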
Numeric Example
As an example, the line search method can be used to solve an unconstrained optimization problem by steepest descent. [The objective function and the numeric values of this example were lost in extraction.] Starting from a point x_0, each of the five iterations fixes the steepest-descent direction, takes the partial derivative of φ(α) = f(x_k + α p_k) with respect to α, and sets it to zero to find the minimizing step length, yielding the next iterate. At termination, the gradient norm ‖∇f(x_k)‖ is checked; since it is relatively small and close enough to zero, the line search is complete, giving the optimal solution and the optimal objective value.

Applications
A common application of line search methods is minimizing the loss function when training machine learning models. For example, when training a classification model with logistic regression, the gradient descent algorithm (GD), a classic line search method, can be used to minimize the logistic loss and compute the coefficients by iteration until the loss function converges to a local minimum. An alternative to gradient descent in the machine learning domain is stochastic gradient descent (SGD). The difference lies in the computational expense: instead of using the whole training set to compute the descent, SGD simply samples one data point. Using SGD greatly reduces the computational cost on large datasets compared to gradient descent. Line search methods are also used in solving nonlinear least squares problems, in adaptive filtering in process control, in relaxation methods for solving generalized Nash equilibrium problems, in production planning involving non-linear fitness functions, and more.

Conclusion
Line search is a useful strategy for solving unconstrained optimization problems. The success of a line search algorithm depends on careful consideration of the choice of both the direction p_k and the step size α_k. This page first introduced the basic algorithm, then covered the exact and inexact searches: the exact search contains steepest descent, and the inexact search covers the Wolfe and Goldstein conditions, backtracking, and Zoutendijk's theorem. More approaches to solving unconstrained optimization problems can be found in trust-region methods, conjugate gradient methods, Newton's method and the quasi-Newton methods.

Reference
- ↑ 1.0 1.1 1.2 1.3 1.4 1.5 J. Nocedal and S. Wright, Numerical Optimization, Springer Science, 1999, pp. 30-44.
- ↑ "Rosenbrock Function," Cornell University.
- ↑ R. Fletcher and M. Powell, "A Rapidly Convergent Descent Method for Minimization," The Computer Journal, vol. 6, no. 2, pp. 163-168, 1963.
- ↑ 4.0 4.1 R. Hauser, "Line Search Methods for Unconstrained Optimization," Oxford University Computing Laboratory, 2007.
- ↑ P. Wolfe, "Convergence Conditions for Ascent Methods," SIAM Review, vol. 11, no. 2, pp. 226-235, 1969.
- ↑ A. A. Tokuç, "Gradient Descent Equation in Logistic Regression."
- ↑ M. Al-Baali and R. Fletcher, "An Efficient Line Search for Nonlinear Least Squares," Journal of Optimization Theory and Applications, vol. 48, no. 3, pp. 359-377, 1986.
- ↑ P. Lindström and P. Å. Wedin, "A New Linesearch Algorithm for Nonlinear Least Squares Problems," Mathematical Programming, vol. 29, no. 3, pp. 268-296, 1984.
- ↑ C. E. Davila, "Line Search Algorithms for Adaptive Filtering," IEEE Transactions on Signal Processing, vol. 41, no. 7, pp. 2490-2494, 1993.
- ↑ A. von Heusinger and C. Kanzow, "Relaxation Methods for Generalized Nash Equilibrium Problems with Inexact Line Search," Journal of Optimization Theory and Applications, vol. 143, pp. 159-183, 2009.
- ↑ P. Vasant and N. Barsoum, "Hybrid Genetic Algorithms and Line Search Method for Industrial Production Planning with Non-linear Fitness Function," Engineering Applications of Artificial Intelligence, vol. 22, no. 4-5, pp. 767-777, 2009.
https://optimization.cbe.cornell.edu/index.php?title=Line_search_methods
Parents and Children. To create the next generation, the genetic algorithm selects certain individuals in the current population, called parents, and uses them to create individuals in the next generation, called children. Typically, the algorithm is more likely to select parents that have better fitness values. The genetic algorithm uses three main types of rules at each step to create the next generation from the current population. Selection rules select the individuals, called parents, that contribute to the population at the next generation. Crossover rules combine two parents to form children for the next generation. Mutation rules apply random changes to individual parents to form children. The algorithm creates crossover children by combining pairs of parents in the current population. At each coordinate of the child vector, the default crossover function randomly selects an entry, or gene, at the same coordinate from one of the two parents and assigns it to the child. The algorithm creates mutation children by randomly changing the genes of individual parents. By default, the algorithm adds a random vector from a Gaussian distribution to the parent.

The genetic algorithm uses the following conditions to determine when to stop:
Generations — The algorithm stops when the number of generations reaches the value of Generations.
Time limit — The algorithm stops after running for an amount of time in seconds equal to Time limit.
Fitness limit — The algorithm stops when the value of the fitness function for the best point in the current population is less than or equal to Fitness limit.
Stall generations — The algorithm stops when the weighted average change in the fitness function value over Stall generations is less than Function tolerance.
Stall time limit — The algorithm stops if there is no improvement in the objective function during an interval of time in seconds equal to Stall time limit.
Function tolerance — The algorithm runs until the weighted average change in the fitness function value over Stall generations is less than Function tolerance.
Nonlinear constraint tolerance — The Nonlinear constraint tolerance is not used as a stopping criterion; it is used to determine feasibility with respect to nonlinear constraints.

With the normalized path representation, a list (1,2,3,4) means the tour going from city 1 to city 2, 3, 4, and back to city 1. With the traditional path representation, different lists such as (4,1,2,3), (3,4,1,2), and (2,3,4,1) all represent the same tour as (1,2,3,4).

Simple GAs have their own problems. Varying the amount of disruption of good subsolutions by the mutation and crossover operators presents a tradeoff between diversity and performance (exploration and exploitation). A GA using selection alone cannot generate solutions outside the population. Crossover and mutation generate new solutions, but with certain limitations. Crossing nearly identical strings yields offspring similar to the parent strings; consequently, crossover cannot reintroduce diversity. Mutation, on the other hand, can generate the full solution space, but may take an excessively long time to yield a desirable solution. In addition, it is not easy to regulate a GA's convergence. Tuning global parameters such as population size, mutation probability and crossover probability has been the most recommended technique for controlling premature convergence in the GA. A generally effective method for setting parameters has not yet been demonstrated.
Ideal parameters are likely problem-dependent.
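To make the two reproduction rules described above concrete, here is a small Python sketch (not MATLAB's actual implementation) of the two default behaviours: crossover copying each gene from a randomly chosen parent, and mutation adding Gaussian noise to every gene.

import random

def crossover(parent1, parent2):
    """Uniform crossover: each gene is copied from a randomly chosen parent."""
    return [random.choice(pair) for pair in zip(parent1, parent2)]

def mutate(parent, sigma=0.1):
    """Gaussian mutation: add zero-mean noise to every gene of the parent."""
    return [gene + random.gauss(0.0, sigma) for gene in parent]

p1 = [1.0, 2.0, 3.0, 4.0]
p2 = [9.0, 8.0, 7.0, 6.0]
child_x = crossover(p1, p2)   # e.g. [1.0, 8.0, 3.0, 6.0]
child_m = mutate(p1)          # p1 plus small Gaussian perturbations
print(child_x, child_m)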
https://jurnal.unikom.ac.id/_s/data/jurnal/v08-n01/volume-81-artikel-12.pdf/index4.html
Design Optimization deals with finding the maximum or minimum of one or more objective functions by altering a set of design variables, and can be subject to constraints. Design optimization can be used at the different length-scale models and materials, similar to every ICME notion. The factors involved in the optimization process are further explained below:
- Design variables: A design variable is a specification that is controllable by the designer (e.g., thickness, material) and is often bounded by maximum and minimum values. Sometimes these bounds can be treated as constraints.
- Constraints: A constraint is a condition that must be satisfied for the design to be feasible. Examples include physical laws; constraints can also reflect resource limitations, user requirements, or bounds on the validity of the analysis models. Constraints can be used explicitly by the solution algorithm or can be incorporated into the objective using Lagrange multipliers.
- Objectives: An objective is a numerical value or function that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with single objectives. When using these methods, the designer normally weighs the various objectives and sums them to form a single objective. Other methods allow multi-objective optimization, such as the calculation of a Pareto frontier.
- Pareto Frontier: It is relatively simple to determine an optimal solution for single-objective methods (the solution with the lowest objective function). However, for multiple objectives, we must evaluate solutions on a "Pareto frontier." A solution lies on the Pareto frontier when any further change to the parameters results in one or more objectives improving while the other objective(s) suffer as a result. Once a set of solutions has converged to the Pareto frontier, further testing is required in order to determine which candidate force field is optimal for the problems of interest. Be aware that searches with a limited number of parameters might "cram" a lot of important physics into a few parameters.
- Models: The designer must also choose models to relate the constraints and the objectives to the design variables. They may include finite element analysis, reduced-order metamodels, etc.
- Reliability: The probability of a component performing its required functions under stated conditions for a specified period of time.
- Metamodeling: A metamodel (or surrogate model) provides a quick way to approximate a function response when an analytical solution is not available or is computationally expensive. See Metamodeling and Metamodeling-Wikipedia.

Optimization Methods

Zeroth-Order Methods
These methods are referred to as "zeroth-order methods" because they require only evaluation of the function, f(X), in each iterative step. Some examples of zeroth-order methods are the Bracketing Method and the Golden Section Search Method. Some population-based methods could also be categorized as zeroth-order methods.

Bracketing Method
The Bracketing Method is a zeroth-order method which uses progressively smaller intervals to converge to an optimal solution. The interval is set up such that the x value corresponding to the optimal value of f lies within it. The interval is then divided into any number of sub-intervals of any given length. At each dividing point the value of f is calculated, and the optimal sub-interval is chosen as the next interval. This process iterates until the convergence criterion is met.
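As a concrete example of a zeroth-order method, here is a minimal Python sketch of the golden section search mentioned above (the bracket, tolerance, and test function are illustrative assumptions, not part of the original page):

import math

def golden_section(f, a, b, tol=1e-6):
    """Find the minimizer of a unimodal f on [a, b] by golden section search."""
    inv_phi = (math.sqrt(5) - 1) / 2       # ~0.618, the golden-ratio conjugate
    c = b - inv_phi * (b - a)              # interior test points
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                    # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                              # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Toy usage: f(x) = (x - 2)^2 has its minimum at x = 2.
print(golden_section(lambda x: (x - 2)**2, 0.0, 5.0))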
First-Order Methods
In addition to evaluating f(X), first-order methods require the calculation of the gradient vector ∇f(X) in each iterative step. Some examples of first-order methods are the Steepest Descent (Cauchy) Method and the Conjugate Gradient Method.

Steepest Descent (Cauchy) Method
The Steepest Descent method uses a search direction of some magnitude in the negative direction of the gradient. The negative of the gradient gives the direction of maximum decrease, hence "steepest descent." The magnitude of the constant for the search direction can be determined through zeroth-order methods or by direct calculation. The direct calculation is done by setting the derivative equal to zero and solving for the constant. This method is guaranteed to converge to a local minimum, but convergence may be slow, as previous iterations are not considered in determining the search direction of subsequent iterations. The rate of convergence can be estimated using the condition number of the Hessian matrix: if the condition number of the Hessian is large, convergence will be slow.

Conjugate Gradient Method
The Conjugate Gradient Method is similar to the Steepest Descent Method except that it takes previous iterations into consideration when choosing search directions. The conjugate direction is determined by adding the steepest-descent direction of the previous iteration, scaled by some value, to the steepest-descent direction of the current iteration. The constant used to scale the search direction of the previous iteration can be determined using either the Fletcher-Reeves formula or the Polak-Ribiere formula.

Second-Order Methods
Second-order methods take advantage of the Hessian matrix, the second derivative of the function, to improve the search direction and the rate of convergence. Some examples of second-order methods are Newton's Method, the Davidon-Fletcher-Powell (DFP) method, and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method.

Population-Based Methods
Population-based methods generate a population of points throughout the design space. Some methods then specify a range of the best points and generate a new population, continuing until convergence is reached (Monte Carlo Method). Others generate a population and then "evolve" the points; the weakest of the new population is eliminated and the remainder evolved again until convergence is reached (Genetic Algorithm).

Monte-Carlo Method
See Monte-Carlo.

Genetic Algorithm
Genetic algorithms are based on the principles of natural selection and natural genetics, meaning reproduction, crossover, and mutation are involved in the search procedure. The design variables are represented as strings of binary numbers which mirror chromosomes in genetics. These strings allow the different binary numbers, or bits, to be adjusted during the reproduction, mutation, and crossover stages. A population of points is used, and the number of initial points is typically two to four times the number of design variables. These points are evaluated to provide a fitness value, and above-average points are selected and added to a new population of points. Points in this new population undergo the second stage in the algorithm, known as crossover. In this stage, information from two "parent" points, or strings, is combined to produce a new "child" point. The mutation operator is optional.
It selects points based on a user-defined probability and alters a bit in the point's binary string, thereby maintaining diversity in the population. The process is iterated until convergence is reached. GAs differ from other optimization techniques in that they work with a coding of the parameter set and not the parameters themselves, search a population of points instead of a single point, and use objective-function knowledge instead of derivatives or other auxiliary knowledge.

Tutorials

Structural Scale Optimization
One of the most prevalent uses of optimization occurs in the design of structures. Common applications in this area consist of the reduction of weight subject to strength requirements, the maximization of energy absorption in crashworthiness scenarios, and topology optimization. In a simple case, analytical solutions for objective functions are solved while altering design variables subject to constraints. This optimization can be performed in software such as MATLAB. In a more complex example, objective functions can be solved by finite element software such as Abaqus, LS-DYNA, etc. Because the optimization algorithm sometimes requires hundreds or even thousands of iterations, optimization can become infeasible when directly coupled to a computationally expensive finite element simulation. A suitable alternative is the use of metamodels. Metamodels offer a fast analytical approximation of a complex, expensive objective function, approximating its responses over a predefined design space. In order to create a metamodel, objective function values must be calculated using a full-scale simulation at "training points" sampled throughout the design space. These training points can be found using a Design of Experiments (DoE). Some tutorials for DoE can be found at the following links: 1, 2, 3, and 4. Once the DoE points and their objective function values are found, the data is used to "train" the metamodel. After training, the metamodel can be used directly in place of the full-scale simulations to calculate objective functions much faster. Metamodels can also be used in Monte Carlo simulations to quantify uncertainty when many calculations are necessary.

A comparative study of metamodeling methods for multi objective crashworthiness optimization. Authors: Howie Fang, Masoud Rais-Rohani ([email protected]), Z. Liu, and Mark Horstemeyer. http://www.sciencedirect.com/science/article/pii/S0045794905001355

Analytical Model for Axial Crushing of Multi-cell Multi-corner Tubes (Multi-CRUSH). Contributors: Ali Najafi and Masoud Rais-Rohani

Topology Optimization of Continuum Structures Using Element Exchange Method. Authors: Mohammad Rouhi ([email protected]) and Masoud Rais-Rohani ([email protected]). http://pdf.aiaa.org/preview/CDReadyMSDM08_1875/PV2008_1707.pdf

Element Exchange Method for Topology Optimization. Authors: Mohammad Rouhi ([email protected]), Masoud Rais-Rohani ([email protected]) and Thomas Neil Williams ([email protected]). http://springerlink.com/index/m30m6x1x62k252lr.pdf

Macroscale
Optimization algorithms can be used for model calibration. For example, the DMGfit (metals) and TP (polymers) routines employ optimization algorithms to automatically fit the plasticity-damage model, and the MSFfit routine does the same for the fatigue model.
The constants of interest are selected and a Monte Carlo optimization routine is performed to generate candidate constants. A single-element simulation then produces the model stress-strain curve. The curve is compared to the input data for fit comparison, and this process is repeated until a satisfactory fit is achieved or a maximum number of iterations is reached. The resulting optimized constants are then output.

Nanoscale
The Embedded Atom Method (EAM) and Modified Embedded Atom Method (MEAM) potentials can be optimized based upon Electronics Scale calculation results and experimental data. See MEAM Potential Calibration.

Multilevel Design Optimization
This is an emerging topic at CAVS. The pages describing the progress are currently available only to members of the research team.

References
- ↑ 1.0 1.1 1.2 1.3 1.4 Rais-Rohani, Masoud, "Handout #3: Mathematical Programming Methods for Unconstrained Optimization," Design Optimization Class, Mississippi State University, Spring 2012.
- ↑ Rao, S.S., "Genetic Algorithms," Engineering Optimization: Theory and Practice, John Wiley and Sons, Inc., 2009, pp. 694-702.
- ↑ Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison Wesley Longman, 1989.
https://icme.hpc.msstate.edu/mediawiki/index.php/SRCLID:Simulation-Based_Design_Optimization.html
I'm trying to use the optimizer of Karamba for the beams created for these structures in Voronoi and Voronax, but nothing works. Can someone please help me? It's for my thesis: I wanted to do a comparison of these two structures using Karamba and Galapagos, but it tells me to use beams of 1 m height, which is too much! How can I tell the program to impose a max displacement of 20 cm, since my structure is 50 x 25 and 15 tall? Please, I need some help here.

Hi Alexia,
Well, first of all, what is your "fitness function" goal, i.e. what are you trying to minimize or maximize? Is it just minimizing the displacement of your structure? Why is Galapagos only controlling the cross section's dimensions and not also the parameters of your shell? My experience with Galapagos is that the most important part is coming up with a proper fitness function, which is basically a mathematical formula in which you inform the solver what you want it to search for and the optimal target for the fittest solution. Coming up with a good fitness function is often a challenge, and it needs to be mathematically correct in order for you to have interesting and relatively precise results; otherwise it will not do what you expect it to do. Fitness functions become more critical as your model gets more complex. I suggest that you start digging into genetic algorithms so you can also re-evaluate your search if you need to, and understand the idea in more depth. Also, Octopus is another good evolutionary solver which focuses on multi-objective search, and you can set the min and max values of your fitness function inside the component. In this case it can be better to use Octopus over Galapagos if you have more than one fitness function, as a workaround to coming up with a proper single mathematical formula. I hope this helped a little; good luck with your work.
Nicholas

My intention is not to re-dimension the shell but only to find the most appropriate beam dimension with the minimum displacement; not of the order of millimetres as it does now, but, let's say, around 20 cm of displacement, more or less.

Hi Alexia,
One thing that you could do would be to use the Cross Section Optimizer algorithm in Karamba. Input a list of cross-sections from which the algorithm can choose in order to perform the structural optimization according to Eurocode 3, and define the expected level of utilization you require. Furthermore, you can input a maximum deformation value (in meters) too, so that the algorithm takes it into account. Find attached a reworked file. Hope this helps,
Rafael

Yes, my intention is to use both the optimizer of Karamba and Galapagos, to see afterwards in SAP2000 which is the best solution. But I can't make it work :( since this is the simplest example of my thesis and I can't make it work... I'm so stressed out; since I love Karamba and the parametric world, I don't want to surrender! Thank you so much for your answer. Can I ask you a couple of questions? In the cross section selector, at the Name|Id field, you put 0, why? Also, at the optimizer, for max util you put 0.6: what is that supposed to be? And the last question: why at the supports did you put only Tx, Ty, Tz and not all of them? My prof told me to impose all of them, including the rotations, at the base of the structure. Thank you so much for the answer and for the file. You saved my life! :P

Hi Alexia,
It's great to know that the file worked!
Regarding your questions: at the Name|Id (which means Name or Index) input, I put a zero so that the component knows that it should take the first cross section and assign it to my elements as my first version for design. At the Optimizer, the 0.60 means "try to make my elements work at, at most, 60% (out of 100%) level of utilization". Well, in practice it's more realistic to impose "pinned" supports (only displacements are restrained), because otherwise your foundations would need to be designed to resist bending moments, and there is no such thing as a perfectly "rigid" support (rotations and displacements fixed). However, you can check both options and evaluate the outcomes by yourself. That's part of the fun in Karamba!
Best,
Rafael

And the last question: I've seen that you put a mesh load of 4 kN going up. How come? I did have a look, but maybe it's me that can't understand the difference between mesh and model. If I apply a load on the mesh, is it not as if I apply it to the model? :) My question was: why does the load at the mesh go upwards and not downwards? Not z+ but z-, so it can be, let's say, a snow load and not a pressure load that goes upwards. I don't know if I'm explaining myself...
https://www.grasshopper3d.com/group/karamba3d/forum/topics/optimizer-beam-karamba-galapagos?page=1&commentId=2985220%3AComment%3A1644219&x=1
- 1 year of working experience providing customer service over the phone.
- Basic proficiency in Microsoft Word and Outlook.

Qualifications
- Answers and screens calls in a professional, courteous, and timely manner.
- Connects calls for patients, doctors, and employees requiring assistance.
- Announces in-house calls for voice paging, beeper pages, emergency calls, other codes, and other information requests.
- Announces various codes in accordance with emergency plans.
- Receives Code Blue and Rapid Response inpatient phone calls, follows through, and completes the Code Blue and Rapid Response processes.
- Answers and transfers video calls for the TeleHealth program.
- Provides directions to NCHS locations.
- Records emergency and in-house messages for physicians when offices are closed.
- Reports equipment that is not in good working condition in a timely manner.
- Updates the NCHS on-call schedule.

Knowledge/Skills/Abilities:
- Able to communicate effectively in English, both verbally and in writing, in a clear, concise, courteous, and prompt manner with all internal and external customers.
- Fluency in Spanish strongly preferred.
- Able to maintain confidentiality of sensitive information.
- Able to follow complex and detailed written and/or verbal instructions to solve problems.
- Able to establish necessary professional relationships and interact effectively with internal and external customers.
- Able to adapt and react calmly under stressful conditions in a pleasant manner.
- Able to learn work-related software applications and use them effectively.
- Able to relate cooperatively and constructively with customers and co-workers. (EOE DFW) 8/13/19

Job: Clerical/Administrative
Primary Location: Florida-Miami-Nicklaus Children's Hospital - Main Hospital Campus
Department: COMM-SWITCH BOARD OP-1000-954903
Job Status: Part Time with Benefits

Joining a new organization can be daunting or overwhelming. But at Nicklaus, your colleagues make you feel welcomed. They taught me to celebrate our accomplishments and band together during challenging times. This is not easy to find in healthcare these days, so I'm grateful to have found an amazing place to work with people and leadership who have my well-being in mind.
Lova Renee Brunson, Manager, Accreditation & Regulatory

Collaboration leads to success. Alone we can do so little; together we can do so much. Working together, we find solutions and methods we would never find alone, and at Nicklaus Children's, we have fun doing it. Responsibility drives us. We all take responsibility here, for the children, their families, our work and each other. We meet our responsibilities head on and motivate each other to succeed. Empower yourself, help others. Nicklaus Children's encourages team members to believe in their ability to effect positive change in the world through everything they do with us. Advocate for the right way. Advocate for children. Advocate for families. Advocate for yourself. But most of all, advocate for getting the job done right, and you will find nothing but success and support in your career here. Transformation is growth. At Nicklaus Children's, change is not to be feared. When you work here, you are always working with the most advanced tools and procedures available. Empathy is everything. We expect our team members to have empathy for the patients and families they treat, and in turn, we have empathy for them. We take care of everyone in the Nicklaus Children's family with competitive benefits and our supportive culture.
https://careers.nicklaushealth.org/job/miami/telephone-system-operator-part-time-night-shift/35874/11051338368
Noel Recruitment are currently recruiting Auditors for our client to work on a full-time four-month contract. Your role will be to assist the Senior Auditor in effectively delivering audits and/or reporting topics in compliance with requirements (standards and budgets), including taking responsibility for end-to-end delivery of some audits/projects: - Planning, executing and reporting financial and other audit/reporting work to client standards and in a timely manner. - Developing (or assisting in developing) audit/examination approaches as required. - Where appropriate, undertaking supervision, review and/or other activities as required. - Maintaining and updating professional knowledge by identifying own training and development needs, updating skills and attending relevant training courses as agreed with the manager. - Responsibility for client relationships (level of responsibility appropriate to client category). - Providing written material which is clear and concise and addresses the relevant issues. - Exercising appropriate judgment in providing conclusions and meaningful recommendations. Essential Experience: Qualifications/Experience - Membership of an accountancy body recognised in Ireland. - A minimum of one year's post-qualification experience. Communication: - Good oral and written communication skills are essential; candidates must be able to communicate effectively and in a professional manner when dealing with clients. - Written communication skills are required for drafting reports on the audit and management letters. Desirable Requirements: - Technical Ability – A good understanding of technical auditing and accounting issues as well as auditing experience within the past five years. - Public Sector Accounting/Auditing – Experience of auditing and/or accounting in the public sector. - Specific Competencies – Evidence of good analytical and decision-making skills and an ability to deliver results in auditing/accounting environments.
https://noelgroup.ie/job/auditor-fixed-term-contract/
Part Time Switchboard Operator To fulfil a key organisational role by providing an efficient and friendly response to our clients and prospective customers when they call our main switchboard. To effectively communicate and direct calls to the relevant departments in an efficient and professional manner. - Closing Date: TBC - Job Location: Nelson, Head Office - Contract Type: Part Time - Salary: £9,266.40 - Hours: 20 hours per week, 13:00 – 17:00 Typical Responsibilities - To answer the main switchboard in a friendly, professional and timely manner - Greet customers and visitors in a professional manner, ensuring they are aware of signing-in and company procedures whilst on site. - Tannoy announcements - Dealing with incoming emails to distribute as appropriate. Person Specification Candidates shall be able to demonstrate the necessary qualifications, experience, skills, and traits to meet the requirements set below. Requirements for the role shall be evidenced on the application form and in the interview process. Please use the following as guidance when completing the further information section of your application form.
https://www.protec.co.uk/careers/job-vacancies/job/?jobID=24
• Respond to and process all customer calls in a prompt and professional manner. • Determine the probable nature of each call by listening carefully, researching customer accounts and asking questions for clarification. • Provide clear information to callers as needed. • Communicate clearly and respectfully with callers at all times, restating information when necessary to ensure the caller’s understanding. • Inform callers of any fees, policies or procedures that may affect the outcome of their call. • Take and process customer payments as necessary. • Input clear, complete and concise documentation in call logs. • Follow up on pending calls in a timely manner and send fax-required information to district offices as needed. • Contact district office staff with pertinent information about customer calls as outlined. • Adhere to service level and telephone availability standards. • Work with the rest of the team to identify issues that impede quality customer care. • Adhere to daily work schedule. • Follow all Call Center policies and procedures. • Attend & participate actively in all team meetings and training sessions. • Participate in any outbound calling campaigns as needed. • Good oral and written communication skills.
https://careers.ugicorp.com/AmeriGas/job/Rocklin-Customer-Care-Agent-CA-95677/536767000/
The Regional Medical Liaison will serve as a key field-based scientific resource for healthcare providers, industry partners and internal colleagues. This role works collaboratively with nearly all functional groups at HeartFlow, including Clinical, Professional Education, Medical Communications, Market Access, Technology and Commercial. Internally, this role is the clinical subject matter expert regarding coronary artery disease diagnosis and management, and HeartFlow products, ensuring product clinical messaging, clinical data development, and physician education activities are aligned to HeartFlow’s business priorities. Specifically, some key internal activities of the Regional Medical Liaison include sharing in-field market insights to inform education and clinical strategy, supporting the development of educational content and training, and facilitating clinical evidence dissemination cross-functionally. Externally, the Regional Medical Liaison represents HeartFlow as a key scientific resource for HCPs and other stakeholders, identifies and establishes professional relationships with KOLs, and provides scientific guidance and coaching to educational faculty. The ideal candidate is proactive, detail-oriented and exhibits excellent facilitation and communication skills (both oral and written). Candidates must be comfortable with travel and supporting a large geography. Job Responsibilities: ● Establish and foster professional relationships with national and regional Key Opinion Leaders (KOLs) and internal business partners ● Support the identification and training of educational faculty ● Serve as the key scientific representative of HeartFlow for HCPs, providing deep and advanced disease state and therapy information ● Support clinical education in the field, including data and case reviews ● Gather feedback and insights from KOLs and physician advisors to better inform HeartFlow’s overall strategic direction ● In collaboration with the Clinical team at HeartFlow, facilitate initial discussions, intake, and evaluation of physician-initiated research proposals, including ensuring timely communication between requestor and company ● Understand and effectively communicate current scientific knowledge, maintaining expertise by keeping up to date with publications and attending national and local congresses ● Monitor major meetings for abstract deadlines and work with investigators of HeartFlow-sponsored research to drive podium presence in support of corporate communication strategies and plans ● Support field and internal training needs as a clinical expert Skills Needed: ● Intellectual curiosity and intelligence about the field of science/medicine for which they are responsible ● Demonstrated ability to comprehend, synthesize and communicate large amounts of scientific content in a clear, concise fashion ● Significant experience and success in: self-managing priorities and multitasking projects; building relationships with KOLs; educating HCPs and coaching educational faculty; applying critical thinking to scientific and clinical research challenges ● Establishing trust and credibility within the HCP community, including primary care providers, advanced practice providers, and subspecialist cardiologists, radiologists, and cardiac surgeons ● Excellent teamwork and interpersonal skills Educational Requirements & Work Experience: ● Advanced clinical or scientific degree (PharmD, PhD, MD/DO, MS, PA, ARNP) preferred ● Minimum of 5 years of cardiovascular experience ● Experience in pharma, biotech and/or medical devices,
ideally as a medical science liaison (MSL) ● Significant direct experience communicating complex scientific information in a manner that meets the needs of external healthcare practitioners and internally at all levels of the organization ● Advanced presentation and computer skills with expertise in literature identification and evaluation. Physical Demands of the Job: Significant travel (~50% of time) both by automobile and air. - Job Type: Full Time - Salary: N/A - Experience: N/A - Posted: 63 days ago
https://paralleldesk.com/job-details/4kel14yjxd2-regional-medical-liaison-south-or-southeast
Hours - 35 hours per week, 8 - 6 Monday to Friday and 9 - 1 Saturday. Due to the nature of this position, hours may vary in line with business and client needs. We are looking for a Customer Service Advisor for a global insurance company based in Croydon. You must have excellent customer service skills and a good telephone manner; the ability to work well under pressure is essential, along with a flair for sales. The Role - Maximise Finance and Insurance policy sales by utilising your sales skills when handling telephone enquiries and updating systems accordingly. Provide a professional, efficient and proactive sales and administration service, reflecting our brand values and those of the client you are representing. Ensure all processes and procedures comply with FCA requirements. Main Duties - Call Handling o Handle all calls within performance targets and professionally, following approved call scripts and sales guidance materials o Handle customer objections in a positive manner and actively attempt to overcome these objections in line with sales guidance materials o Actively attempt to build rapport with all callers o Actively look for opportunities to upgrade levels of cover through effective listening and identification of customer needs o Carry out outbound sales activity, ensuring all regulatory and customer service standards and requirements are adhered to where appropriate o Display appropriate levels of patience and empathy as and when required o Handle complaints in a positive way, in line with company procedures o Communicate with customers and third parties in a clear, concise and professional way o Capture all requested data and provide information to the caller o Ensure product knowledge is kept continuously up to date through appropriate research and training Correspondence/administration o Respond to incoming correspondence in accordance with agreed procedures o Carry out administrative tasks as required within the department Data capture & input o Ensure that all relevant data is entered onto the system in a timely way, with a focus on accuracy FCA Compliance o Operate within and adhere to the constraints of the current FCA regulations o Ensure that the sales process carried out follows the procedure-based sales approach Accurate data capture and reporting o Capture all necessary information precisely and accurately relevant to product sales o Record all daily activity for reporting purposes Other o Undertake other ad hoc duties as reasonably requested by your manager Qualifications:
https://www.brookstreet.co.uk/job/customer-service-adviser-10/
R3’s Professional Services team works to bring Corda specialist expertise to our customers to make their adoption successful. We engage directly with our customers to design, build, deploy and advise them on their Corda journey to ensure long-term capabilities are sustained. Through a mix of business consulting, technical solutions and implementation, we help customers achieve their goals in the most effective way for their business. Our customer-centric and innovative approach to providing services and solutions in the industry allows us to strategically assess our customers’ needs and ensure they are set up for success from the beginning of their journey. In addition to working directly with our customers, we strive to ensure that we are collaborating internally with our Sales, Engineering and Product organisations to provide better tooling, services and products based on our customers’ evolving needs. The Professional Services Operations function sits within the larger Professional Services team and is critical to the smooth running of the team on a global scale. The Operations function is responsible for managing the planning and forecasting of the team, as well as providing the appropriate tooling, processes and forums to enable the team to thrive. Reporting to the Professional Services Operations Manager, in this role you will serve two purposes: supporting the global Professional Services organisation from a tooling and process perspective, and directly supporting the Head of Professional Services and Support in day-to-day administrative and planning duties. Responsibilities – Professional Services Operations Duties (60%) - Ensure processes are adopted by the Professional Services and Support teams. - Work with Professional Services Leads to proactively identify and resolve issues, and track the impact of the team’s work where appropriate. - Ownership and maintenance of the shared team internal knowledge space. - Running and maintenance of team systems and tools, such as time-tracking and project management tooling. - Assist in the organisation of team meetings, conference calls and events. - Responsible for administering overall team reporting, analysis and insights, and KPI tracking with the appropriate tooling. - Oversee and support the timely completion of project onboarding and offboarding in the Professional Services portfolio. - Develop close relationships with other key internal functional areas such as Sales Operations, Support Engineering and Product Management. Professional Services Administrative and Planning Duties (40%) - Supporting the Head of Professional Services and Support, you will manage an extremely busy and constantly changing diary that includes internal and client-facing meetings across various time zones to ensure days run as smoothly as possible. - You will be required to contribute to ad hoc / special projects when necessary and to proactively assist the team in carrying out their objectives. - Supporting meeting preparation; agenda and pre-reading distribution, presentation documents and minute taking. Other Skills and Responsibilities: - Able to communicate in a clear and concise manner and share information/new ideas with other team members and across the organisation. - Ability to dive deep into processes and drive results. - Ability to take on any task thrown your way, and willingness to be continuously learning. - Self-starter and team player with the ability to meet tight deadlines and balance multiple, competing priorities. - Comfortable collaborating across multiple business units globally. - Strong attention to detail and analytical skills. - Be comfortable with some ambiguity, and be willing to work beyond the strict wording of the job description in order to achieve the firm’s objectives. Education and Experience: - You have prior experience or exposure to an operations/data function. - You’ve enjoyed or are excited by working in a fast-growth team and company. - You’re excited by working with and supporting a globally dispersed team. - You have an interest in and understanding of DLT/blockchain technology. - You are enthusiastic, self-motivated and comfortable working in a dynamic environment. Qualifications: - Bachelor's degree or equivalent practical experience. - Experience with Atlassian tools would be considered a plus.
https://blockchain4talent.com/job/operations-and-planning-coordinator-01-22/
3 tips to improve your writing Not everyone is a naturally gifted writer. But it doesn't take a degree in English to improve your writing and enhance your communication. Whether you are writing to investors or your in-laws, the following tips will help you communicate clearly and sound like a pro. 1. Use active voice Quick grammar lesson: Active voice is a sentence where the subject performs the action stated by the verb. Passive voice (the other option) is when the subject is acted upon by the verb. If it sounds confusing, that’s because it is. Let’s look at an example: Passive voice: The exam was failed by over one-third of the class. Active voice: Over one-third of the class failed the exam. In active voice, the subject comes before the verb. This helps ensure that sentences are clear and concise because the reader knows who is doing something before they know what they are doing. As an added bonus, active voice requires fewer words to describe what is happening than passive voice does. 2. Avoid long, complicated sentences There is a time and a place for compound and complex-compound sentences. However, the longer your sentence becomes, the harder it will be for your reader to understand. The sentence below, from Faulkner’s “That Evening Sun,” is a prime example: The streets are paved now, and the telephone and electric companies are cutting down more and more of the shade trees–the water oaks, the maples and locusts and elms–to make room for iron poles bearing clusters of bloated and ghostly and bloodless grapes, and we have a city laundry which makes the rounds on Monday morning, gathering the bundles of clothes into bright-colored, specially-made motor cars: the soiled wearing of a whole week now flees apparitionlike behind alert and irritable electric horns, with a long diminishing noise of rubber and asphalt like tearing silk, and even the Negro women who still take in white people’s washing after the old custom, fetch and deliver it in automobiles. At what point in the above sentence did you have to start over and read it again? For me, it was the word clusters – 38 words into a 118-word sentence. Keeping sentences short and to the point will help increase understanding and avoid confusion. As a rule of thumb, a sentence should be less than four lines of text on a typical word processor (around 40 words). Remember: Simple sentences are not unintelligent. Confusing sentences are. 3. Don’t use two words where one would do There is a tendency among amateur writers to add unnecessary words to a sentence to sound more intelligent. However, it tends to have the opposite effect. A prime example of this is using multiple words where one will do. For example, writing “in a timely manner” when “quickly” will work. Both phrases mean the same thing, but one communicates the point more concisely. Choosing the single precise word is also a sign of a large vocabulary, which is what will really make you look intelligent. The best way to cut words from a sentence is to write it how you normally would, then reread it and identify areas that can be condensed. Getting your thoughts on paper is often the hardest part, but a few minutes of editing can make a world of difference. Still struggling to communicate your message clearly? Hiring a professional copywriter will ensure that you deliver clear, concise copy that helps sway decision makers and achieve your goals.
https://www.kbcommunications.co/posts/3-tips-to-improve-your-writing
The Ministry of Tourism, Culture and Sport is committed to providing excellent, high-calibre customer service, and to ensuring that our services are timely, responsive, accessible and accountable. Our commitment to excellent customer service includes the following policies and legislated standards: Staff will work together to serve the ministry's clients by providing accurate and clear information in a timely manner regarding questions, processing applications, approvals and payments. Transparent: Program guidelines, decision making and payments will be clear, open, and available in both official languages, as well as in alternate formats upon request. Knowledgeable: Clients can expect the ministry's funding programs to be clearly communicated, in both official languages (as appropriate), and reviewed by knowledgeable staff. Staff will provide advice that: Collaboration: Clients can expect that staff will work together, within the ministry, and across government to ensure that staff provide the best application review possible. Staff will: Quality: Clients can expect staff advice and review of their application to be presented in a high-quality, professional manner. Staff will ensure that: Timeliness: Clients can expect that staff will process applications in a timely manner in accordance with the ministry’s business service standards. Working with our clients: Staff will work with clients to ensure staff clearly understand their requests. Staff will provide a timely, high-quality response that meets clients’ needs. In order to continue to provide excellent service to clients, staff will work with the client to: Client feedback: We welcome and appreciate client feedback about how we provide our services. Staff will encourage and provide opportunities for formal and informal feedback from our clients to ensure that the ministry continues to meet their needs and expectations. Client feedback will be the primary tool to measure all five of the service standards. Find out how to contact us.
http://www.mtc.gov.on.ca/en/about/commitment.shtml
If you are a contract employee, you have probably wondered exactly how to structure your resume in a way that is effective but doesn’t result in a book-sized document. It can be hard to strike a balance between keeping your resume concise and providing enough information to help hiring managers understand the breadth and depth of your experience. Use these strategies to list contract work on your resume. Create Two Resumes Contract professionals often have two resumes. The first is a concise resume that they submit to online applications, and the second is the resume they present when they are asked for an interview. Why? Applicant Tracking Systems (ATS) scan page one of a resume and score it based on keyword matches. Contract employees need to make page one of this resume as concise yet as information-packed as possible. This can be done by adding a “Relevant Skills” section to the top of page one that leverages keywords from the job description, then organizing job experience from most relevant to least relevant. Pare down each job and contract assignment to a short set of bullet points that cover your most critical achievements. The resume you submit online should also include a link to your LinkedIn profile, so hiring managers can click through to get a broader picture of your experience. When you are called for an interview, send the hiring manager a longer, more detailed resume to review. Organize Experience According to Relevancy For a contractor, a chronological resume may not be the most effective way to communicate your fitness for the role. Instead, focus on the experience that is most relevant to the job you’re applying for. In your longer resume, also include a “Relevant Skills” section at the top, focusing on the skills that align with those listed in the job description. Then, create a “Relevant Experience” section and organize your past experiences by that criterion, rather than by date. Include a very brief summary of the job or project and a concise bulleted list of your achievements on the project. Be Clear About Contract Experience Listing just the start and end dates of contract roles could make you look like a terminal job-hopper rather than a professional contractor. Always include the word “contract” in the job title and list the length of that contract. It also pays to list both the recruiting firm you worked for and the employer. For example: Developer, 6-Month Contract, XYZ Recruiters | ABC Inc., Tulsa, OK, January 2 – June 30, 2018 This makes it clear that you worked a six-month contract through XYZ Recruiters at ABC Inc. and didn’t just decide to leave a full-time job. It is useful to address your contract experience in your cover letter as well. Are You Looking For Contract IT Jobs? If you are an IT contractor looking for great new projects, the IT recruiters of OakTree Staffing are ready to help. Contact us today to learn more.
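To make the keyword-matching idea concrete, here is a toy sketch of the kind of scoring an ATS might perform. Real ATS products are proprietary and far more sophisticated (they parse sections, weight titles, and use synonym lists), so every function name and rule below is a hypothetical illustration, not a description of any actual system.

```python
import re

def keyword_score(resume_page_one: str, job_description: str) -> float:
    """Toy ATS-style score: the fraction of job-description keywords
    that also appear on page one of the resume."""
    def tokens(text: str) -> set:
        # Lowercase the text and keep alphanumeric "words" of 3+ characters;
        # + and # are included so terms like "c++" and "c#" survive.
        return set(re.findall(r"[a-z0-9+#]{3,}", text.lower()))

    jd_keywords = tokens(job_description)
    if not jd_keywords:
        return 0.0
    return len(jd_keywords & tokens(resume_page_one)) / len(jd_keywords)

# A "Relevant Skills" line that reuses the job description's own vocabulary
# scores higher than a generic summary -- the article's point.
jd = "Seeking a developer with Python, SQL and AWS experience"
print(keyword_score("Relevant Skills: Python, SQL, AWS, Docker", jd))   # higher
print(keyword_score("Hard-working team player with many talents", jd))  # lower
```

Even under this naive scoring, front-loading page one with the job description's own vocabulary raises the match rate, which is why the article recommends a keyword-driven "Relevant Skills" section at the top.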
https://www.oaktreestaffing.com/2019/08/23/3-things-you-should-look-for-in-a-potential-employer/
Lesson 6 taught you the various roles and expertise of several different types of professionals that may be involved in the life care planning process. When a life care planner needs to collaborate with other professionals in order to obtain recommendations, the life care planner must be able to effectively communicate what is needed in order to establish a foundation for the life care plan and ask questions in a clear and concise manner that can be answered by the appropriate professional. When the life care planner consults with the other members of the treatment team, the life care planner is looking for information based on the current status and future needs of the client. Questions that are too broad, general, or speculative will not provide the life care planner with the information necessary to develop a foundation for the recommendation. Remember, questions should be specific to each professional’s specialty and worded in such a way as to get the information needed. Avoid asking the professional questions that they are unable to answer. For example, it is inappropriate to ask a speech language therapist to project the physical therapy needed by the client. More appropriate questions would focus on the frequency and duration of speech therapy or suggested assistive communication devices. A comprehensive review of medical records should be completed before consultation with members of the treatment team begins. This is important so that you are up to date with the case and able to ask appropriate questions. Your ability to communicate in a manner specific to the patient will show that you are a knowledgeable professional and have an expertise in rehabilitation planning. Some medical professionals and therapists are disinclined to predict future care needs because they do not want to commit to a treatment plan that may later prove erroneous. When asking questions, be sure to use careful wording. For example: Based on Sally’s current status and to prevent complications, how often, ideally, should she be followed in the future?………. times per year for routine care at an estimated office visit cost of $………. per visit (private pay rate please, not insurance or Medicaid allowances). In light of Sally’s current disability, do you anticipate the need for any surgeries to correct her contractures?………If yes, please explain,…………… Although these questions are specific to Sally’s case, the questions are posed in such a way that the replies are based on what is currently known about the client and what can reasonably be expected in the future. When you begin to consult with treatment team members, you may elicit a quicker response if you fax a signed HIPAA release form, a brief questionnaire, and a short letter that defines the purpose of what you are doing. Keep the number of questions to a minimum and use an easy format for responses. Checklists, fill-in-the-blank questions, and yes/no formats will be easier for the consultant to complete and will likely generate a faster response. Include a signature and date line at the conclusion of your form, and clearly identify who you are and your fax number or address in order to get a faster response.
https://iretprograms.com/courses/all-courses/module-1/module-1-lesson-6-self-assessment/
TennCare is Tennessee’s managed care Medicaid program that provides health insurance coverage to certain groups of low-income individuals such as pregnant women, children, caretaker relatives of young children, older adults, and adults with physical disabilities. TennCare provides coverage for approximately 1.5 million Tennesseans and operates with an annual budget of approximately $12.9 billion. It is run by the Division of TennCare with oversight and some funding from the Centers for Medicare and Medicaid Services (CMS). WHY WORK AT TENNCARE TennCare’s mission is to improve the lives of Tennesseans by providing high-quality cost-effective care. To fulfill that purpose, we equip each employee for active participation and empower teams to communicate and work collaboratively to improve organizational processes in order to make a difference in the lives of our members. Because of the positive impact TennCare has on the lives of the most vulnerable Tennesseans, TennCare employees report that their work provides them with a sense of meaning, purpose, and accomplishment. TennCare leadership understands that employees are our most valuable resource and ensures professional and leadership development are a priority for the agency. Job Responsibilities • Supervise a team of Managed Care Specialists who will utilize the TennCare Eligibility Determination System (TEDS) to process renewals, applications, and case changes to determine eligibility for all TennCare programs • Apply advanced knowledge of TennCare’s business processes and policies to daily work operations • Provide support and guidance to staff on matters relating to Medicaid rules, regulations, policies and procedures • Conduct weekly team meetings to provide timely information regarding processes, initiatives, directives, and training opportunities, as well as discuss staff ideas for continuous work process improvement • Provide clear and concise verbal and written communication and guidance to staff in a positive manner • Ensure all tasks are processed within the designated timeframe by monitoring, assessing, and addressing the team’s overall performance • Navigate, process, and troubleshoot renewals, applications, and case changes with TennCare’s Tennessee Eligibility Determination System (TEDS) • Monitor staff production and quality by reviewing team performance reports and performing five case reads per Managed Care Specialist • Provide regular coaching and guidance to ensure staff are meeting goals set forth by the agency, complying with TennCare policy, and following established business processes • Identify areas of opportunity and provide effective problem-solving techniques to ensure that the areas of opportunity are resolved in a positive manner • Utilize Edison to monitor leave balances, approve/deny leave requests, and approve payable time • Conduct monthly conferences on time and utilize the required written format • Utilize the SMART formula to evaluate/rate employee performance throughout the performance review cycle • Build team morale to ensure staff have a positive and inclusive work environment, which leads to higher job satisfaction and results in greater team efficiency and effectiveness • Ensure that staff communicate with applicants, members, families, and Authorized Representatives to relay clear information and expectations regarding the information needed for an accurate eligibility determination • Work collaboratively and proactively with the TennCare Eligibility Director to identify areas of concern and create continuous quality and process improvement plans Qualifications The minimum qualifications are listed on the State of Tennessee’s Careers page: https://www.tn.gov/careers/apply-here.html.
https://www.cnm.org/job/regular-programs-manager/
- Job Description: - Job Purpose Summary: The Community Mobilization Officer will lead community mobilization and awareness-raising interventions in Gambella refugee camps to prevent incidents of GBV. The Community Mobilization Officer will work with refugee mobilizers and partners to identify any protection concerns for women and girls in the camp through discussions, meetings and outreach campaigns, and share them with actors for appropriate action. KEY DUTIES AND RESPONSIBILITIES - Engage community members and structures including the RCC (Refugee Central Committee), women’s associations, youth and children to design participatory strategies and actions to prevent GBV in the camp. - Mobilize and engage community members, structures and partners in prevention and response events in the camp, including discussions, trainings etc. - Coordinate with partners to integrate GBV prevention activities into their activities. - Facilitate community events such as 16 Days of Activism to create awareness on the prevention of and response to GBV. - Engage community PSEA focal points to establish community-based complaints mechanisms and to raise awareness toward PSEA. - Work closely with partners, including ARRA and UNHCR, and community leaders and focal points, to develop strategies for preventing violence related to firewood collection. - Monitor protection concerns for women and girls in the camp and share concerns with actors providing services. - Maintain good relations with community leaders in the camp. - Maintain records of activities and produce regular reports, including weekly updates or Situational Reports (SitReps). - Contribute to a positive International Medical Corps team environment. - Participate in regular camp meetings as requested by the supervisor. - Facilitate trainings and workshops on gender and GBV-related issues for health care providers, GBV staff, education providers, UNHCR, local authorities, women’s groups, refugee community leaders, religious leaders, youth groups, NGO workers and any other identified groups. - Monitor community workers and/or community activists in awareness-raising activities, provide training when needed, and mentor them on a weekly basis. - Develop IEC and BCC materials in collaboration with the GBV team, ensuring messages are appropriate for the community and tested before dissemination. - Collaborate with the GBV Response Officer to ensure the ongoing needs of women and girls are met in awareness-raising activities. - Prepare and submit weekly, monthly and three-month work plans in a timely manner and incorporate manager feedback. - Develop and review activity and spending plans for new grants. - Deal with human resource issues as needed; hire community workers, conduct performance evaluations and terminate staff as needed. - Estimate quarterly program purchase requests and monthly cash projections according to the field requirements and submit requests on time. - Ensure that all relevant financial documentation is accurately completed and submitted to finance in a timely manner as required by the organization’s finance policy. - Individual Guiding Principles - Ensure vulnerable women, including survivors of GBV, are safe at all times; respect their wishes, rights and dignity; keep their experiences confidential and do not discriminate against them. - Job Requirements: - Required Education and Qualifications - BSc degree in Social Work, Public Health, Nursing or Gender; MSc preferred - A trained community mobilizer in a reputable organization - Background in SGBV, Human Rights and/or Protection. - Experience in participatory techniques and community mobilization - Ability to lead, train, supervise, facilitate and motivate other GBV field workers in their respective tasks in a professional, respectful and supportive manner. - Positive and professional attitude; able to organize, maintain composure and prioritize work under pressure, work overtime when needed, and coordinate multiple tasks while maintaining attention to detail - Ability to work as a member of a team is essential - Ability to speak the Nuer language is preferred - Ability to communicate well in English and write clear and concise reports in English. - How To Apply: - Interested applicants who meet the above requirements should submit their application letter and CV through This Link on or before March 18, 2021. Female candidates are highly encouraged to apply. Only shortlisted candidates will be contacted. Note: IMC is an equal opportunity employer; candidates from all backgrounds, religions and ethnic groups, as well as qualified women and people living with disabilities, are all encouraged to apply. International Medical Corps never asks job applicants for a fee, payment, or other monetary transaction. If you are asked for money in connection with this recruitment, please notify International Medical Corps at [email protected].
https://doctorsonlinee.com/2021/03/10/3-international-medical-corps/
"Angela was outstanding - she is extremely knowledgeable in her field and inspired significant confidence. She is a first-rate communicator, extremely proactive and hugely supportive, as well as being highly professional and very personable." "Paul Turner has been amazing throughout the whole process. He's given an incredibly personal and friendly service, and has been so patient and understanding. Nothing has ever been too much trouble, our input has been welcomed, and decisions made easier with Paul's guidance. He is a absolute credit to your company, and to the profession and we feel very fortunate to have met him." "She clearly explained the house selling process; the hitches that could happen. She was friendly and efficient." "Very human, understanding, trustworthy, feeling of honesty" "Excellent service, personal / adaptable / friendly and sensitive during a difficult time." “Very approachable. Kevin's advice was explained in a clear and concise manner, without a lot of technical jargon.” C H, Devizes “Friendly exact explanations on all legal aspects, helpful throughout and always with a smile.” Mr & Mrs C, Westbury “Very thorough, with attention to every detail, instilled confidence that helped everything to progress quickly and smoothly." E G, Bradford-on-Avon "Catherine was very detailed in her approach to a number of issues that arose with the purchase of my property. She delivered a great service and kept me well informed.” A.D. Calne "Helen was extremely knowledgeable, efficient and acted very quickly on our behalf. An excellent service." C.A Mort, Winterbourne "Understanding, clear and honest advice. Transparent with regard to costs. Excellent". Ms M, Melksham "Thank you for all your help, guidance and understanding over the last 2 years. You and your team have been outstanding during a very difficult time and managed the case through to a highly satisfactory conclusion." T.G "My lawyer was very calm and clear and she put me at ease during a very stressful time. The outcome was exactly as I'd hoped." C A, Chippenham "I couldn’t have asked for anyone better than Eleanor, impeccable and timely service. I never had to ask twice and she always came back to me after leaving a message." C.T, Surrey "All-round professional and knowledgeable" C.Y Corsham Click here to request a call back from a member of the Goughs Team.
https://www.goughs.co.uk/site/about_us/testimonials/
Delasco, located in Council Bluffs, is searching for a Customer Service Representative to answer incoming calls and provide product, pricing and order information in response to customer inquiries. You’ll process customer orders, payments, exchanges and returns via phone, fax, email and internet and respond to customer requests in an accurate and timely manner. This Role Will Be Responsible For… • Answer incoming calls, identify and assess customer needs, and respond to customer inquiries via phone, email, fax or web in a timely manner. • Provide customers with detailed product specifications, pricing and order information. • Process phone, fax, email, web and catalog orders by accurately entering customer account, pricing, billing and payment information into the system and confirming the order with the customer. The Ideal Candidate Is Someone With… • Strong customer service skills and data entry skills. • Ability to speak, read, and write in English is required. • Ability to listen attentively to accurately gather information. • Ability to communicate in a clear, concise and professional manner. • Ability to analyze and effectively resolve customer problems and concerns. • High school diploma or equivalent. Minimum of 2 years previous customer service experience.
https://www.ziprecruiter.com/c/Delasco/Job/Customer-Service-Representative/-in-Omaha,NE?jid=DQ319c46d1d8ada56183b36ea06d09364a&job_id=eaceed93060d4eca0e6c5532bfecae57
If you think a request for information from the National Labor Relations Board is irrelevant and you have more important things to worry about … and you’ll just let it sit awhile … better think again. An NLRB panel recently found that an employer had violated the National Labor Relations Act by failing to respond in a timely manner to a union’s requests for information, even though — as this alert from Barran Liebman points out — “the NLRB ultimately determined that the request for information was, as the employer argued, irrelevant … .” In short, the board found — in its Oct. 23 ruling in the case of IronTiger Logistics Inc. and International Association of Machinists and Aerospace Workers, AFL-CIO — that an employer has a good-faith duty under the NLRA “to respond in a reasonably timely manner to a union request for ‘presumptively relevant’ information—even when the employer believes it may have actual grounds for not providing that information” — this from another alert from Ballard Spahr. “This decision,” it says, “expands the duty of employers by holding that they must respond to requests for what may be irrelevant information.” For the record, and for a complete understanding of the NLRB’s reasoning behind the decision, this link takes you to the actual ruling. (Scroll to the free PDF download marked Oct. 23.) As the ruling states: “The Respondent was obligated to inform the union in a timely manner that it would not provide the information and the reasons for its refusal. An employer cannot simply ignore a union’s information request.” According to Ballard Spahr’s rundown of the case, [It] arose from a dispute about the apportionment of freight-delivery assignments between IronTiger and TruckMovers, two transportation firms that shared common ownership. The union represented IronTiger’s drivers but not TruckMovers’ drivers. After filing a grievance concerning the dispatch of loads to TruckMovers’ drivers, the union requested information related to all units of work dispatched to both companies’ drivers. Four and a half months passed before IronTiger even acknowledged the request, claiming generally that it was ‘harassment, burdensome and irrelevant.’ By then, the union had filed an unfair labor practice charge because IronTiger had provided no response. In a 2-1 vote, the Board affirmed the Administrative Law Judge’s (ALJ) holding that IronTiger had violated Section 8(a)(5) of the [NLRA] by failing to respond in a timely manner to the union’s request for information. The Board began with the well-established premise that ‘a unionized employer must provide, on request, information that is relevant and necessary to the union’s performance of its duties as collective-bargaining representative.’ The Board further stated that ‘an employer must timely respond to a union request seeking relevant information even when the employer believes it has grounds for not providing the information.’ In other words, the NLRB ruled that, because the union’s request for information involved unit employees, it was “presumptively relevant,” entitling the AFL-CIO to a response within a reasonable time. 
It didn’t define a “reasonable time,” but — says Barran Liebman — “made clear that 4.5 months exceeded this perimeter significantly.” That alert goes on: In dissent, one board member argued that the majority’s ruling gives unions the latitude to ‘hector employers with information requests for tactical purposes that obstruct, rather than further, good-faith bargaining relationships.’ While this opinion governs an employer’s obligation to respond only to a ‘presumptively relevant’ request, it serves as a reminder to employers to pay attention to their response times. An internal deadline of 30 days to respond is prudent, even when the employer’s response simply explains why a particular request is irrelevant.
http://blog.hreonline.com/2012/11/15/better-not-keep-the-nlrb-waiting/
The Office of the Superintendent of Bankruptcy (OSB) contributes to an efficient marketplace by maintaining the integrity of the Canadian insolvency system, thereby strengthening confidence in the Canadian economy. The OSB is responsible for the administration of the Bankruptcy and Insolvency Act (BIA), as well as certain aspects of the Companies’ Creditors Arrangement Act (CCAA). It licenses and regulates the insolvency profession, ensures an efficient and effective regulatory framework, supervises stakeholder compliance with the insolvency process, and maintains public records and statistics. In its oversight capacity, the OSB seeks to determine whether stakeholders (LITs, debtors, receivers, creditors, monitors) are fulfilling their obligations, as set out in the legislative and regulatory framework. To support this mandate, the OSB’s compliance program comprises the following three components: Compliance Promotion, Compliance Monitoring and Regulatory Investigation, and Enforcement. The purpose of this document is to articulate the OSB’s approach to enhancing its compliance promotion activities, while recognizing that individual LITs are responsible for their own compliance (pursuant to the BIA and its General Rules, including the Code of Ethics for Trustees) and corporate LITs share responsibility for the compliance of all of their employees and stakeholders. Compliance promotion includes any activity that increases awareness, informs, motivates or changes behaviour, and encourages voluntary compliance with regulatory requirements. As a key component of the OSB’s compliance program, compliance promotion provides a means for the OSB to increase stakeholder awareness and understanding of regulatory requirements and the consequences of non-compliance, with the objective of increasing voluntary compliance and achieving more efficient compliance outcomes. 2.0 Objectives of Compliance Promotion The objective of the OSB compliance program is to instill stakeholder confidence in the insolvency system by ensuring high levels of compliance and appropriate consequences for non-compliance. Regulated parties are ultimately responsible for being aware of and complying with regulatory requirements. Effective compliance promotion activities help regulated parties to fulfil their responsibilities. Barriers to compliance can range from lack of understanding to difficulty complying with requirements that are nuanced or complex. Compliance promotion activities are a critical part of a robust compliance program. The objective of compliance promotion is to strengthen the OSB’s compliance program by encouraging voluntary compliance via a focus on: - Emphasizing the rights and responsibilities of stakeholders in the insolvency system. - Clarifying and communicating the OSB’s position and interpretation of the regulatory framework. - Reinforcing the consequences of non-compliance. Compliance promotion includes helping all stakeholders understand the shared responsibilities within the insolvency system. Stakeholders who are aware of the OSB’s interpretation of the regulatory framework can self-correct and may also report possible non-compliance through the OSB’s complaints process. Effective enforcement and the publication of enforcement cases will support general deterrence.
3.0 Principles of Compliance Promotion The OSB’s Compliance Promotion Framework is guided by the following principles: 3.1 Transparency The OSB will strive to communicate regulatory guidance and decisions, where appropriate, in a clear and concise manner that stakeholders can understand. The OSB will stand behind its guidance and decisions. This should allow stakeholders to arrange their affairs with the confidence that if they follow the guidance provided, they will be found to be in compliance with the regulatory framework. 3.2 Agility The OSB will strive to communicate its views in a timely and responsive manner, wherever possible. Communication from the OSB will adapt to the needs of stakeholders to support well-informed decisions and will seek to minimize the time that they are operating in an unclear regulatory environment. 3.3 Proportionality The OSB will strive to apply a risk-based approach and seek consequences that are appropriate to the risk posed, dealing with risks of similar severity with similar consequences to provide stakeholders with a better sense of what to expect in cases of non-compliance. 4.0 Compliance Promotion and the OSB Compliance Program There are three components to the OSB compliance program: compliance promotion, compliance monitoring and regulatory investigation, and enforcement. The proposed Compliance Promotion Framework seeks to enhance and support the two other components of the OSB’s compliance program. [Figure: Compliance Promotion and the OSB Compliance Program] Compliance promotion enhances compliance monitoring and regulatory investigation by informing stakeholders of the OSB’s interpretation of the regulatory framework, providing insights on the OSB’s areas of concern, and clarifying the standards the OSB is using to determine non-compliance. Compliance monitoring and regulatory investigation is focused on the targeted oversight of risk areas. Enforcement is focused on addressing instances of non-compliance and seeking appropriate consequences. Publishing non-compliance decisions, as appropriate, can provide stakeholders with information on the nature and consequences of non-compliance. 5.0 Compliance Promotion Tools The OSB’s compliance promotion tools can be divided into the following categories: - Proactive publication of guidance and decisions - Engagement with stakeholders - Publicizing enforcement decisions 5.1 Regulatory Guidance To communicate its expectations and interpretations of the regulatory framework, the OSB issues guidance and decisions to stakeholders. This may include: 5.1.1 Directives The BIA provides the Superintendent with the authority to issue Directives to detail the administrative requirements of the Act. The OSB will continue to review, analyze and propose amendments to Directives to ensure they remain relevant and effective, provide clear and timely information and balance the interests of stakeholders in a way that helps protect the integrity of the insolvency system. 5.1.2 Guidance The OSB issues guidance to inform stakeholders of its interpretation of the regulatory framework. The OSB strives to ensure guidance is timely, relevant and easily searchable. 5.1.3 Decisions The OSB also highlights relevant administrative or court decisions which can further assist stakeholders in understanding the regulatory requirements and how the OSB is applying them. 5.2 Engagement with Stakeholders Engagement with stakeholders encompasses a two-way sharing of information.
Presentations, meetings and discussions serve to support compliance by providing the OSB with timely information on issues which need to be addressed and clarified, and by providing regulated parties with information on the elements necessary to comply. Engagement will include: 5.2.1 Participation in Presentations and Conferences The OSB will continue to participate in presentations to industry associations and speaking engagements with stakeholder groups, and to provide an opportunity for stakeholders to participate in OSB-led presentations and conferences. 5.2.2 Engagement with Professional Associations The OSB is committed to continuing its engagement with relevant professional associations on issues of mutual interest. 5.2.3 Engagement with LITs The OSB will continue to engage directly with LITs through public consultations and other direct communications between OSB analysts and LITs. 5.2.4 Engagement with debtors and creditors The OSB will continue to seek opportunities to engage with and provide information to debtors and creditors. 5.3 Non-compliance Decisions The OSB will publish non-compliance decisions, as appropriate, in an effort to inform insolvency stakeholders about the consequences of contraventions, which will encourage them to comply and provide them with information on what constitutes non-compliance. Publication of non-compliance decisions will also support public confidence in the integrity of the insolvency system in that there is a vigilant regulator which pursues serious cases. The information also contributes to the protection of consumers and the public by promoting awareness pertaining to the conduct of stakeholders and the expected standards of conduct. At the same time, the information allows stakeholders to assess whether their own respective conduct meets expectations and to take necessary steps to ensure compliance. Non-compliance decisions include: 5.3.1 Conservatory Measures and Professional Conduct Investigations The OSB publishes the issuance of conservatory measures and the outcomes of professional conduct investigations on its website. 5.3.2 OSB Interventions The OSB will publish summaries of significant cases where the OSB intervened in an insolvency matter and where the topic of the intervention may be of value to the insolvency profession in understanding what is expected to achieve compliance. 5.3.3 Criminal Cases The OSB will publish summaries of criminal cases following a guilty plea or sentencing to help stakeholders achieve a greater understanding of criminal non-compliance and to deter future behaviour of a similar nature. Conclusion The OSB’s approach to compliance promotion activities has been developed using best practices of other government departments and international regulatory organizations. Compliance promotion will play an important part in realizing the OSB compliance objectives. The OSB welcomes feedback from stakeholders on the Compliance Promotion Framework at: [email protected].
http://www.strategis.ic.gc.ca/eic/site/bsf-osb.nsf/eng/br04612.html
Assessment Leads manage all aspects of an assessment. The Assessment Lead's primary role is to provide consultative leadership during the assessment phase of the sales cycle. They provide direction for the assessment team on specific engagements and act as a senior representative for assessment-related activities. They manage tasks, deliverables and the outcomes of the assessment, such as functional requirements, Target Operating Models, costs and timelines. The Assessment Lead is responsible for overall management of the assessment. The Solution Engineering Assessment Lead is a commissionable sales management position responsible for supporting sales of modernization solutions. Responsible for leading a team during an assessment of a client's current system against the Modern Banking Platform. The assessment team consists of subject matter consultants with application, banking and process expertise. The Solution Engineering assessment team plays a primary role during the discovery, requirements and selling phases of the buying cycle, providing the data points required for scoping, pricing and contracting. * Provides leadership for Modern Banking Platform assessment projects, including monitoring and reporting progress, managing issues and risks and creating project documentation. * Based on assessment findings, creates the scope document, migration approach, high-level timeline and resource plans for delivery of the Modern Banking Platform. * Responsible for activities involved in selling products and/or services, developing new accounts and/or expanding existing accounts. * Identifies critical market segments through market research and competitor analyses and recommends sales strategies for improvement. * Executes sales policies and practices. Experience: * Experience in Project Management working on software application implementations in the financial industry, with a sound understanding of a typical project lifecycle * Initiative and ability to multi-task under tight deadlines * Flexibility to quickly adapt to changing environments * Results-oriented with a high sense of urgency to meet client requirements * Requires strong business skills, industry knowledge, financial management and planning skills, long-term vision and executive presence * In-depth knowledge of products and services * General knowledge of financial and/or payment solutions technology including systems, applications and banking practices * Excellent skills in communicating ideas both verbally and in written form in a clear, concise and professional manner, including presentations and facilitation * Ability to communicate effectively with all levels of management in an organized, professional manner * Skill in productivity, planning and workload management * Requires solid decision-making and problem-solving skills * Requires the ability to establish and maintain effective working relationships with all levels of management (internally/externally), employees, clients and the public
https://www.banking.jobs/jobs/frg-technology-consulting/solution-assessment-lead/-new-york-ny/1590226341693896137
Court ruling on demand response breeds uncertainty Officials at PJM Interconnection are reviewing a Friday federal appeals court ruling that invalidated a 2011 Federal Energy Regulatory Commission order providing incentives for electricity users to consume less power, a practice known as demand response. FERC has no jurisdiction over demand response, the court said (Greenwire, May 23). The court ruling strikes a blow to the Obama administration’s energy efficiency efforts and injects a large degree of uncertainty into how the rapidly expanding demand-response industry will play a role in the nation’s electricity markets. The divided ruling by the U.S. Court of Appeals for the District of Columbia Circuit effectively said FERC overstepped its authority under the Federal Power Act in its Order 745, ruling that demand response is a function of retail electricity markets, which are governed by the states. The goal of Order 745, which was cheered by environmental groups, was to establish parity between demand-response providers — which pool reduced energy usage by condominiums, hospitals and universities — and retail electricity providers (Greenwire, March 16, 2011). Grid operators and power providers, represented by the Electric Power Supply Association, the American Public Power Association and others, challenged the FERC order, saying the commission was unlawfully wading into retail electricity markets when the Federal Power Act granted the agency jurisdiction solely over wholesale markets. Retail sales, they argued, are regulated by states. “PJM is still reviewing the order and its ramifications with respect to PJM’s energy, ancillary service and capacity markets,” the company said in a statement. “At this point, it’s business as usual for us,” Andrew Ott, PJM’s executive vice president for markets, said during a conference call Friday. “We can’t predict what the future will hold. FERC will have to make some clarifications as to what it means for our tariff. We’re still studying it,” he added. In its statement, PJM noted that the ruling is not in effect until parties have had time to ask for a rehearing by the court. PJM plans to “provide further information regarding this matter at the Markets and Reliability Committee meeting on May 29,” it said. In the D.C. Circuit’s 2-1 ruling, Judge Janice Rogers Brown wrote that FERC’s contention that retail markets affect wholesale rates and that, therefore, the commission has jurisdiction is unavailing. Under that premise, she wrote, FERC’s authority would be “almost limitless.” “The commission’s rationale, however, has no limiting principle,” wrote Brown, a Republican appointee. “Without boundaries, [FERC’s interpretation] could ostensibly authorize FERC to regulate any number of areas, including the steel, fuel, and labor markets.” She added in her opinion, “The commission’s authority must be cabined by something sturdier than creative characterizations.” Senior Judge Laurence Silberman, another Republican appointee, joined Brown’s majority opinion. Senior Judge Harry Edwards dissented. He noted that several conditions have to be met before demand-response providers are paid under the order. Those factors, he contended, largely fall on the retail, not the wholesale, market. 
“Focusing on the market in which the consumption would have occurred in the first instance,” wrote Edwards, a Democratic appointee, “one can conceive of [the order] as impermissibly falling on the retail side of the jurisdictional line.” Edwards also said there was enough ambiguity in the statute on the demand-response issue that FERC deserved deference from the court. Julien Dumoulin-Smith, an analyst at investment bank UBS, in a Friday note said the ruling “could have significant impacts on power markets.” “We see states taking over [demand response] regulations as cutting both ways. We see the potential for more generous compensation in jurisdictions encouraging participation, while those that have been opposed, implementing tighter rules,” Dumoulin-Smith wrote. Environmental groups that fought to maintain FERC’s policies said the decision was ominous and would likely limit the agency’s ability to act in the future. “Our higher-level concern is that the grid is evolving pretty quickly with new resources, smart grid technology, on-site power, and we’re concerned this decision could really tie FERC’s hands in adapting to a new grid,” said John Moore, a senior attorney for the Natural Resources Defense Council’s Sustainable FERC Project. “It’s a pretty strong decision that doesn’t give FERC any room to improve the rule.” Demand-response programs will now be directly tied to a patchwork of state programs that are “uneven,” making it more difficult for businesses like Wal-Mart Stores Inc. and other industrial companies — new and established — to sell their demand-response services into the grid, Moore added. The decision could also hurt the integration of renewables because demand response helps in supporting the incorporation of wind and solar onto the grid, he said. “If demand response is going to mean anything in wholesale markets, it’s going to require states to better integrate their retail programs,” Moore said, while adding that most states do not currently operate vibrant retail demand-response markets.
https://governorswindenergycoalition.org/court-ruling-on-demand-response-breeds-uncertainty/
Demand Side Response (DSR) is an electrical power concept that offers potential benefits both to grid operators and to medium and large-scale business users. This article considers what companies should weigh up when planning to participate in some types of DSR scheme.
The concept arises because electrical demand across a country can occasionally reach peaks far in excess of normal consumption levels. One solution could be to build extra generating capacity to accommodate such peaks – but building power stations that may only be used for a few hours a year does not make great economic sense. It would be better if major users organised their demand to reduce their load during peak periods, so the grid would not be challenged by spikes that it could not handle. However, for this to happen, users must be incentivised to participate in a DSR scheme, and they must be aware of when peaks in demand occur.
Demand side response legislation
European legislation is in place to help companies participate in DSR schemes. According to Article 2(20) of Directive (EU) 2019/944 of the European Parliament and of the Council of 5 June 2019 on common rules for the internal market for electricity, ‘demand response’ means 'the change of electricity load by final customers from their normal or current consumption patterns in response to market signals, including in response to time-variable electricity prices or incentive payments, or in response to the acceptance of the final customer's bid to sell demand reduction or increase at a price in organised markets as defined in point (4) of Article 2 of Commission Implementing Regulation (EU) No 1348/2014, whether alone or through aggregation’.
The legal framework for demand-side participation in the EU Internal Electricity Market is mainly shaped by:
- the Network Code on Demand Connection (DCC) and
- the Electricity Balancing Network Code (NC EB, EBGL)
Many types of DSR schemes are available – examples include grid balancing, capacity market, and peak avoidance. However, while any scheme that addresses the threat of unsustainable peaks is attractive to users and grid operators alike, DSR schemes also contribute to a wider need for more insightful grid management.
Smart grid status and outlook
There is a steady and inevitable progression away from polluting coal-fired power generation; the UK government, for example, will ban coal-based generation from 2025. Yet coal-fired power stations, while polluting, are at least stable and predictable. They can be relied upon to deliver electricity on demand, unlike renewable resources, which are intermittent. This becomes more pressing as the energy market moves from major infrastructures that depend on a few large power stations to more decentralised schemes that rely more on renewable energy and ‘embedded generation’, where consumers have their own on-site renewable energy resources.
A more complex energy market
As these factors cause the energy market to become more complex, it is imperative that the grid can be effectively managed and monitored. This requires state-of-the-art digital technology to be implemented across all areas of the electricity system, from generation to transmission, distribution, supply and demand. Increasing grid controllability and visibility also creates revenue generation opportunities for users with suitable resources.
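To make the peak-avoidance logic concrete, the sketch below shows a minimal curtailment decision: non-critical loads are shed when a signalled price crosses a threshold. All names, loads, and the threshold are illustrative assumptions rather than parameters of any real scheme.

```python
# Minimal sketch of a demand-side response decision: when the grid signals
# a peak period (here via a price threshold), shed non-critical loads.
# All names and numbers are illustrative assumptions, not a real scheme.

PEAK_PRICE_THRESHOLD = 0.30  # EUR/kWh above which we curtail (assumed)

# (load name, demand in kW, is_critical)
LOADS = [
    ("hvac_floor_2", 120.0, False),
    ("cold_storage", 80.0, True),
    ("ev_charging", 150.0, False),
]

def curtailment_plan(price_eur_per_kwh: float) -> list[str]:
    """Return the non-critical loads to shed during a signalled peak."""
    if price_eur_per_kwh < PEAK_PRICE_THRESHOLD:
        return []  # normal operation, nothing to shed
    return [name for name, _, critical in LOADS if not critical]

if __name__ == "__main__":
    for price in (0.12, 0.45):
        print(f"price={price:.2f} EUR/kWh -> shed {curtailment_plan(price)}")
```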
Feed-in tariff schemes that allow sites with solar power or wind turbine renewable energy installations to sell energy to the grid are well established; DSR schemes are now available that allow such ‘user-generators’ to hook up with aggregators who manage load connections to the grid on their behalf. The aggregators pay for the energy they collect, as they use it to support the grid and maintain its frequency.
Yet these demand-side energy resources do not have to be limited to renewable energy generators. Data centres and many other business premises have UPS installations that can be rated to hundreds of kW or even MW. Normally, power flows from the grid mains supply into a UPS through its rectifier, whose DC output is used to charge the UPS battery and feed the critical load via the UPS inverter. If the mains fails or exceeds acceptable parameter limits, the load is switched to the UPS batteries, which supply backup power until either the mains supply is restored, a local generator starts up, or the load can be shut down safely.
UPSs as grid energy resources
However, some UPS manufacturers have significantly adapted their systems by introducing bi-directional rectifiers. These can draw power from the battery set and convert it back to AC to feed into the local grid supply. During this operation, the inverter continues to support the load. This potential for revenue generation is becoming safer and more attractive for UPS operators as lithium-ion batteries start to replace lead-acid VRLA types. Lithium-ion batteries can be recharged and made ready for use again much more quickly, while also offering many times more discharge/recharge cycles.
Irrespective of whether VRLA or lithium-ion batteries are being used, though, some risk always remains. Any discharge of battery-stored energy will require recharge and recovery time – yet the battery must always be left with enough capacity to handle power outages as normally expected. Additionally, in mission-critical data centres that have an Uptime Institute Tier Rating for resilience and redundancy levels, allowing a UPS to participate in a DSR program may compromise the facility’s tier rating certification.
Conclusion
Using UPS batteries in a DSR scheme allows a demonstrably greener footprint while also generating revenue – but is the possible increase of risk to the load worth it? The answer depends on each individual site’s circumstances, particularly factors such as the reliability of the electricity mains supply, and the criticality of the load.
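As a closing illustration of the reserve trade-off discussed above, the following sketch computes how much stored energy a UPS could safely offer to a DSR aggregator; the function, figures, and reserve policy are assumptions for illustration only, not vendor guidance.

```python
# Sketch of the reserve arithmetic behind letting a UPS battery take part
# in DSR: energy offered to the grid must never eat into the autonomy
# reserved for outage ride-through. Figures are assumed for illustration.

def exportable_kwh(capacity_kwh: float,
                   state_of_charge: float,
                   load_kw: float,
                   required_autonomy_h: float,
                   min_soc: float = 0.2) -> float:
    """Energy the UPS could offer a DSR aggregator right now."""
    stored = capacity_kwh * state_of_charge
    reserve = load_kw * required_autonomy_h        # outage ride-through
    floor = capacity_kwh * min_soc                 # battery-health floor
    return max(0.0, stored - max(reserve, floor))

if __name__ == "__main__":
    # 500 kWh battery at 90% charge, 100 kW critical load, 1 h autonomy
    print(f"{exportable_kwh(500, 0.9, 100, 1.0):.0f} kWh available for DSR")
```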
https://www.power-and-beyond.com/the-benefits-of-demand-side-response-a-e53afb4a78a73de93d9e997fb9f17114/
Laboratory for Manufacturing Systems and Automation, Department of Mechanical Engineering and Aeronautics, University of Patras, University Campus, Rio Patras 26504, Greece.
Correspondence to: Prof./Dr. Dimitris Mourtzis, Laboratory for Manufacturing Systems and Automation, Department of Mechanical Engineering and Aeronautics, University of Patras, University Campus, Rio Patras 26504, Greece. E-mail: [email protected]
© The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Aim: This work explores the opportunities and challenges associated with the integration of Product Service Systems (PSS) into Smart Grids (SGs) with the aim of improving manufacturing and production.
Methods: A bibliometric analysis was conducted to examine the existing bibliographic material and identify the primary scientific directions for this research area. The aim was to provide a thorough understanding of the research problems, perform an in-depth analysis of the distinctive aspects of the development of scientific research in this field, and visualize the results with the VOSviewer tool.
Results: Industry, society, and academia are being reshaped and reorganized to integrate new information and communications technology (ICT) to enable complete digitization and digitalization of their infrastructures. This trend is driven by the need for more resilient, environmentally sustainable, and human-centric systems, as indicated by the terms Industry 5.0 and Society 5.0. Electrical power generation and distribution are currently critical issues for society. In this context, SGs are examined from the perspective of product-service systems (PSSs). The key enabling technologies are discussed, and a technical discussion based on a presentation of implemented frameworks follows.
Conclusion: Intelligent societies will become the new reality. It is necessary to adapt existing infrastructures toward human centricity, resilience, and sustainability. SGs require further development, with several challenges still to be addressed. However, engineers have several technologies available, including blockchain and PSSs, to promote client activity in societal infrastructures.
Keywords: Energy 5.0, smart grid, product-service system, PSS, Society 5.0, sustainability
Climate change has been an open challenge in recent decades, creating pressure on governments, companies, and the international community to adopt cleaner energy strategies and improve the energy efficiency of their systems and processes/operations. Production and manufacturing are among the most energy-intensive activities in modern society and, by extension, their energy consumption has been the subject of extensive research over the last few years because of increasing public awareness of key environmental issues, including the greenhouse effect and global warming, the strict legislation about permitted emissions, and rising energy costs. As a result, enterprises are moving toward efficient manufacturing. Furthermore, energy efficiency has been recognized on a global scale as a major policy priority to achieve a carbon-neutral society.
More specifically, energy efficiency represents one of the cornerstones for several initiatives, including the EU energy and climate policy, the United States policy called “Getting to Zero: A U.S. Climate Agenda”, and China’s path toward a carbon-neutral society by 2060. In light of the recent advances in Industry 4.0, the upcoming Industry 5.0, and Society 5.0, engineers are now focusing on the design, development, and implementation of strategies to enhance their productivity and remain competitive while also ensuring that the environmental impact of their activities remains as low as possible. Among the possible solutions for energy and waste management, business models have also been transformed by following the servitization paradigm to provide models such as product-service systems (PSSs) and industrial product-service systems (IPSSs), which promote selling of services rather than tangible goods. According to recent reports, global electricity demand is growing and is estimated to increase by 40% by 2040. From the latest European Union (EU) market analysis, it has become evident that under current circumstances, the costs of electric energy production and distribution are now a critical issue. Consequently, energy suppliers are faced with the challenge of producing even more electrical energy to meet market demand, compensate for the production cost of this electrical power, and follow new environmental initiatives for greener and more sustainable power production as well. The solution to this challenge lies in the design and development of frameworks to support power generation from renewable and distributed energy resources, which by extension should also be integrated into the outdated, inflexible, and overstressed centralized electrical grids. Essentially, it has become apparent from the most pertinent literature that energy distribution, beyond the existing environmental challenges and the concerns raised by communities, should be democratized. Consequently, energy distribution in modern SGs is accompanied by a constant exchange of information between the energy suppliers and energy consumers. Therefore, with the proposal and integration of PSSs in SGs, new opportunities for creation of suitable communication channels between the clients and the producers will be established. The resulting communication among the stakeholders will allow the volatility of demand, which remains an ongoing challenge, to be tackled more efficiently. Furthermore, with constant tracking of energy demand vs. energy production vs. the production resources (e.g., fossil fuels, renewable sources), energy producers will be more capable of utilizing the potential of their renewable sources fully while minimizing the environmental footprint of their operations. Another important challenge is addressing the stability of the networks and preventing power outages, which in many cases are related to increased and uncontrolled demand from the clients. Similarly, the decentralization model of the SG offers additional benefits to society as a whole since new opportunities can be created for energy companies, while their competitiveness is maintained. In addition, this model means that energy consumers can also play a more active role in energy production.
More specifically, rather than the classic consumption model, consumers can contribute to the SG by generating electrical power (via utilization of individual devices) that can be stored and then delivered to meet demand in peak energy consumption periods. Therefore, by using digital technologies such as blockchain, distributed ledgers can be implemented to maintain a complete record of the energy transactions, set up virtual contracts between clients and producers, and provide a space for implementation of a client reward system. The generation of more efficient production schedules that focus on rebalancing energy supply and demand, driven by the ever-increasing need to reduce costs, is thus an imperative. Ultimately, in an attempt to address the challenges mentioned and the literature gaps identified in the preceding paragraphs, this work presents and discusses the latest developments in the field of PSSs (including IPSSs) and the servitization of energy distribution that takes advantage of the cutting-edge digital technologies that were introduced in the Industry 4.0 framework. The remainder of this work is structured as follows. In Section 2, the most pertinent and relevant literature in the SG field and the accompanying technologies and techniques are investigated. Then, in Section 3, technical details are provided through presentation of frameworks that were mainly developed in the industrial domain but are expandable to the SG concept. In Section 4, the key research challenges for the near future are summarized, possible solutions are discussed, and a conceptual solution framework is presented. Finally, in Section 5, conclusions about the research are presented, and aspects of future work are discussed.
A bibliometric analysis was conducted to examine the existing bibliographic material and identify the primary scientific directions for this research area. The aim was to provide a thorough understanding of the research problems and perform an in-depth analysis of the distinctive aspects of development of the scientific research in this field. The sequence for the bibliometric analysis is presented in Table 1.
Review methodology
|Database||Scopus|
|Article type||Scientific articles published in peer-reviewed journals|
|Search query (TITLE)||(“Smart Grid”) and (“product service systems” or “PSS”) and (“energy”)|
|Time frame||2012-2022|
|Identification of publication type||Journal articles only; conference papers, books, and book chapters|
|Language||English|
|Choice of the field of publication||Engineering and manufacturing relevant domains|
|Screening & paper selection procedure||Full paper available; article in English; article in the manufacturing domain; article related to maintenance|
|Results||181|
The initial search returned a total of 181 scientific literature articles. These articles comprised 51 journal articles, 92 conference papers, 10 book chapters, two books, and 26 conference reviews. In addition, with regard to the relevant topics, the majority of these publications fall into the categories of engineering, energy, computer science, mathematics, and environmental science. Next, the results dataset was converted into the comma-separated values (CSV) format for further processing. VOSviewer software was used in an effort to visualize the results and analyze their bibliometric form as presented in Figure 1.
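Before turning to VOSviewer's own functionality, the keyword co-occurrence counting that underlies such maps can be sketched as follows; the column name "Author Keywords" and the semicolon separator are assumptions about the Scopus export format, not a description of VOSviewer internals.

```python
# Sketch of the keyword co-occurrence counting that underlies a
# VOSviewer-style map, starting from a Scopus CSV export. The column name
# "Author Keywords" and the "; " separator are assumed export conventions.
import csv
from itertools import combinations
from collections import Counter

def cooccurrence(csv_path: str) -> Counter:
    pairs: Counter = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            kws = {k.strip().lower()
                   for k in row.get("Author Keywords", "").split(";")
                   if k.strip()}
            for a, b in combinations(sorted(kws), 2):
                pairs[(a, b)] += 1  # one co-occurrence per article
    return pairs

if __name__ == "__main__":
    for (a, b), n in cooccurrence("scopus_export.csv").most_common(10):
        print(f"{a} -- {b}: {n}")
```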
Specifically, VOSviewer provides the functionality required to create a keyword map based on shared networks, and it can thus create maps with multiple items, along with publication maps, country maps, journal maps based on networks (co-citations), and maps with multiple publications. Less relevant keywords can be removed, and the number of keywords used can be adapted by the users. In summary, the functionalities of the VOSviewer software extend to the support of data mining, mapping, and grouping of articles retrieved from scientific databases. Topic mapping is essential for bibliometric research.
Technological advancements and the adoption of the Internet of Things (IoT) in business have sparked the Industrial Revolution 4.0 and the rise of Energy 4.0. This section discusses the advantages and the difficulties that the digital revolution has brought to the energy sector. Energy usage contributes significantly to global emissions and thus to climate change. Additionally, the associated energy costs are rising globally. As a result, there is growing pressure to address both the volume and the type of energy consumption across all sectors by implementing new and creative solutions. New electricity sources such as wind and solar energy have now been demonstrated to be mature and dependable technologies, but there are still related difficulties. The next step will be to tailor these new forms of energy generation to the users’ consumption patterns. Because industrial users account for more than 40% of the global total energy consumption, there is a significant opportunity to increase energy efficiency by taking advantage of important trends in industry. In Figure 2, the correlation between industrial revolutions and energy is illustrated. More specifically, when mechanical production began to replace manual labor in the late 18th century, the first major industrial revolution occurred. Chronologically, the second industrial revolution took place a century later, following the widespread electrification of industrial processes. Subsequently, electrical grids began to be developed on a global scale. In the third industrial revolution, which began in the middle of the 20th century, process automation and computers were introduced to enable further optimization of the production process. The fourth industrial revolution, which is also known as Industry 4.0, is currently using new smart and connected systems to boost the flexibility and overall productivity of industry. By extension, the interconnectedness of machinery, larger systems, and devices both within and between industrial sites and users has led to increased manufacturing intelligence. The sustainable energy transition and Industry 4.0 thus share important characteristics that can be interconnected to pursue a sustainable energy transition toward the realization of Industry 5.0.
Figure 2. Key developments in energy in parallel with industrial revolutions.
Some fundamental guidelines for incorporation of Industry 4.0 tools that will enable Internet connectivity and usage in operational and industrial processes to aid in the implementation of Energy 4.0 are summarized in the following points:
● Interoperability: Through use of the Internet and its services, interoperability represents the connection of various components and human resources.
● Virtualization: Information from sensors, simulation models, back-office systems, and other resources can be made into virtual copies.
● Real-time capability: The capacity to gather data, conduct analysis, and reach decisions immediately with near-zero latency.
● Modularity: The ability to replace, add, or remove components as needed.
Consequently, through the implementation of Industry 4.0 and Energy 4.0, engineers have laid the foundations for the realization of Energy 5.0. More precisely, developments falling within the framework of Energy 4.0 have also been defined in the literature as Smart Grid 1.0 (i.e., the first-generation SG). Similarly, the imminent changes and advances that are being made towards Energy 5.0 are defined as the second-generation SG (Smart Grid 2.0).
Energy 4.0 is a concept that was introduced within the framework of Industry 4.0, which refers to the digitization of the energy sector. A detailed definition of Energy 4.0 is necessary here, because Energy 5.0 is heavily correlated with this concept as it essentially represents the next stage in its evolution. It should be stressed here that the energy sector includes key areas such as energy generation, distribution, storage, and marketing, among other aspects. The reasons behind these changes include the fact that the physical world is changing at an unprecedented speed, with significant issues to be addressed in intermittent renewables, nuclear power, and new transmission and distribution grids, among other areas. Additionally, the commercial energy world is changing (e.g., unbundling, trading, and new products). Finally, another significant reason is the constantly growing collection and flow of big data sets.
Cyber-physical systems (CPSs) are essential elements of Energy 4.0. CPSs are composed of physical entities and are controlled or monitored using computer-based algorithms. The energy industry can also be considered to be one huge and highly complex CPS. As a result, the energy industry is likely to be seriously affected by cutting-edge Industry 4.0 technologies. Figure 3 illustrates the concept of the CPS in the energy sector as the convergence of energy law frameworks and information and communications technology (ICT) law frameworks, as presented by the global community during recent years.
Figure 3. Energy 4.0 business models as a basis for development of the Energy 5.0 business model. AbLaV: load management, interruptible loads; BNetzA: German Federal Network Agency for Electricity, Gas, Telecommunications, Postal Services and Railways; EnWG: German Energy Industry Act (Energiewirtschaftsgesetz); IT: information technology; MiFiD: Markets in Financial Instruments Directive (2004/39/EC); REMIT: Regulation on Wholesale Energy Market Integrity and Transparency.
A plethora of new technologies has been developed as a result of the digital revolution occurring in the electricity sector, along with exponential increases in both data processing and storage capacity. Many advantages have accompanied this change; they are summarized in the following:
1. Electricity utilities have further aided in addressing the grid instability and imbalance issues that have been partially exacerbated by the introduction of intermittent renewable energy sources. Widespread adoption of pre-emptive processes and much faster corrective actions have been made possible by implementation of real-time data monitoring.
2. The related interoperability of the various asset types, including renewable generation resources, energy storage facilities, and flexible loads, has been essential to this digital transformation.
3. The detection of process inefficiencies and equipment malfunctions at industrial sites has also been made possible using these data and monitoring-based approaches. Changes in business practices and replacement of outdated technologies with newer, more effective models are only two components of the solution. Artificial intelligence (AI)-based software applications with higher levels of sophistication can also be used actively to optimize energy flows.
4. Reductions in energy consumption ranging from 13% up to 29% have been enabled by use of new technologies and waste reduction. This has resulted in a remarkable 4% reduction in total global CO2 emissions.
5. Placement of major technological innovations is helping to improve the efficiency and sustainability of the energy sector.
6. Increased flexibility has been realized in operational procedures.
7. Increased levels of personalization have been introduced into the services in an attempt to meet the requirements of customers.
8. The capability to obtain real-time, accurate information has been realized.
9. Accurate and thorough monitoring of the entire supply chain, including generation, transmission, distribution, and commercialization, has been enabled.
10. Process automation has improved the operational effectiveness of businesses.
11. Real-time supply and demand adjustments can aid in reducing the number of inefficient operations.
Developed countries have reliable electrical infrastructures and minimal growth rates that allow them to focus on grid standardization, smart meter implementation technology development, and the interoperability of grid-connected and distributed renewable energy generation. The business model for Energy 4.0 offers various advantages, but there are also several challenges that must be addressed to enable a successful transition toward a more sustainable and human-centric Energy 5.0 business model. Therefore, the key challenges and issues that must be addressed by both the developed and developing countries to realize the full range of benefits of SG implementation are discussed hereafter:
1. Technology Development: Over the past decades, ICT has advanced significantly. However, to make the grid smarter, a brand-new communication infrastructure that is highly reliable and attack-resistant must be constructed, either separately from or integrated into the existing World Wide Web. Advanced sensor systems will be created and implemented in both smart buildings and the grid to measure phases, collect consumer consumption data, control automatic circuit breakers to ensure minimal disruption, and perform peak shaving of electrical appliances. It is thus necessary to develop and implement cutting-edge components, including smart appliances, smart meters, effective energy storage devices, high voltage DC transmission devices, and flexible AC transmission system (FACTS) devices.
2. Quality Power to All Households: In the coming years, significant expansion of the power system network will be required to ensure reliable supply of electrical energy to all households. To realize the vision of the SG, the quality of the supply must also be guaranteed. Therefore, to ensure that a high-quality supply to all households is maintained, the current grid will have to be upgraded and expanded. In addition, to reduce the supply gap during peak hours and peak energy costs, distributed renewable energy generation and the ability to save money by shifting loads from peak periods to off-peak periods should also be encouraged[27,28].
3. Reduction of Transmission and Distribution Loss (T&D): T&D losses will be minimized to meet international standards. Technical losses caused by a weak grid, financial losses, and a decline in collection efficiency are the main factors that influence the T&D loss.
4. Interoperability and Cyber Security: An advanced metering infrastructure (AMI) and SG end-to-end security, a revenue metering information model, building automation, inter-control center communications, substation automation and protection, application-level energy management system interfaces, information security for power system control operations, and phasor measurement unit (PMU) communications (including intelligent electronic devices, or IEDs) are among the interoperability standards that have been created by the National Institute of Standards and Technology in the United States.
5. Consumer Support: The lack of consumer awareness of problems in the power sector is one of the main obstacles to the implementation of the SG. Therefore, to reduce peak load consumption and encourage distributed renewable energy generation, consumer support for SG implementation will be essential. Intelligent grid implementation will raise the quality and consistency of the power supply. It will also ensure that utility customers will have an easy-to-use and transparent interface, additional options, including green power, and the ability to save money by shifting their loads from peak times to off-peak hours. Nevertheless, to benefit from the SG on both individual and national levels, consumers must be aware of new technologies and support their utilities.
Industrial Internet of Things (IIoT) sensors can be used by electric companies to collect behavioral data about their assets. Machine learning (ML) algorithms and big data techniques can then be used to analyze this information along with the data acquired from the rest of the power network to predict problems and assist operations managers in determining when to maintain or replace a network asset. Knowing when to perform equipment maintenance increases the equipment’s lifespan and reduces the number of truck rolls, the numbers of field personnel deployed, and the material stock, thus contributing to financial savings. Electric companies are familiar with use of sensing equipment to monitor their assets and have used sensors in their operations for many years. These sensors are used to monitor a variety of parameters, including load, voltage, phase, temperature, and oil viscosity, and they also give Supervisory Control and Data Acquisition (SCADA) system operators advance notice of equipment failure. However, there are two main distinctions between the IIoT devices suggested by Industry 4.0 and the existing sensing and actuating devices that are present in the power grid. First, IIoT devices are much simpler to deploy in larger quantities in any piece of equipment or at any point in the network because of their smaller sizes, lower power requirements, and lower costs when compared with the current devices. IIoT sensors represent the best way to gather the data required for predictive maintenance plans for older assets and for areas in the network that were not previously monitored because most legacy equipment does not contain embedded sensing devices. The second distinction is that the data from the sensors are now delivered using the Internet protocol rather than through private area networks (PANs) or local area networks (LANs).
This quality is essential for rapid and economical deployment of these devices across the entire power grid. It has been predicted that the investment required to upgrade an existing communication network will be at least 60% of its initial cost to compensate for the number of sensors and the amount of data required for an application on this scale. By relying on Internet service providers (ISPs) to manage their communications, utilities shift the enormous costs associated with the development, upgrading, and maintenance of private communication networks to outside companies whose core competency is communications and which can thus provide better service at lower cost[34,35]. Therefore, when moving onto predictive maintenance plans, electricity providers have the best available options because of emerging technologies such as IIoT and ML. A typical conceptual framework for enabling predictive maintenance in the electric industry via use of Industry 4.0 technologies is depicted in Figure 4.
The 2030 Agenda, which is the main document intended to direct global efforts in sustainable development until 2030, was adopted at the UN Sustainable Development Summit in September 2015. The agenda lists 169 specific targets in essential development areas, including poverty, water, energy, education, gender equality, economy, biodiversity, climate action, and many others, in addition to the 17 goals that are known as the Sustainable Development Goals [Table 2]. More specifically, with regard to SG development, SDGs 7, 9, and 13 in particular are relevant.
Relevance of SDGs for sustainable energy and digital industrial development
|SDG||Description|
|SDG 7. Affordable & clean energy||SDG 7 encourages the use of clean, affordable energy. By 2030, it wants to make sure that everyone has access to modern, sustainable, affordable energy. This entails significantly raising the proportion of renewable energy sources in the world’s energy mix as well as doubling the rate of energy efficiency growth everywhere. With a focus on least developed nations, small island developing states, and landlocked developing nations, SDG 7 specifically aims to develop infrastructure and sustainable energy services for all in developing countries|
|SDG 9. Industry innovation & infrastructure||SDG 9 is concerned with business, innovation, and infrastructure. It aims to create robust infrastructures, advance inclusive and sustainable industrialization, and encourage innovation. The promotion of inclusive and sustainable industrialization, as well as increasing the share of manufacturing employment and the proportion of manufacturing value added to the gross domestic product, are specifically highlighted as goals under SDG 9. A specific goal of SDG 9 is to increase access to Information Communication Technologies (ICTs) and provide universal, accessible, and affordable Internet access to the least developed nations by 2020|
|SDG 13. Climate action||SDG 13 is focused on addressing climate change, including stepping up efforts to both mitigate it and adapt to its effects. The Paris Agreement, which went into effect on 4 November 2016 and represents a significant turning point for international efforts to mitigate and adapt to climate change, is closely related to the implementation of SDG 13|
The introduction of the SG signals the beginning of a new era of energy industry dependability, availability, and efficiency, which will be beneficial for both the economy and the environment.
Several benefits will follow the creation and implementation of the SG, and one of these benefits is more effective electrical power distribution, which is made possible by using algorithmic approaches that consider current and future demand along with energy production and consumption. Energy providers will be better able to track and predict grid malfunctions and act rapidly as a result of their close monitoring of the grid components, thus minimizing power disruptions (e.g., power outages). The cost of power for the consumers can then be reduced as a result of the reduced numbers of operations and management expenses required for the utilities. Additionally, better energy management and distribution among the grid users can reduce peak demand, thus enabling energy providers to lower electricity prices further. In addition to the benefits listed above, the SG encourages integration of extensive renewable energy systems (e.g., solar, wind, and hydrogen resources). When the IoT is integrated, it becomes essential to integrate renewable energy systems for two reasons. The first is distributed (i.e., decentralized) power generation. The second is that, because customers contribute actively to the development of power consumption plans, they are more effectively integrated into and involved in the power distribution process. Finally, one critical issue that will require careful design and implementation is security against cyberattacks. To manage cyberattacks against safety-critical systems, engineers can develop and implement security frameworks with the aid of an SG. Based on recommendations in the referenced literature, the DO-178B aviation standard may be helpful.
Europe is the region with the highest energy consumption among manufacturers, accounting for approximately 25% of global energy consumption. This category includes two different business types: small and medium-sized enterprises (SMEs) and large enterprises, with SMEs making up 99.8% of all businesses in Europe. To satisfy customer demands and requirements while also increasing product value, manufacturers have recently been moving away from pure manufacturing via mass production toward a more flexible and personalized production approach for their customers. The evolution of manufacturer servitization is strongly correlated with the IPSS model, which is a hybrid dynamic system that combines the physical products and services of a single company. The following two factors have had an impact on successful adoption and use of IPSSs:
1. the long-term relationship between the supplier and the customers, which is essential because the services rely on two-way interactions between the customers and the supplier; and
2. the ICT that will support appropriate use of the available services.
The digital twin (DT), a term first used by Grieves (2015), has a potential market size of $15 billion by 2023. The DT has been rated strategically as one of the top ten technologies of 2018 (subject to predictions on future research trends). The DT, as one of the technological pillars of Industry 4.0, is a virtual representation of a valuable or physical asset, e.g., a service, product, or machine, with models that can alter behavior using real-time data and analytics supported by visualization tools and human-machine interfaces linked to the condition of the monitored object (e.g., a machine)[45,46].
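Before surveying the existing research, the DT idea can be illustrated with a minimal sketch of a twin that mirrors real-time telemetry and flags drift from expected behavior; the class, names, and thresholds below are purely illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of the digital-twin idea described above: a virtual model
# whose state is updated from real-time telemetry and which flags behaviour
# drifting from expectation. Names and numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class MachineTwin:
    machine_id: str
    expected_power_kw: float
    history: list = field(default_factory=list)

    def ingest(self, measured_power_kw: float) -> None:
        """Synchronise the twin with one telemetry sample."""
        self.history.append(measured_power_kw)

    def deviation(self) -> float:
        """Relative drift of recent consumption from the modelled value."""
        if not self.history:
            return 0.0
        recent = self.history[-10:]
        return abs(sum(recent) / len(recent) - self.expected_power_kw) / self.expected_power_kw

twin = MachineTwin("press_01", expected_power_kw=75.0)
for sample in (74.8, 75.3, 92.1, 91.7):
    twin.ingest(sample)
print(f"drift: {twin.deviation():.1%}")  # large drift could trigger maintenance
```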
With regard to the existing research, one study created a DT for real-time analysis of complex energy systems by simulating the grid states and then assessing the grid’s effectiveness using ML algorithms. A DT-based platform for smart city energy management has also been presented in the literature. To benchmark a building’s energy efficiency, smart meters are used to gather asset data that is then fed into DT virtual building models. Frameworks for energy demand management based on DTs have also been presented and discussed in the literature[49,50]. The literature investigation indicates that Industry 4.0 has greatly improved advanced simulation technologies and techniques such as DTs. Additionally, DTs are being incorporated into power distribution grids, which will make it easier to manage and regulate the energy supply network.
Modern society and the economy are dependent on energy. Much of the current industrial sustainability agenda is defined using eco-initiatives, eco-innovations, and eco-efficiency. Decarbonization, increased decentralization, and increased digitalization of energy systems are driving the current rapid changes in the energy sector. Larger consumers such as manufacturing firms are also involved in this effort, which is intended to create more sustainable energy production and consumption systems through implementation of service-oriented business models. Energy providers are altering their IPSS business models to add value to their offerings and increase their competitiveness and sustainability. IPSSs are business models for reliable product and service delivery that allow for cooperative product and service deployment and consumption. However, PSSs have now been integrated into a wide range of scientific disciplines, including business management, ICT, and manufacturing. Energy companies use cloud systems, IoT, and big data analytics to combine their centralized and distributed energy systems into a more complex system. To maintain the high fidelity of constant energy supply while ensuring that the electricity supply remains competitive and affordable, energy sales companies (ESCs) must also become more digitalized. The manufacturing sector (demand) and the energy sector (supply) must therefore work together more closely. Additionally, creative approaches will be required to adapt the electricity market to distributed energy generation while also enabling the industrial sector to switch to energy-efficient manufacturing. The main goals are to place the consumers at the center of the energy system and to ensure that they can then take advantage of the cutting-edge energy services that are available. Business models address how an organization defines its competitive strategy through the design of the goods or services that it provides to its market, how it sets its prices, how much it costs to produce its goods or services, how it sets itself apart from rival organizations through its value proposition, and how it connects its value chain to those of other organizations to form a value network. The history of PSSs, the current state-of-the-art, and potential directions for future PSS research were all presented by Meier et al. The authors also stated that the market proposition, customer requirements, and environmental impact are major defining factors. Reduced environmental impact, differentiation, attainment of competence, and production efficiency are some of the main advantages of PSS adoption.
The PSS value proposition strategy is regarded as one of the innovations that will be required to advance society toward more sustainable futures as a result of extensive research. There is growing interest in application of manufacturing scheduling as a way to reduce energy costs. One important but difficult situation is the case of scheduling of an industrial facility that is subject to real-time electricity pricing. The manufacturing sector faces new challenges that these contemporary products and systems are unable to address successfully. Energy-efficiency techniques and services that regulate electricity demand and optimize power consumption have thus been proposed to meet these challenges. To change the amount and/or timing of the energy consumption, industrial energy demand management (EDM) involves systematic actions being taken at the interface between the ESC and the industrial consumer. On the industrial consumer’s side, the EDM activities include responses to energy price signals and production adjustments that result in energy demand flexibility (EDF). The advantages of EDF include lower power consumption costs and greater room for intermittent renewable energy sources. Energy demand response (EDR) represents shifting of energy consumption to a different point in time or to different resources, in contrast to energy efficiency, which is intended to reduce overall energy consumption. Explicit and implicit schemes are the two main complementary approaches to EDR. In the first scheme, customers are rewarded specifically for their flexibility (e.g., free consultancy). An actual energy cost reduction is provided in the second scenario. Effective scheduling tools are essential for industrial EDM, where complex manufacturing processes are involved, because of the strong correlation between energy availability and energy price. Planning and running energy-efficient and energy-demand-flexible production systems necessitates in-depth understanding of the energy consumption behavior of the system components, the energy consumption of the production processes, and techniques to evaluate system design alternatives. In addition, a personalized real-time pricing (P-RTP) system architecture has been proposed. Only users who initiate the P-RTP would receive an equal distribution of the energy cost savings. As a result, the proposed system reduces energy costs significantly without sacrificing the welfare of the electricity users. It is concluded that all the approaches mentioned above have a common goal: the generation of flexible, adaptable, and practical production scheduling. The electricity grid typically provides the energy required to run industrial machinery. However, the ways in which the machinery uses materials and energy can be wasteful. Studies have therefore been conducted on energy-supply-oriented production planning. Similarly, Biel and Glock presented a literature review of decision support models for application to energy-efficient production planning and described how taking energy consumption into account during production planning can lead to more energy-efficient production processes. It is evident from the available literature that integrated approaches would be advantageous for all parties involved. Conventional electrical grids are based on functional integration of the energy producers and consumers. However, the new characteristics of SGs offer the possibility of development of sustainable, economical, and efficient energy supplies to the customers.
This model thus encourages the consumers to engage in the grid’s operation and management, and to contribute to the energy distribution process by producing, selling, or sharing through the grid. This means that they represent important components for the grid’s functionality and transform into “prosumers”, who optimize their economic-energy decisions based on their individual energy requirements. A prosumer is an energy user who produces energy from renewable sources such as photovoltaic arrays or wind turbines, rather than counting on the power plant’s supply alone, and shares this energy with the grid’s other consumers. Therefore, a prosumer can be recognized as a stakeholder who uses electrical power and also contributes to the grid by generating power at a certain point in time. The grid follows the bidirectional data flow and energy flows between the stakeholders, analysis of which may provide important information for the electrical grid function and for energy distribution optimization. Overall, prosumers, smart information, bidirectional communications, and advanced analytics are regarded as the basic components of the SG. The main characteristics of the prosumer’s profile can be summarized as (i) energy production; (ii) energy storage and sharing; (iii) energy consumption; and (iv) peer-to-peer transactions. However, prosumers are differentiated from consumers because they are considered to be an advanced version of the latter and provide significant advantages for both their energy management and the entire grid. Table 3 highlights the main differences between the two energy client profiles described above.
Consumer vs. prosumer energy profiles
|Consumer||Prosumer|
|Consumes energy||Consumes, produces, shares, stores, and sells energy|
|Has limited access to static data||Has access to real-time data|
|Deploys non-renewable energy sources||Deploys renewable energy sources|
|Physical presence is required for device utilization||Remote utilization of applications and services|
|Increased vulnerability to grid malfunctions and instabilities||Safer against grid malfunctions and instabilities|
|Limited to non-existent electricity cost management||Increased insight in energy patterns and support for self-management of consumption|
|Limited to non-existent environmental consciousness||Increased environmental consciousness|
In the literature, two generations of SGs have been proposed. Both generations share the same goals, i.e., improving power distribution architectures to support real-time operation and thus achieve greater resilience and adaptability for an SG within a smart city environment. However, the two generations are differentiated by the fact that Smart Grid 1.0 mainly focused on technological evolution of the existing infrastructure based on the recent technological advances from both Industry 4.0 and digital technologies, including ICT. In contrast, Smart Grid 2.0 is mainly focused on the involvement of the customers in a variety of SG operations, including power generation, storage, distribution, and marketing. Therefore, as part of the framework for Smart Grid 2.0, the use of a peer-to-peer (P2P) architecture is imperative. Furthermore, Smart Grid 2.0 also focuses on distribution automation (DA) and an AMI. The DA can provide a self-healing, digitally controlled network to ensure reliable electric power delivery. The concept also encompasses demand response, smart home automation, distributed generation, distributed storage, and automated control.
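The P2P trading that Smart Grid 2.0 promotes can be sketched as a simple matching between prosumer offers and consumer bids; the greedy, price-ordered matching below is an illustrative simplification under assumed quantities and prices, not a description of any deployed market.

```python
# Sketch of the peer-to-peer matching that Smart Grid 2.0 envisages between
# prosumers (with surplus) and consumers (with unmet demand). The greedy
# price-ordered matching and all figures are illustrative assumptions.

# (participant, kWh offered, asking price in EUR/kWh)
offers = [("prosumer_a", 5.0, 0.10), ("prosumer_b", 3.0, 0.08)]
# (participant, kWh wanted, bid price in EUR/kWh)
bids = [("home_1", 4.0, 0.12), ("home_2", 6.0, 0.09)]

trades = []
offers.sort(key=lambda o: o[2])     # cheapest energy first
bids.sort(key=lambda b: -b[2])      # highest willingness-to-pay first
for buyer, want, bid in bids:
    for i, (seller, have, ask) in enumerate(offers):
        if have <= 0 or ask > bid:
            continue
        qty = min(want, have)
        trades.append((seller, buyer, qty, (ask + bid) / 2))  # midpoint price
        offers[i] = (seller, have - qty, ask)
        want -= qty
        if want <= 0:
            break

for seller, buyer, qty, price in trades:
    print(f"{seller} -> {buyer}: {qty:.1f} kWh @ {price:.3f} EUR/kWh")
```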
It is stressed that Smart Grid 2.0 is regarded as the energy Internet (EI) from an information technology perspective. Although the current electrical grid is being transformed into an SG, open energy or EI is rapidly gaining popularity. An EI is an example of this trend, which is also being referred to as the Smart Grid 2.0 era. Consequently, the role of the intermediaries is eliminated. Furthermore, the second generation of SGs supports improved data acquisition and cybersecurity mechanisms to ease the development of more robust and precise decision-making tools. In Table 4, the key differences between Smart Grid 1.0 and Smart Grid 2.0 have been compiled and categorized based on their domain of interest.
|Domain||Smart Grid 1.0||Smart Grid 2.0|
|Energy production||Centralized or distributed production including renewable energy and battery storage|
|Energy transmission||Routing to geographically diverse locations||Routing to geographically diverse locations, integrated with asset management and fault prognosis and recovery|
|Energy distribution||Limited types of energy resources at the distribution level, including prosumer energy production and storage||Integration of heterogeneous energy sources, increased grid resilience and efficient asset management for distribution networks|
|Energy consumption||Self-management of energy consumption patterns utilizing smart meter data||Autonomous demand response initiatives for sustainable energy consumption patterns|
|Marketing||Intermediaries heavily involved||Self-energy generation and peer-2-peer trading are promoted (democratization of energy)|
|Operations||Energy trading and information flow between the providers and the customers||Energy trading and real-time information exchange over the Internet|
|Information transfer||Domains are connected to power and communication networks, whereas market and operations operate via two-way communication channels||The multi-layer architecture enables peer-2-peer communication over the Internet Protocol|
Among the challenges listed in the Introduction, it has also been proposed that the integration of blockchain technology, because of its advantages and its development during Industry 4.0, will enable engineers to provide additional functionalities in the SG. Therefore, in this section, the challenges and opportunities of blockchain technology are discussed, along with the steps required for implementation. Briefly, blockchain is based on the creation and constant updating of a distributed ledger following a common consensus policy. Essentially, the adoption of such a technology will make it easier for multiple stakeholders within a network (independently of the network’s size) to maintain a common track of the exchanges that take place within the network. The process above can easily be paralleled and implemented within an SG [Figure 5]. In the work of Dehalwar et al., the authors proposed a methodology for integration of distributed ledger technologies and techniques in an attempt to improve the management of an SG by building a trust management policy. Similarly, Guo et al. also investigated the topic of blockchain within the SG environment. Among the key findings of this literature review, the authors stated that this technology will enable additional managerial functionalities by focusing on the decentralization of SG management, and it may be extended beyond infrastructures to be used in electric vehicles as well. Figure 5 presents a generalized roadmap for a blockchain SG.
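A toy sketch of the distributed-ledger idea behind such a roadmap follows: each block of energy transactions is chained to its predecessor by a hash, so recorded trades cannot be silently altered. It is purely illustrative and not tied to any particular blockchain platform or consensus policy.

```python
# Toy sketch of the distributed-ledger idea: each block of energy
# transactions is chained to its predecessor by a hash, so no recorded
# trade can be altered without breaking the chain. Purely illustrative.
import hashlib
import json

def make_block(transactions: list, prev_hash: str) -> dict:
    body = {"transactions": transactions, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block([], prev_hash="0" * 64)
block_1 = make_block(
    [{"from": "prosumer_a", "to": "home_1", "kwh": 3.0, "eur": 0.30}],
    prev_hash=genesis["hash"],
)
# Tampering with block_1's transactions invalidates every later block,
# because the stored hash no longer matches the recomputed one.
print(block_1["hash"][:16], "links to", genesis["hash"][:16])
```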
One of the most important aspects of blockchain technology is smart contracts, which enable energy providers to encode market rules and, by extension, to ease automation of the pricing process at the individual prosumer and microgeneration levels. As a result, with the adoption of a smart contract policy, complex power distribution networks and their modules can be segmented with respect to direct participants to promote independent operation while also maintaining supervision and coordination without the need for third-party involvement. The SG concept is one of the most challenging aspects of the realization of the smart cities of the future. In general, AI has proven to be a useful tool for imitating human intelligence in machines and computers. Similarly, AI is useful in the energy sector, enabling processing of the vast amounts of data produced within an SG and coping with the grid’s increasing complexity. In particular, in the renewable energy (RE) sector, AI provides better monitoring, operation, maintenance, and storage of the electrical energy produced. Consequently, the contributions of AI to the RE sector can be summarized as follows, and are illustrated in Figure 6 [79,80]:
● Energy generation while considering supply volatility
● Grid stability and reliability
● Grid demand and weather forecasting
● Grid demand-side management
● Energy storage operations
● Market design and management
The key applications in RE systems are summarized as follows:
● Smart matching of supply and demand
● Intelligent storage
● Centralized control systems
● Smart microgrids
Future power systems can support the incorporation of renewable energy resources (RERs) by using SG technologies. With high penetration of distributed generation into power systems and advancements in ICT associated with customer data, the electric power grid can be transformed. AI-enabled smart energy markets can make it simpler to establish effective policy incentives and allow both consumers and utility companies to make decisions about their own consumption and generation in a way that reduces CO2 emissions. Designing automation technologies for heterogeneous devices that can learn to adjust their consumption in response to pricing signals while respecting user constraints, creating a means of communication between humans and the controllers, and creating simulation and prediction tools for consumers are among the challenges that face AI in electrical power systems. Intelligent tools and methods are required to manage the system appropriately and to make timely choices as the energy sector becomes more complex. Problems of classification, forecasting, networking, optimization, and control can be addressed using artificial neural networks (ANNs), reinforcement learning (RL), genetic algorithms (GAs), and multi-agent systems. Because of the lack of sufficiently sophisticated automatically controlled resources, many system operations are still conducted manually or with only the most basic automation. However, the introduction of AI into the grid system would lead to breakthroughs and provide new directions for the development of the electrical grid. Figure 7 illustrates the entire distributed SG concept with AI methods and the use of techniques to provide cost savings and perform optimization. To optimize the controllable loads, Atef and Eltawil described a GA for management of standalone microgrids (MGs).
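As a rough sketch of how a GA can schedule controllable loads, the toy example below evolves start hours for a handful of shiftable loads so as to flatten a microgrid’s daily peak. The load profiles, GA parameters, and fitness function are illustrative assumptions, not the specific method described by Atef and Eltawil.

```python
# Toy GA: each chromosome is a list of start hours for the shiftable loads;
# fitness rewards schedules that minimise the peak of the net profile.

import random

HOURS = 24
base_load = [2 + (1.5 if 17 <= h <= 21 else 0) for h in range(HOURS)]  # kW, evening peak
shiftable = [1.0, 1.5, 0.8, 1.2]  # kW; each load runs for one hour at its gene's start hour

def fitness(genes):
    profile = base_load[:]
    for load, hour in zip(shiftable, genes):
        profile[hour] += load
    return -max(profile)  # higher fitness = lower peak

def evolve(pop_size=40, generations=60, mut_rate=0.1):
    pop = [[random.randrange(HOURS) for _ in shiftable] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(shiftable))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < mut_rate:         # mutate one gene at random
                child[random.randrange(len(shiftable))] = random.randrange(HOURS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("start hours:", best, "| resulting peak:", -fitness(best), "kW")
```

With these numbers the GA reliably pushes the shiftable loads out of the 17:00-21:00 window, which is exactly the peak-flattening behaviour expected of demand-side optimization.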
AI techniques are now providing considerably more effective and powerful ways to deal with the limitations of conventional grid systems as a result of advancements in computing power and the ready availability of data storage. Additionally, several security issues have arisen as a result of the application of distributed computing algorithms in SGs. Threats including physical attacks and cyberattacks can result in infrastructure failure, privacy breaches, service disruption, and denial of service (DoS). As a result of the availability of sufficient customer data, computing power, and suitable training algorithms, AI has now developed to the point where it can even predict customer electricity prices in complex environments. A comparative analysis of such intelligent schemes that concentrated on deep learning (DL) and support vector regression (SVR) has been presented in the literature. In addition, demand response (DR) refers to the deviation of end users’ electricity consumption from their normal patterns in response to utility signals, or the use of financial incentives, to prevent the reliability of the power system from being jeopardized by peak demand. Recently, several research works available in the online literature have concentrated on the use of AI techniques to predict energy demand patterns [91,92]. To overcome the uncertainty in future electricity prices, Lu et al. proposed an hour-ahead DR algorithm that used RL and an ANN and also took user comfort and consumption behavior into account. The various energy management system types for use in SGs, together with the enabling techniques discussed in this paper, are summarized in Figure 8. It has become challenging for grids to control the demand for electricity for both household and industrial uses as a result of rapid population growth and the expansion of various industries. Short circuits and transformer failures are two issues caused by the increased demand for electricity at specific times of the day. To deliver electricity effectively, it is necessary to predict customer consumption patterns to address the problems of traditional grids for electricity transmission. To that end, the concept of the SG has been introduced. An SG can use its embedded intelligence to predict electricity demand and transmit electricity according to the anticipated demand. An SG can address many of the issues faced by traditional grids, including demand forecasting, reduction of power consumption, and reduction of the risks of short circuits, thus preventing the loss of lives and property. The true potential of SGs has been unlocked by technological advancements such as the IoT, fifth-generation wireless networks and beyond (5G), big data analytics, and ML. As shown in Figure 9, an SG has multiple stakeholders and can be connected to several other smart areas, including smart cities, buildings, vehicles, and power plants. Hardware, software, and services can be integrated into a distribution platform known as “energy-as-a-service” (EaaS). Such a solution should promote the use of decentralized supply sources and renewable energy, provide demand control and energy storage technologies, and maximize the equilibrium between supply and demand [99,100]. This business model can also be applied to the SG. Thanks to the SG, which enables bilateral communication and data transfer between electricity customers and the power grid, customers can use electricity and also generate, distribute, and trade resources with other consumers.
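To illustrate the SVR side of the DL-vs-SVR comparison mentioned above, the sketch below fits support vector regression to a synthetic hourly price series and forecasts the final day. The synthetic data and the feature choice (hour of day plus a 24-hour price lag) are assumptions made purely for illustration, not the setup of the cited study.

```python
# Day-ahead electricity price forecasting with SVR on synthetic data.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)  # 30 days of hourly prices
price = 50 + 15 * np.sin(2 * np.pi * (hours % 24) / 24) + rng.normal(0, 2, hours.size)

# Features: hour of day and the price 24 h earlier (a simple daily lag).
X = np.column_stack([(hours % 24)[24:], price[:-24]])
y = price[24:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:-24], y[:-24])        # train on all but the last day
pred = model.predict(X[-24:])      # forecast the held-out day
mae = np.mean(np.abs(pred - y[-24:]))
print(f"day-ahead MAE: {mae:.2f} EUR/MWh")
```

A DL variant of the same experiment would replace the SVR pipeline with, e.g., a recurrent network over the lagged window; the train/test split and error metric stay identical, which is what makes such comparative studies possible.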
Cost and capacity are the two factors that define energy production. However, a rise in price is often seen when demand exceeds certain capacity thresholds. Taking rising worldwide demand into account while using the available capacity in the best possible way at the lowest possible cost results in a trade-off between capacity, price, and consumer demand satisfaction. As a result, optimization of energy demand consumption, or smart usage, becomes crucial. Additionally, energy utilities are altering their business models by providing clients with energy-related services via energy service contracts, thus raising the value provided. The business model used in this approach, which is based on the current business-to-business (B2B) strategy, is the provision of an energy-oriented IPSS. The provision of energy can be regarded as a product service, with the contract acting as both a tangible good and a collection of intangible services. Through this collaboration, the energy provider and the customers gain mutually beneficial outcomes. To that end, a system architecture proposed in the literature is depicted in Figure 10. Figure 10. Proposed IPSS architecture for intelligent energy management. EDM: Energy demand management; ESC: energy sales company. In this architecture, an ESC has a particular pricing strategy with distinct tariffs and seeks to build an ecosystem that offers dynamic energy pricing. An EDM service, which is fed with the “production-consumption profiles” of each individual industry within the ecosystem, is necessary to accomplish that aim. An innovation point of the proposed methodology is the visualization and monitoring of the manufacturers’ infrastructures, or the virtual circuit, through a wireless sensor network (WSN) that allows the EDM to know when, where, and why energy is consumed. The production equipment is equipped with wireless data acquisition devices (DAQs) that transmit data to a cloud server. The primary function of this part of the service is to offer insights into the energy consumption of the machines and to connect that information to the energy consumption profile of the corresponding customer. As a result, data from the machines can be used to predict future energy demand, thus enabling identification of energy grid peaks. An alert is then sent to a specific group of high-load industries as soon as a peak has been identified, requesting that they shift their load to smooth the estimated peak. However, if a company disregards the alert, it will be charged according to a high-demand-period tariff until it follows the suggested instructions of the method. As a result, an adaptive scheduling algorithm is activated on the manufacturer’s side to guarantee grid stability and reduce the necessary energy consumption during peak demand periods. The data collected from the customers not only help ESCs produce better predictions of their customers’ needs and increase the system’s efficiency, but also help them cut costs by managing the energy demand directly. The IPSS system architecture shown in Figure 10 summarizes and presents the relevant steps. Mocanu et al. proposed an IPSS framework for the energy sector to create a smart service-based energy ecosystem, as illustrated in Figure 11. The proposed framework was validated by performing a real-world case study with a European electricity distribution company. The framework above is based on the utilization of energy services that can be used by both the energy suppliers and their customers.
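Returning to the EDM peak-identification and alerting step described above, a minimal sketch of that logic might look as follows; the grid threshold, tariffs, and forecast values are hypothetical placeholders, not figures from the proposed architecture.

```python
# Compare an aggregated day-ahead demand forecast against a grid threshold
# and flag the hours during which high-load customers should shift load.

PEAK_THRESHOLD_KW = 900
NORMAL_TARIFF, PEAK_TARIFF = 0.12, 0.30  # EUR/kWh, illustrative

forecast_kw = {h: 600 + 450 * (17 <= h <= 20) for h in range(24)}  # toy forecast

def peak_hours(forecast: dict, threshold: float) -> list:
    return sorted(h for h, kw in forecast.items() if kw > threshold)

def alert(hours: list) -> None:
    for h in hours:
        print(f"ALERT {h:02d}:00 forecast {forecast_kw[h]} kW > {PEAK_THRESHOLD_KW} kW "
              f"-> shift load or pay {PEAK_TARIFF} EUR/kWh (normal: {NORMAL_TARIFF})")

alert(peak_hours(forecast_kw, PEAK_THRESHOLD_KW))
```

The high-demand tariff acting as the fallback when an alert is ignored is what gives the scheme its incentive structure: shifting load is always cheaper than riding through the flagged hours.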
The proposed services are as follows: a power outage planner, an energy demand manager, and an Environmental Impact Calculator (EIC). To enable operation of these services, customers must provide real-time and historical consumption data. The data are gathered from smart meters that have been installed in the customers’ facilities. Depending on the embedded sensors, e.g., in the lighting virtual circuit, the cooling virtual circuit, or a machine virtual circuit, all the customer facilities are discretized into virtual circuits that represent some of the factory structures. Every customer has a smart meter that measures consumption in kWh, installed at the enterprise level and on every virtual circuit. The virtual circuit meters take their measurements in real time, while the enterprise-level meters take measurements every 15 min. The virtual circuit is a branch of the customer passport for each customer. The architecture is divided into two flows: one based on the information about electricity use from the smart meters, and another based on information about grid power outages obtained from interactions between the suppliers and the customers. To generate a picture of each customer’s environmental impact, the EIC service converts each customer’s consumption into emissions. Diagrams that show the quantities of the emissions from each customer can also be generated by this service. Through the power interruption planner, the supplier notifies the customers of any power outages caused by grid issues and plans maintenance in conjunction with the customers who will be affected. Every customer provides an estimate of how much energy will be used by each virtual circuit. When a submission is made, the supplier verifies that the combined consumption of all customers in each individual grid segment is less than the peak level. If it is not, the supplier alerts the customers of that section, suggests a schedule change, and asks them to recheck whether the power consumption has fallen below the peak level. If it has, the supplier does not offer any additional recommendations. The third service, called EDM, is composed of these actions. The web-based platform on which all services are built relies on communication between the ESC and the energy consumer-industrial SME customer. Because the services depend on customer input, ongoing customer involvement, and permission to install meters in their facilities, customer cooperation is essential. The energy IPSS comprises the electrical system of the energy client, the installed sensor systems, and an IT system infrastructure that gathers the data from the sensor systems. According to Gartner, 20.8 billion connected devices were in use worldwide in 2020, and this number was predicted to reach approximately 29 billion by the end of 2022, with approximately 18 billion of these devices being related to the IoT. The SG, as a major consumer of autonomous connected devices, not only uses millions of IoT devices but also processes enormous amounts of data to improve its understanding of the SG network. On a global scale, approximately 23 million smart meters were installed and used in 2019, and this number is projected to reach 188 million by 2025, representing a compound annual growth rate (CAGR) of 6.6% during the forecast period. To increase the effectiveness of power networks, SGs are being implemented worldwide.
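The virtual-circuit metering hierarchy described above can be sketched minimally as follows: per-minute circuit readings are rolled up into 15-min enterprise-level intervals and converted into emissions for the EIC service. The circuit names and the 0.4 kg CO2e/kWh grid factor are placeholders; real emission factors vary by country and year.

```python
# Roll real-time virtual-circuit readings into 15-min enterprise intervals
# and convert consumption into CO2-equivalent emissions (EIC step).

from collections import defaultdict

EMISSION_FACTOR = 0.4  # kg CO2e per kWh, placeholder grid average

# (minute, circuit, kWh consumed during that minute) -- toy real-time feed
readings = [(m, c, 0.05) for m in range(30) for c in ("lighting", "cooling", "machine_3")]

by_interval = defaultdict(float)
for minute, circuit, kwh in readings:
    by_interval[minute // 15] += kwh          # enterprise meter: 15-min buckets

for interval, kwh in sorted(by_interval.items()):
    print(f"interval {interval}: {kwh:.2f} kWh -> {kwh * EMISSION_FACTOR:.2f} kg CO2e")
```

Keeping the circuit identity in the raw feed (dropped here only for brevity) is what lets the EIC break the emission diagrams down per lighting, cooling, or machine circuit rather than per enterprise only.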
IoT and big data analytics are essential components of the SG. Therefore, it is imperative to integrate ML with IoT sensors and devices at various levels in the SG to enable analysis of the entire ecosystem and optimization of the important parameters, e.g., cost and energy resource balance, and ultimately to form an intelligent SG, as shown in Figure 12. Application of the IoT in the SG may fall into one or more of the following categories:
1. IoT is applied through smart devices for monitoring of the equipment status
2. IoT is applied to collect information from equipment through linked smart sensors and devices via a diverse range of communication tools
3. IoT is applied to supervise the SG across the application interfaces
The major issues that traditional electric grids and the conventional method of electricity distribution have faced have seen significant improvement as a result of the integration of SG technology. For handling high-dimensional data and ensuring efficiency in the data transactions throughout the energy supply chain, SG technology uses ML techniques and, more specifically, the subset of DL approaches. Additionally, through skillful control of consumer power consumption and by enabling energy-sharing facilities, SG technology places a strong emphasis on consumer satisfaction, thus transforming the consumer into a prosumer (producer and consumer). The SG still has some unresolved DR management problems and upcoming difficulties that must be resolved for improvement of future electricity requirements. This section discusses the various research challenges, the current problems, and future directions for SG technology. Some of the most prevalent challenges and the corresponding technical solutions are summarized in Table 5.
Research challenges and solutions in the smart grid field
|Challenge||Application||Solution||Advantages||Reference|
|Dynamic demand pricing and consumer energy consumption scheduling||Reinforcement learning (RL)-based multi-agent learning algorithm in microgrids||The consumer and the service provider both learn their strategies without any prior knowledge of the microgrid’s dynamics||Improved pricing options for the service provider and efficient energy consumption planning||[104]|
|Consumer energy consumption scheduling||Deep learning (DL), RL, convolutional neural networks (CNNs), Q-learning, and recurrent neural networks (RNNs) in consumption scheduling at small commercial and residential buildings||Calculating each appliance’s energy usage to address consumer energy consumption scheduling||Scheduling of consumer energy consumption||[105]|
|Cyber attacks||RNN and blockchain (hash and short signatures) for energy exchange||Fault-tolerant energy transactions identify intrusions during trading||Energy trading that is secure, fault-tolerant, protects privacy, and has high throughput||[106]|
|Equipment health monitoring||Wind turbine condition monitoring with DL||Unsupervised learning algorithms are more prevalent than supervised learning algorithms||Effective maintenance||[107]|
|Electric vehicle (EV) charging in SGs||DNN for charging of EVs||Makes real-time charging decisions based on historical data on connections||Reduced EV charging cost||[108]|
|Load forecast in SGs||ANN, CNN, and stochastic models for building energy prediction||Energy consumption prediction taking many dynamically changing parameters into account||Accurate energy load prediction||[109]|
The Smart Grid 2.0 architecture is composed of four layers, as illustrated in Figure 13, which also shows the fundamental elements of each layer. Specifically, the Smart Grid 2.0 architecture comprises (i) a physical component layer; (ii) a communication and control layer; (iii) an application layer; and (iv) a data analysis layer. The physical component layer embraces the sensory devices that enable data collection for real-time monitoring and decision-making, including IoT devices [e.g., smart meters, smart loads, smart sensors, phasor measurement units (PMUs), remote terminal units (RTUs), current transformers (CTs), and voltage transformers (VTs)]. Shared data and information can be used to perform real-time monitoring and control of Smart Grid 2.0 thanks to the communication infrastructure and Internet-based protocols. With a low level of assistance from a third-party service provider, the communication and control layer enables fast, dependable, and secure information exchange across the grid. The application layer of the future Smart Grid 2.0 comprises the services related to electric vehicles (EVs) for trading with charging stations through P2P energy transactions, microgeneration and large-scale power plants that incorporate renewable energy generation, battery storage, and automated, efficient, and dependable transmission/distribution networks. The data analysis layer aids in performing cloud-centric data management, trend analysis, and grid control at an underlying level. The Smart Grid 2.0 infrastructure includes data management, secure data routing, privacy preservation, and dependable storage. The AMI manages the data derived from all the above layers because it collects synchronized smart meter measurements from both consumer and prosumer locations and is connected to the communication network.
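Table 5 repeatedly pairs consumption scheduling with RL and Q-learning. A minimal tabular Q-learning sketch of that idea is given below: an agent learns at which hour to run a single deferrable appliance under a time-of-use price signal. The tariff, reward shaping, and hyperparameters are illustrative assumptions, not drawn from the cited works.

```python
# Tabular Q-learning for scheduling one deferrable appliance: the state is
# the hour of day, the actions are to run now or defer one hour.

import random

PRICE = [0.10] * 7 + [0.20] * 10 + [0.35] * 5 + [0.15] * 2   # EUR/kWh for hours 0-23
ACTIONS = ("defer", "run")
q = {(h, a): 0.0 for h in range(24) for a in ACTIONS}
alpha, gamma, eps = 0.1, 1.0, 0.2

for _ in range(3000):
    h, done = 0, False
    while not done:
        if random.random() < eps:                       # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(h, x)])
        if a == "run":
            reward, nxt, done = -PRICE[h], h, True      # pay once; episode ends
        else:
            nxt = (h + 1) % 24
            done = nxt == 0                             # deferred past midnight
            reward = -1.0 if done else 0.0              # penalty: appliance never ran
        future = 0.0 if done else max(q[(nxt, x)] for x in ACTIONS)
        q[(h, a)] += alpha * (reward + gamma * future - q[(h, a)])
        h = nxt

best_hour = max(range(24), key=lambda h: q[(h, "run")])
print(f"cheapest learned run hour: {best_hour} at {PRICE[best_hour]:.2f} EUR/kWh")
```

The appliance-level variants in Table 5 extend exactly this loop: a richer state (per-appliance consumption, occupancy, weather) and a function approximator (CNN/RNN) in place of the lookup table.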
An SG can be realized as a set of technologies that improve both existing and new electricity distribution networks through the provision of intelligence, which in turn promotes better usage of electrical energy and higher distribution efficiency. The application of detection, measurement, and control devices with suitable communication channels to all parties involved in the electricity production, transmission, distribution, and consumption processes makes the intelligent network possible, and allows users, operators, and automated devices to receive information about the status of the network and to respond dynamically to changes in its condition. Several benefits of the implementation of SGs are listed below, although certain limitations are also emerging [113,114].
Benefits:
● Increased reliability
● Operational efficiency and optimization of investment
● Network operation and planning
● Variations in the cost of energy
● SGs offer the potential to reduce electricity consumption by 30%
● Increased user responsiveness with personalized consumption
● Intelligent infrastructure technology for global energy distribution that could reduce greenhouse gas emissions from the energy sector
Limitations:
● Increased cost caused by the replacement of analog meters with more sophisticated smart meters
● Lack of regulatory standards for SG technology
● Lack of official technology documentation
In this manuscript, a literature review of the current state of the art in the field of electrical energy generation and distribution has been presented and discussed. As part of this discussion, important concepts such as Energy 4.0, Energy 5.0, and SGs have been described. Among the key findings of the literature investigation, it was evident that the modern societal, academic, and industrial worlds are converging toward the design, development, and implementation of sustainable solutions to reduce environmental pollution and their environmental footprint. More specifically, several government organizations have presented and continue to work on such initiatives, with the focus on long-term and incremental implementation of technologies and techniques that were mainly introduced and developed within the framework of Industry 4.0. The contribution of the manuscript also extended to a detailed discussion of the technical frameworks required for the integration of SGs into modern society and industry. These frameworks are based on previous implementations of smart and intelligent energy distribution management systems that were mainly used in the manufacturing domain. These implementations have created information islands that can enable the development of a broader SG by providing additional data. However, given that the frameworks above can be elaborated further, the creation of a wider SG that goes beyond the industrial sector is feasible. Following consideration of the recent technological advances discussed in the preceding paragraphs, along with the opportunities arising in the field, future work will be focused on further expansion of the existing frameworks to allow them to work in collaboration and form an industrial SG. Then, connection of the industrial SG with the academic SG proposed by the authors in a recent research work (not available online to date) will follow, with the aim of expanding the SG and the functionalities provided to the energy producers.
Furthermore, more experimental tests will be required to combine the energy demands of clients on different tiers, to distribute the electrical energy more efficiently, and, wherever possible, to minimize the load on the grid and support the use of electrical energy derived from alternative power sources. One of the most important problems faced by current grids is the lack of a suitable infrastructure to transform them into SGs. In this context, further elaboration of the existing frameworks will be required.
Authors’ contributions
Conceptualization, supervision: Mourtzis D
Conceptualization, writing-reviewing and editing, methodology, software: Angelopoulos J
Conceptualization, writing-reviewing and editing, software: Panopoulos N
Availability of data and materials
Not applicable.
Financial support and sponsorship
None.
Conflicts of interest
All authors declared that there are no conflicts of interest.
Ethical approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Copyright
© The Author(s) 2022.
1. Jnr B, Abdul Majid M, Romli A. Emerging case oriented agents for sustaining educational institutions going green towards environmental responsibility. J Syst Inf Technol 2019;21:186-214.DOI 2. Hrovatin N, Cagno E, Dolšak J, Zorić J. How important are perceived barriers and drivers versus other contextual factors for the adoption of energy efficiency measures: an empirical investigation in manufacturing SMEs. J Clean Prod 2021;323:129123.DOI 3. European Commission, directorate-general for climate action. Going climate-neutral by 2050: a strategic long-term vision for a prosperous, modern, competitive and climate-neutral EU economy. Available from: https://data.europa.eu/doi/10.2834/02074 [Last accessed on 26 Dec 2022]. 4. Center for Climate and Energy Solutions. Getting to zero: A U.S. climate agenda. Available from: https://www.c2es.org/site/assets/uploads/2019/12/C2ES-Getting-to-Zero-summary-report.pdf [Last accessed on 26 Dec 2022]. 5. Chen B, Fæste L, Jacobsen R, Teck Kong M, Dylan Lu D, Palme T. How China can achieve carbon neutrality by 2060. Available from: https://www.bcg.com/publications/2020/how-china-can-achieve-carbon-neutrality-by-2060. [Last accessed on 26 Dec 2022]. 6. Mourtzis D, Angelopoulos J, Panopoulos N. Digital manufacturing: the evolution of traditional manufacturing toward an automated and interoperable Smart Manufacturing Ecosystem. The Digital Supply Chain. Elsevier; 2022. pp. 27-45.DOI 7. Leng J, Sha W, Wang B, et al. Industry 5.0: prospect and retrospect. J Manuf Syst 2022;65:279-95.DOI 8. Huang S, Wang B, Li X, Zheng P, Mourtzis D, Wang L. Industry 5.0 and Society 5.0 - comparison, complementation and co-evolution. J Manuf Syst 2022;64:424-8.DOI 9. Mourtzis D, Boli N, Alexopoulos K, Różycki D. A framework of energy services: from traditional contracts to product-service system (PSS). Procedia CIRP 2018;69:746-51.DOI 10. IEA. World energy outlook 2020. Available from: https://www.iea.org/reports/world-energy-outlook-2020 [Last accessed on 26 Dec 2022]. 11. European Commission. Gas and electricity market reports 2022. Available from: https://energy.ec.europa.eu/data-and-analysis/market-analysis_en [Last accessed on 26 Dec 2022]. 12. Eck NJ, Waltman L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 2010;84:523-38.DOI 13. Cin E, Carraro G, Volpato G, Lazzaretto A, Danieli P.
A multi-criteria approach to optimize the design-operation of Energy Communities considering economic-environmental objectives and demand side management. Energy Convers Manag 2022;263:115677.DOI 14. Sachs J, Kroll C, Lafortune G, Fuller G, Woelm F. Sustainable development report 2021. Cambridge University Press; 2021. 15. Mourtzis D. Towards the 5th industrial revolution: a literature review and a framework for process optimization based on big data analytics and semantics. J Mach Eng 2021;21:5-39.DOI 16. IEA. Industry direct CO2 emissions in the Sustainable Development Scenario, 2000-2030. Available from: https://www.iea.org/data-and-statistics/charts/industry-direct-co2-emissions-in-the-sustainable-development-scenario-2000-2030 [Last accessed on 26 Dec 2022]. 17. Nagasawa T, Pillay C, Beier G, et al. Accelerating clean energy through industry 4.0 manufacturing the next revolution. Available from: https://www.unido.org/sites/default/files/2017-08/REPORT_Accelerating_clean_energy_through_Industry_4.0.Final_0.pdf [Last accessed on 26 Dec 2022]. 18. Singh R, Akram SV, Gehlot A, Buddhi D, Priyadarshi N, Twala B. Energy system 4.0: digitalization of the energy sector with inclination towards sustainability. Sensors 2022;22:6619.DOI 19. Carayannis EG, Draper J, Bhaneja B. Towards fusion energy in the industry 5.0 and society 5.0 context: call for a global commission for urgent action on fusion energy. J Knowl Econ 2021;12:1891-904.DOI 20. Mourtzis D, Vlachou E. A cloud-based cyber-physical system for adaptive shop-floor scheduling and condition-based maintenance. J Manuf Syst 2018;47:179-98.DOI 21. Strielkowski W, Rausser G, Kuzmin E. Digital revolution in the energy sector: effects of using digital twin technology. In: Kumar V, Leng J, Akberdina V, Kuzmin E, editors. Digital transformation in industry. Cham: Springer International Publishing; 2022. pp. 43-55.DOI 22. Lang M. From industry 4.0 to energy 4.0. future business, models and legal relations. Available from: http://wise.co.th/wise/References/Creative_Economy/From_Industry_4_to_Energy_4.pdf [Last accessed on 26 Dec 2022]. 23. Vineetha CP, Babu CA. Smart grid challenges, issues and solutions. In 2014 International Conference on Intelligent Green Building and Smart Grid (IGBSG); 2014. pp. 1-4.DOI 24. Kupzog F, King R, Stefan M. The role of IT in energy systems: the digital revolution as part of the problem or part of the solution. Elektrotech Inftech 2020;137:341-5.DOI 25. Mourtzis D, Angelopoulos J, Panopoulos N. A literature review of the challenges and opportunities of the transition from industry 4.0 to society 5.0. Energies 2022;15:6276.DOI 26. Moreno Escobar JJ, Morales Matamoros O, Tejeida Padilla R, et al. A comprehensive review on smart grids: challenges and opportunities. Sensors 2021;21:6978.DOI 27. Mourtzis D, Angelopoulos J, Panopoulos N. A collaborative approach on energy-based offered services: energy 4.0 ecosystems. Procedia CIRP 2021;104:1638-43.DOI 28. Mourtzis D, Angelopoulos J, Panopoulos N. Development of a PSS for smart grid energy distribution optimization based on digital twin. Procedia CIRP 2022;107:1138-43.DOI 29. Jacobson MZ. 100% Clean, Renewable Energy and Storage for Everything. New York: Cambridge University Press; 2020. pp. 427. Available from: https://web.stanford.edu/group/efmh/jacobson/WWSBook/WWSBook.html [Last accessed on 26 Dec 2022]. 30. Greer C, Wollman D A, Prochaska D, et al. NIST framework and roadmap for smart grid interoperability standards, release 3.0. 2014.DOI 31. Heffner G. 
Smart grid-smart customer policy needs. Available from: https://www.ctc-n.org/sites/www.ctc-n.org/files/resources/sg_cust_pol.pdf [Last accessed on 26 Dec 2022]. 32. Spelman M. How will the digital revolution transform the energy sector? Available from: https://www.weforum.org/agenda/2016/03/how-will-the-digital-revolution-transform-the-energy-sector/ [Last accessed on 26 Dec 2022]. 33. Stem Inc. Reduce global adjustment charges with the world leader in energy storage. Available from: https://www.convergentep.com/canada-guarantee/ [Last accessed on 26 Dec 2022]. 34. Angelopoulos J, Mourtzis D. An intelligent product service system for adaptive maintenance of engineered-to-order manufacturing equipment assisted by augmented reality. Appl Sci 2022;12:5349.DOI 35. Song EY, FitzPatrick GJ, Lee KB, Griffor E. A methodology for modeling interoperability of smart sensors in smart grids. IEEE Trans Smart Grid 2022;13:555-63.DOI 36. Alonso M, Amaris H, Alcala D, Florez R DM. Smart sensors for smart grid reliability. Sensors 2020;20:2187.DOIPubMed PMC 37. United Nations. Transforming our world: the 2030 agenda for sustainable development. Available from: https://sdgs.un.org/2030agenda [Last accessed on 26 Dec 2022]. 38. United Nations. Sustainable development knowledge platform. Available from: https://sustainabledevelopment.un.org [Last accessed on 26 Dec 2022]. 39. Grijpink F, Kutcher E, Ménard A, et al. Connected world: an evolution in connectivity beyond the 5G revolution. Available from: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/connected-world-an-evolution-in-connectivity-beyond-the-5g-revolution [Last accessed on 26 Dec 2022]. 40. Hui H, Ding Y, Shi Q, Li F, Song Y, Yan J. 5G network-based internet of things for demand response in smart grid: a survey on application potential. Appl Energy 2020;257:113972.DOI 41. Thollander P, Paramonova S, Cornelis E, et al. International study on energy end-use data among industrial SMEs (small and medium-sized enterprises) and energy end-use efficiency improvement opportunities. J Clean Prod 2015;104:282-96.DOI 42. Smartgrid.gov. US department of energy’s office of electricity delivery and energy reliability. Available from: https://www.smartgrid.gov/ [Last accessed on 26 Dec 2022]. 43. Strbac G. Demand side management: benefits and challenges. Energy Policy 2008;36:4419-26.DOI 44. Wu Z, Xia X. A portfolio approach of demand side management. IFAC-PapersOnLine 2017;50:171-6.DOI 45. Karanjkar N, Joglekar A, Mohanty S, Prabhu V, Raghunath D, Sundaresan R. Digital twin for energy optimization in an SMT-PCB assembly line. In 2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS); 2018. pp. 85-89.DOI 46. Zhou M, Yan J, Feng D. Digital twin and its application to power grid online analysis. CSEE J Power Energy Syst 2019;5:391-8.DOI 47. Stavropoulos P, Mourtzis D. Digital twins in industry 4.0. design and operation of production networks for mass personalization in the era of cloud technology. Elsevier; 2022. pp. 277-316.DOIPubMed PMC 48. Francisco A, Mohammadi N, Taylor JE. Smart city digital twin-enabled energy management: toward real-time urban building energy benchmarking. J Manage Eng 2020:36.DOI 49. Srinivasan RS, Manohar B, Issa RRA. Urban building energy CPS (UBE-CPS): real-time demand response using digital twin. In: Anumba CJ, Roofigari-esfahan N, editors. Cyber-Physical Systems in the Built Environment. Cham: Springer International Publishing; 2020. pp. 309-22.DOI 50. 
Yu W, Patros P, Young B, Klinac E, Walmsley TG. Energy digital twin technology for industrial energy management: classification, challenges and future. Renew Sust Energ Rev 2022;161:112407.DOI 51. Mourtzis D, Milas N, Athinaios N. Towards machine shop 4.0: a general machine model for CNC machine-tools through OPC-UA. Procedia CIRP 2018;78:301-6.DOI 52. Mourtzis D, Angelopoulos J, Panopoulos N. Design and development of an IoT enabled platform for remote monitoring and predictive maintenance of industrial equipment. Procedia Manuf 2021;54:166-71.DOI 53. Onile AE, Machlev R, Petlenkov E, Levron Y, Belikov J. Uses of the digital twins concept for energy services, intelligent recommendation systems, and demand side management: a review. Energy Rep 2021;7:997-1015.DOI 54. Hamwi M, Lizarralde I. A review of business models towards service-oriented electricity systems. Procedia CIRP 2017;64:109-14.DOI 55. Meier H, Völker O, Funke B. Industrial product-service systems (IPS2): paradigm shift by mutually determined products and services. Int J Adv Manuf Technol 2011;52:1175-91.DOI 56. George G, Bock J. A.The business model book: design, build and adapt business ideas that drive business growth. 1st ed. United Kingdom: Pearson; 2017. 57. Annarelli A, Battistella C, Nonino F. Product service system: a conceptual framework from a systematic review. J Clean Prod 2016;139:1011-32.DOI 58. Catulli M, Cook M, Potter S. Consuming use orientated product service systems: a consumer culture theory perspective. J Clean Prod 2017;141:1186-93.DOI 59. Zhang H, Zhao F, Sutherland JW. Energy-efficient scheduling of multiple manufacturing factories under real-time electricity pricing. CIRP Ann 2015;64:41-4.DOI 60. Mourtzis D, Angelopoulos J, Panopoulos N. A collaborative approach on energy-based offered services: energy 4.0 ecosystems. Procedia CIRP 2021;104:1638-43.DOI 61. Unterberger E, Eisenreich F, Reinhart G. Design principles for energy flexible production systems. Procedia CIRP 2018;67:98-103.DOI 62. Tsaousoglou G, Efthymiopoulos N, Makris P, Varvarigos E. Personalized real time pricing for efficient and fair demand response in energy cooperatives and highly competitive flexibility markets. J Mod Power Syst Clean Energy 2019;7:151-62.DOI 63. Mourtzis D, Vlachou E, Milas N, Dimitrakopoulos G. Energy consumption estimation for machining processes based on real-time shop floor monitoring via wireless sensor networks. Procedia CIRP 2016;57:637-42.DOI 64. Keller F, Reinhart G. Systematic approach for energy-supply-orientated production planning. Int J Ind Manuf Eng 2015;9:2417-22.DOI 65. Biel K, Glock CH. Systematic literature review of decision support models for energy-efficient production planning. Comput Ind Eng 2016;101:243-59.DOI 66. Zhang Q, Grossmann IE. Planning and scheduling for industrial demand side management: advances and challenges. In: Martín M, editor. Alternative energy sources and technologies. Springer, Cham; 2016.DOI 67. Curiale M. From smart grids to smart city. 2014 Saudi Arabia Smart Grid Conference (SASG); 2014, pp. 1-9.DOI 68. Bellekom S, Arentsen M, van Gorkum K. Prosumption and the distribution and supply of electricity. Energ Sustain Soc 2016:6.DOI 69. Anthony B, Petersen SA, Ahlers D, Krogstie J, Livik K. Big data-oriented energy prosumption service in smart community districts: a multi-case study perspective. Energy Inform 2019;2:36.DOI 70. Hussain HM, Narayanan A, Nardelli PHJ, Yang Y. What is energy internet? IEEE Access 2020;8:183127-45.DOI 71. Cao J, Yang M. 
Energy internet - towards smart grid 2.0. 2013 Fourth International Conference on Networking and Distributed Computing, 2013, pp. 105-110.DOI 72. Shahinzadeh H, Moradi J, Gharehpetian GB, Nafisi H, Abedi M. Internet of energy (IoE) in smart power systems. 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI); 2019. pp. 627-36.DOI 73. Kabalci E, Kabalci Y. Introduction to smart grid and internet of energy systems. From smart grid to internet of energy. Elsevier; 2019. pp. 1-62.DOI 74. Gopstein A, Nguyen C, O'Fallon C, Hastings N, Wollman D. NIST framework and roadmap for smart grid interoperability standards, release 4.0. Department of commerce. National Institute of Standards and Technology; 2021.DOI 75. Dehalwar V, Kolhe ML, Deoli S, Jhariya MK. Blockchain-based trust management and authentication of devices in smart grid. Clean Eng Technol 2022;8:100481.DOI 76. Guo Y, Wan Z, Cheng X. When blockchain meets smart grids: a comprehensive survey. High-Confidence Comput 2022;2:100059.DOI 77. Yapa C, de Alwis C, Liyanage M, Ekanayake J. Survey on blockchain for future smart grids: technical aspects, applications, integration challenges and future research. Energy Rep 2021;7:6530-64.DOI 78. Serban AC, Lytras MD. Artificial intelligence for smart renewable energy sector in europe - smart energy infrastructures for next generation smart cities. IEEE Access 2020;8:77364-77.DOI 79. Das UK, Tey KS, Seyedmahmoudian M, et al. Forecasting of photovoltaic power generation and model optimization: a review. Renew Sust Energ Rev 2018;81:912-28.DOI 80. Ssekulima EB, Anwar MB, Al Hinai A, El Moursi MS. Wind speed and solar irradiance forecasting techniques for enhanced renewable energy integration with the grid: a review. IET Renew Power Gener 2016;10:885-989.DOI 81. Bhandari B, Lee K, Lee G, Cho Y, Ahn S. Optimization of hybrid renewable energy power systems: a review. Int J Precis Eng Manuf-Green Tech 2015;2:99-112.DOI 82. Dawoud SM, Lin X, Okba MI. Hybrid renewable microgrid optimization techniques: a review. Renew Sust Energ Rev 2018;82:2039-52.DOI 83. Javaid N, Hafeez G, Iqbal S, Alrajeh N, Alabed MS, Guizani M. Energy efficient integration of renewable energy sources in the smart grid for demand side management. IEEE Access 2018;6:77077-96.DOI 84. Ramos C, Liu C. AI in power systems and energy markets. IEEE Intell Syst 2011;26:5-8.DOI 85. Neves D, Pina A, Silva CA. Comparison of different demand response optimization goals on an isolated microgrid. Sustain Energy Technol Assess 2018;30:209-15.DOI 86. Pearson IL. Smart grid cyber security for Europe. Energy Policy 2011;39:5211-8.DOI 87. Atef S, Eltawil A. A comparative study using deep learning and support vector regression for electricity price forecasting in smart grids. In Proceedings of the IEEE 6th International Conference on Industrial Engineering and Applications (ICIEA); 2019, pp. 603-7.DOI 88. Qdr Q. Benefits of demand response in electricity markets and recommendations for achieving them. Available from: http://www.madrionline.org/wp-content/uploads/2017/02/doe_2006_dr_benefitsrecommendations.pdf [Last accessed on 26 Dec 2022]. 89. Ahmad T, Chen H. Utility companies strategy for short-term energy demand forecasting using machine learning based models. Sustain Cities Soc 2018;39:401-17.DOI 90. Ahmad T, Chen H, Shah WA. Effective bulk energy consumption control and management for power utilities using artificial intelligence techniques under conventional and renewable energy resources. 
Int Journal Electr Power Energy Syst 2019;109:242-58.DOI 91. Lu R, Hong SH, Yu M. Demand response for home energy management using reinforcement learning and artificial neural network. IEEE Trans Smart Grid 2019;10:6629-39.DOI 92. Alazab M, Khan S, Krishnan SSR, Pham Q, Reddy MPK, Gadekallu TR. A multidirectional LSTM model for predicting the stability of a smart grid. IEEE Access 2020;8:85454-63.DOI 93. Al-Fuqaha A, Guizani M, Mohammadi M, Aledhari M, Ayyash M. Internet of things: a survey on enabling technologies, protocols, and applications. IEEE Commun Surveys Tuts 2015;17:2347-76.DOI 94. Mourtzis D, Angelopoulos J, Panopoulos N. Smart Manufacturing and tactile internet based on 5G in industry 4.0: challenges, applications and new trends. Electronics 2021;10:3175.DOI 95. Wang K, Wang Y, Hu X, et al. Wireless big data computing in smart grid. IEEE Wirel Commun 2017;24:58-64.DOI 96. Deepa N, Pham Q, Nguyen DC, et al. A survey on blockchain for big data: approaches, opportunities, and future directions. Future Gener Comput Syst 2022;131:209-26.DOI 97. Mourtzis D. Simulation in the design and operation of manufacturing systems: state of the art and new trends. Int J Prod Res 2020;58:1927-49.DOI 98. Bornstein J. Energy-as-a-service, the lights are on. Is anyone home? Deloitte UK 2019. Available from: https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/energy-resources/deloitte-uk-energy-as-a-service-report-2019.pdf [Last accessed on 26 Dec 2022]. 99. Mourtzis D, Angelopoulos J, Panopoulos N. A survey of digital B2B platforms and marketplaces for purchasing industrial product service systems: a conceptual framework. Procedia CIRP 2021;97:331-6.DOI 100. Matsas M, Pintzos G, Kapnia A, Mourtzis D. An integrated collaborative platform for managing product-service across their life cycle. Procedia CIRP 2017;59:220-6.DOI 101. Mourtzis D, Boli N, Xanthakis E, Alexopoulos K. Energy trade market effect on production scheduling: an industrial product-service system (IPSS) approach. Int J Comput Integr Manuf 2021;34:76-94.DOI 102. Gartner Inc. Gartner says 6.4 billion connected ‘things’ will be in use in 2016, up 30 percent from 2015. Available from: https://www.gartner.com/en/newsroom/press-releases/2015-11-10-gartner-says-6-billion-connected-things-will-be-in-use-in-2016-up-30-percent-from-2015 [Last accessed on 26 Dec 2022]. 103. Navigant Consulting Inc. Navigating the energy transformation: building a competitive advantage for energy cloud 2.0. Available from: https://guidehouse.com/-/media/www/site/events/2016/pdfs/engerati-webinar-gridedge-december-8-2016-final-up.pdf [Last accessed on 26 Dec 2022]. 104. Kim B, Zhang Y, van der Schaar M, Lee J. Dynamic pricing and energy consumption scheduling with reinforcement learning. IEEE Trans Smart Grid 2016;7:2187-98.DOI 105. Mocanu E, Mocanu DC, Nguyen PH, et al. On-line building energy optimization using deep reinforcement learning. IEEE Trans Smart Grid 2019;10:3698-708.DOI 106. Ferrag MA, Maglaras L. DeepCoin: A novel deep learning and blockchain-based energy exchange framework for smart grids. IEEE Trans Eng Manage 2020;67:1285-97.DOI 107. Helbing G, Ritter M. Deep learning for fault detection in wind turbines. Renew Sust Energy Rev 2018;98:189-98.DOI 108. Milas N, Mourtzis D, Tatakis E. A decision-making framework for the smart charging of electric vehicles considering the priorities of the driver. Energies 2020;13:6120.DOI 109. Mocanu E, Nguyen PH, Gibescu M, Kling WL. Deep learning for estimating building energy consumption. 
Sustain Energy Grids Netw 2016;6:91-9.DOI 110. Manoj P, Kumar YB, Gowtham M, Vishwas DB, Ajay AV. Internet of things for smart grid applications. In advances in smart grid power system. Academic Press; 2021. pp. 159-90.DOI 111. Guerraoui R, Petit F, editors. Stabilization, safety, and security of distributed systems: 11th International Symposium, SSS 2009, Lyon, France. Proceedings. Springer; 2009. 112. Winter TC. The advantages and challenges of the blockchain for smart grids. Available from: https://repository.tudelft.nl/islandora/object/uuid:e4818a29-3344-4ae1-bd26-97b5a06403ae [Last accessed on 26 Dec 2022]. 113. IEA. Smart grids. Available from: https://www.iea.org/reports/smart-grids [Last accessed on 26 Dec 2022]. 114. Mourtzis D, Panopoulos N, Angelopoulos J, Wang B, Wang L. Human centric platforms for personalized value creation in metaverse. J Manuf Syst 2022;65:653-9.DOI
https://www.oaepublish.com/gmo/article/view/5342
I joined a conversation this week (on Feb 26) called the “GreenGov Dialogue on Demand Response” that was hosted by the White House Council on Environmental Quality (CEQ). It brought together leaders from government, the private sector, non-profits and academia to identify opportunities to reduce our nation’s periods of peak energy demand, promote a more stable electric grid, and help the Federal Government save energy and money in its operations. Among those who engaged in the dialogue were Terry Boston, the CEO of PJM Interconnection, which is a regional transmission company in Pennsylvania, New Jersey and several other states; and Doyle Beneby, the CEO of CPS Energy, which serves San Antonio, Texas. But of course, when I told my family how I’d spent my morning, the first question I got wasn’t “who did you see,” but rather “what’s demand/response???” I’ll try to answer that here and explain why it could become something that’s awesome. To operators of the electric grid, “demand/response” is about making the huge, integrated electric system that delivers power to millions of consumers and businesses more reliable and cost-effective. To grid operators, demand/response means that at any moment, in addition to turning on another power plant in order to avoid a system-wide failure, they also have the option of getting customers to cut back on power use. Operators have had this option for decades, but it’s been crude, slow and reserved only for emergencies; it’s usually put into motion by the utility making a phone call to a factory owner who had agreed in advance to slow operations in the event of a crisis. In the last few years, however, grid operators have become excited by the possibility of fine-tuning the electric grid in real time, as 90 million U.S. households have subscribed to a fast broadband connection and things like smart, connected thermostats have been invented. Rather than something reserved for an emergency, demand/response using today’s advanced information and communications technology (ICT) can make day-in and day-out grid operations more efficient and resilient and can enable things like electric-vehicle charging stations and home solar panels to be connected to the grid without causing problems for the overall system. But the reason customers might come to see demand/response as awesome is different. For customers, demand/response is about changing their relationship to how they use electricity. Now that they can “see” their thermostat on their smart phone, just like they can see their TV recordings on their DVR, customers have more control. They even can get paid for turning things off in their home or business if they do it at the right time, and if their grid operator has the right demand/response program available. For instance, Terry Boston, the PJM CEO, shared how demand/response had unleashed creative thinking by some of his larger enterprise customers. Drexel University, in Philadelphia, has realized that it can use the millions of books in its campus library as a “thermal storage” device, which the university has been able to turn into an “asset” that generates revenue via payments from PJM. Doyle Beneby, the CPS Energy CEO, shared how mass-market customers are beginning to engage differently with their electricity use because of CPS’ demand/response program. His company has found that its customers fall into four groups according to their interests. The largest group is customers who just want to save money without thinking about it.
They want the demand/response process to be automated for them. Another category, the “survivalists,” comprises those who want to be left alone and put their highest value on the ability to control their use. A third segment is environmentally oriented, and its members are very motivated to reduce their carbon footprint. The last group in San Antonio comprises technophiles who love their gadgets. But all of CPS’ customers, except perhaps the survivalists, were most motivated by being able to compare their own use with what was the norm in their neighborhood. How does this all tie back to why Federal agencies are interested? There are several ways. Agencies can take advantage of demand/response programs to not only become more efficient but also to be paid for cutting back on power use, just like Drexel University. This will certainly help government budgets, but it also will enable Federal agencies to “lead by example,” demonstrating and educating by their behavior how customers in the private sector can change how they use electricity. Also, Federal agencies, in their role as large “enterprise-scale” customers with a lot of buying power, can have an impact on electric utilities by demonstrating there is market demand for state-of-the-art demand/response programs. A demand/response program that routinely pays customers for cutting back on their power use at the right time is not yet available to most consumers. So it’s not really awesome, yet. Federal agencies might be able to change that simply by “leading by example.” Finally, there also are policy changes that Federal policymakers might advocate, such as allowing utilities to provide modest, periodic payments to customers just for installing the sensors and communications equipment needed to participate in a demand/response program. In the terminology of the electric industry, this is called “paying for capacity,” and many state public utility commissions don’t allow for creation of a demand/response capacity market. In short, the ability to use ICT to control and reduce household and commercial electric loads nearly instantaneously, on command, is changing the demand for electricity and creating a desire for more control and for the ability to participate in new ways in the electricity marketplace. Demand/response is one of the ways in which the electric industry is evolving in response. In time, it could be awesome!
https://www.itic.org/news-events/techwonk-blog/greengov-dialogue-on-demand-response-the-future-of-awesome
A smart grid is a modernized electrical grid that uses analogue or digital information and communications technology to gather and act on information, such as information about the behaviours of suppliers and consumers, in an automated fashion to improve the efficiency, reliability, economics, and sustainability of the production and distribution of electricity. Smart grid policy is organized in Europe as the Smart Grid European Technology Platform. Policy in the United States is described in 42 U.S.C. ch. 152, subch. IX § 17381. The smart grid will make use of technologies, such as state estimation, that improve fault detection and allow self-healing of the network without the intervention of technicians. This will ensure a more reliable supply of electricity and reduced vulnerability to natural disasters or attack. Although multiple routes are touted as a feature of the smart grid, the old grid also featured multiple routes. Initial power lines in the grid were built using a radial model; later, connectivity was guaranteed via multiple routes, referred to as a network structure. Next-generation transmission and distribution infrastructure will be better able to handle possible bidirectional energy flows, allowing for distributed generation such as from photovoltaic panels on building roofs, but also the use of fuel cells, charging to/from the batteries of electric cars, wind turbines, pumped hydroelectric power, and other sources. Classic grids were designed for one-way flow of electricity, but if a local sub-network generates more power than it is consuming, the reverse flow can raise safety and reliability issues. The total load connected to the power grid can vary significantly over time. Although the total load is the sum of many individual choices by the clients, the overall load is not necessarily stable or slowly varying; for example, if a popular television program starts, millions of televisions will draw current almost instantly. Traditionally, to respond to a rapid increase in power consumption faster than the start-up time of a large generator, some spare generators are put on a dissipative standby mode. A smart grid may warn all individual television sets, or another larger customer, to reduce the load temporarily (to allow time to start up a larger generator) or continuously (in the case of limited resources). Using mathematical prediction algorithms, it is possible to predict how many standby generators need to be used to reach a certain failure rate. In the traditional grid, the failure rate can only be reduced at the cost of more standby generators. In a smart grid, the load reduction by even a small portion of the clients may eliminate the problem.
Peak curtailment/leveling and time of use pricing
To reduce demand during the high-cost peak usage periods, communications and metering technologies inform smart devices in the home and business when energy demand is high and track how much electricity is used and when it is used. It also gives utility companies the ability to reduce consumption by communicating to devices directly in order to prevent system overloads. Examples would be a utility reducing the usage of a group of electric vehicle charging stations or shifting temperature set points of air conditioners in a city. To motivate consumers to cut back use and perform what is called peak curtailment or peak leveling, prices of electricity are increased during high demand periods and decreased during low demand periods.
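A toy calculation of the pricing mechanism just described: under an assumed time-of-use tariff, the same appliance cycle costs roughly three times as much at the evening peak as off-peak. The tariff levels and the 1.8 kWh cycle are assumptions for illustration only.

```python
# Cost of the same 1.8 kWh appliance cycle at peak versus off-peak hours.

TOU = {hour: 0.32 if 16 <= hour < 21 else 0.11 for hour in range(24)}  # EUR/kWh
CYCLE_KWH = 1.8  # e.g., one dishwasher run

for start in (17, 21):   # 5 pm versus 9 pm
    print(f"start {start:02d}:00 -> {CYCLE_KWH * TOU[start]:.2f} EUR")
# start 17:00 -> 0.58 EUR; start 21:00 -> 0.20 EUR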
It is thought that consumers and businesses will tend to consume less during high demand periods if it is possible for consumers and consumer devices to be aware of the high price premium for using electricity at peak periods. This could mean making trade-offs such as cycling on/off air conditioners or running the dishwasher at 9 pm instead of 5 pm. When businesses and consumers see a direct economic benefit of using energy at off-peak times, the theory is that they will include the energy cost of operation in their consumer device and building construction decisions and hence become more energy efficient. See Time of day metering and demand response. According to proponents of smart grid plans, the improved flexibility of the smart grid permits greater penetration of highly variable renewable energy sources, such as solar power and wind power, even without the addition of energy storage. Current network infrastructure is not built to allow for many distributed feed-in points, and typically even if some feed-in is allowed at the local (distribution) level, the transmission-level infrastructure cannot accommodate it. Rapid fluctuations in distributed generation, such as due to cloudy or gusty weather, present significant challenges to power engineers who need to ensure stable power levels through varying the output of the more controllable generators such as gas turbines and hydroelectric generators. Smart grid technology is a necessary condition for very large amounts of renewable electricity on the grid for this reason.
Market-enabling
The smart grid allows for systematic communication between suppliers (their energy price) and consumers (their willingness-to-pay), and permits both the suppliers and the consumers to be more flexible and sophisticated in their operational strategies. Only the critical loads will need to pay the peak energy prices, and consumers will be able to be more strategic in when they use energy. Generators with greater flexibility will be able to sell energy strategically for maximum profit, whereas inflexible generators such as base-load steam turbines and wind turbines will receive a varying tariff based on the level of demand and the status of the other generators currently operating. The overall effect is a signal that rewards energy efficiency and energy consumption that is sensitive to the time-varying limitations of the supply. At the domestic level, appliances with a degree of energy storage or thermal mass (such as refrigerators, heat banks, and heat pumps) will be well placed to 'play' the market and seek to minimise energy cost by adapting demand to the lower-cost energy support periods. This is an extension of the dual-tariff energy pricing mentioned above.
Demand response support
Demand response support allows generators and loads to interact in an automated fashion in real time, coordinating demand to flatten spikes. Eliminating the fraction of demand that occurs in these spikes eliminates the cost of adding reserve generators, cuts wear and tear and extends the life of equipment, and allows users to cut their energy bills by telling low-priority devices to use energy only when it is cheapest. Currently, power grid systems have varying degrees of communication within control systems for their high-value assets, such as in generating plants, transmission lines, substations and major energy users. In general, information flows one way, from the users and the loads they control back to the utilities. The utilities attempt to meet the demand and succeed or fail to varying degrees (brownout, rolling blackout, uncontrolled blackout).
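The standby-generator sizing mentioned earlier can be sketched with a normal approximation: if aggregate load is roughly Gaussian, the reserve needed for a target loss-of-load probability follows from the normal quantile. All figures below are illustrative assumptions.

```python
# Size standby generation for a target probability of load exceeding capacity.

import math
from statistics import NormalDist

mu, sigma = 950.0, 60.0      # MW: forecast mean load and its spread
base_capacity = 1000.0       # MW: always-on generation
unit_size = 25.0             # MW per standby generator
target_lolp = 1e-4           # acceptable probability that load exceeds capacity

needed = NormalDist(mu, sigma).inv_cdf(1 - target_lolp)   # load level to cover
units = math.ceil(max(0.0, needed - base_capacity) / unit_size)
print(f"cover up to {needed:.0f} MW -> {units} standby unit(s) of {unit_size:.0f} MW")
```

Shrinking sigma, which is what demand response effectively does by flattening spikes, directly reduces the number of standby units required, which is the economic argument the next paragraph makes.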
The total amount of power demanded by the users can have a very wide probability distribution, which requires spare generating plants in standby mode to respond to the rapidly changing power usage. This one-way flow of information is expensive; the last 10% of generating capacity may be required as little as 1% of the time, and brownouts and outages can be costly to consumers. Latency of the data flow is a major concern, with some early smart meter architectures allowing as much as 24 hours' delay in receiving the data, preventing any possible reaction by either supplying or demanding devices.

Platform for advanced services

As with other industries, use of robust two-way communications, advanced sensors, and distributed computing technology will improve the efficiency, reliability and safety of power delivery and use. It also opens up the potential for entirely new services or improvements on existing ones, such as fire monitoring and alarms that can shut off power, make phone calls to emergency services, etc.

Provision megabits, control power with kilobits, sell the rest

The amount of data required to perform monitoring and switching one's appliances off automatically is very small compared with that already reaching even remote homes to support voice, security, Internet and TV services. Many smart grid bandwidth upgrades are paid for by over-provisioning to also support consumer services, and subsidizing the communications with energy-related services or subsidizing the energy-related services, such as higher rates during peak hours, with communications. This is particularly true where governments run both sets of services as a public monopoly. Because power and communications companies are generally separate commercial enterprises in North America and Europe, it has required considerable government and large-vendor effort to encourage various enterprises to cooperate. Some, like Cisco, see opportunity in providing devices to consumers very similar to those they have long been providing to industry. Others, such as Silver Spring Networks or Google, are data integrators rather than vendors of equipment. While the AC power control standards suggest powerline networking would be the primary means of communication among smart grid and home devices, the bits may not reach the home via Broadband over Power Lines (BPL) initially but by fixed wireless.
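A quick back-of-envelope calculation makes the "kilobits versus megabits" point; the message size and reporting interval below are assumptions chosen for illustration, not figures from any metering standard:

```python
# Assumed figures: a 100-byte meter/control message every 15 minutes,
# versus a modest 10 Mbit/s residential broadband connection.
MSG_BYTES = 100
MSGS_PER_DAY = 24 * 4            # one message every 15 minutes
BROADBAND_BPS = 10_000_000       # 10 Mbit/s

metering_bits_per_day = MSG_BYTES * 8 * MSGS_PER_DAY
broadband_bits_per_day = BROADBAND_BPS * 86_400

print(f"Metering traffic:   {metering_bits_per_day / 1000:.0f} kbit/day")
print(f"Broadband capacity: {broadband_bits_per_day / 1e9:.0f} Gbit/day")
print(f"Ratio: 1 to {broadband_bits_per_day // metering_bits_per_day:,}")
```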
https://lawaspect.com/smart-grid-6/
It's no secret that we're in a transformative and turbulent time for utilities. Having attended the recent Itron Inspire conference, it is clearer than ever before that although there are several forces driving widespread transformation - changing legislation, shifting consumer expectations and the rise of distributed energy resources among them - the root cause is climate change. The most immediate consequence of climate change is the increasing frequency and intensity of extreme weather events. In the past year, we've seen the effect of both extreme heat and cold across the United States. "Once-in-a-lifetime" conditions are now commonplace. But the changing climate is also at the heart of the broader energy transition. In their efforts to reduce emissions, corporations, governments and consumers alike are all looking to their utilities to develop and deploy low-carbon solutions without putting the grid at risk. That means utilities are fighting extreme weather on one hand and trying to enable the energy transition on the other. It's a new and unfamiliar ask - but there are clear actions for utilities to take charge and weather the storms ahead. To make sure the grid remains stable no matter what, many Distribution System Operators (DSOs) are eyeing next-generation Smart Grid solutions to enable the intelligent grid of the future, either for the first time or as a replacement for first-generation technology. This kind of technology is crucial. Not only does it improve grid visibility, it also increases reliability and resiliency by embedding operational flexibility. Both visibility and flexibility are central to enduring, and thriving through, the disruption caused by the energy transition, which is on our doorstep:

- 92% of utility executives surveyed expect extreme weather events to increase and worsen over the next 10 years.
- 63% of utility executives surveyed think a cyberattack is likely to impact a distribution company, resulting in an interruption to the electricity supply, by 2022.
- 78% of utility executives surveyed expect the energy transition to trigger a tipping point, after which distribution operations will be significantly impacted and the traditional operating paradigm becomes untenable.

With such significant tipping points ahead, it's time to match new technology with a more flexible operational approach. Despite the challenges to come, it's not all "doom and gloom". There are concrete actions utilities can take to address their new operating reality. In addition to investing in infrastructure hardening, the approach used for effective operational planning can be extended to address the chronic nature and intensity of weather and security events. Emergency response and outage restoration are both core competencies at any utility's foundation. When an emergency operating center (EOC) is activated during such an event, the utility centralizes operations, communications, and reporting, pulling resources from internal departments and external mutual aid to effect the most efficient response. As the grid is modernized and digitized, any event that impacts the availability of critical IT infrastructure, such as a cyber-attack, could be added to the same operational response playbook.
Evolving this existing capability can empower utilities with the flexibility and resilience they need to address constant disruption, as they develop an adaptive operational response that incorporates emergency management principles to navigate a wider range of operational challenges. Executing new operational plans effectively requires utilities to expand their use of available data and insights. Distributed intelligence solutions deployed at select North American utilities have given them the ability to make more immediate decisions at the premise level. They can also assess data collected from multiple sources over time to make more informed operational decisions. For example, assessing data about power quality during a major event through distributed intelligence could detect partial power situations in three-phase and networked situations more effectively. Better real-time insights into the connectivity model could also inform a stronger understanding of nested outages (a toy illustration appears below). These would help to inform restoration of both the data network (FAN outages) and the grid (meter/grid device outages). At an aggregated level, short- and long-term planning is far more powerful when it feeds off a variety of data sources, including weather data, distributed energy generation and utility asset information (especially performance and SCADA data). Continuing with the example of a major event, the data will help a utility not only predict the way a grid responds during outages, but also how the grid is able to rebuild. The reason distributed intelligence is so powerful is that it enables a utility to isolate an issue and introduce a solution (that's often self-healing) without an impact on broader operations. This layered intelligence, coupled with Artificial Intelligence and automation, makes it easier to pinpoint and quickly resolve issues, and ultimately creates a more resilient grid. As powerful as this combination can be, it also challenges utilities to work across silos. The organizations that match the integration of robust AI and machine-to-machine communication with a focus on human interaction will be best placed to drive operational flexibility. Change isn't coming - it's already here. But that shouldn't be a cause for concern. From likely weather or security events, to non-traditional market entrants, to an evolving business model, utilities can take concrete steps today to plan for the inevitable change of tomorrow. Incorporating principles from emergency management planning into regular operational planning will help utilities adapt to the increasingly common threats facing their business. Extending the use of AI to drive both short-term action and long-term operations will allow utilities to be more proactive in addressing the changing nature of operations. And finally, continuing to evolve the organizational model to drive increased communication between humans and machines will increase both efficiency and responsiveness, allowing utilities to compete more effectively in an evolving market. Contact us today to discuss how distributed intelligence can empower your business.
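The blog describes nested-outage insight but no algorithm, so the following is a toy illustration only. It assumes a hypothetical feeder connectivity tree (real models come from GIS/ADMS systems) and infers the most upstream device whose entire downstream set of meters has reported power loss:

```python
# Hypothetical feeder model: parent -> children. Device and meter names
# are invented for illustration.
FEEDER = {
    "substation": ["xfmr_A", "xfmr_B"],
    "xfmr_A": ["meter_1", "meter_2"],
    "xfmr_B": ["meter_3", "meter_4"],
}

def leaves_under(node):
    """All meters served (directly or indirectly) by a device."""
    children = FEEDER.get(node)
    if not children:                 # the node is itself a meter
        return {node}
    out = set()
    for child in children:
        out |= leaves_under(child)
    return out

def probable_fault(dark_meters):
    """Highest device whose whole downstream set reported power loss."""
    candidates = [n for n in FEEDER if leaves_under(n) <= set(dark_meters)]
    # Prefer the candidate serving the most meters, i.e. the most upstream one.
    return max(candidates, key=lambda n: len(leaves_under(n)), default=None)

print(probable_fault({"meter_1", "meter_2"}))                          # -> xfmr_A
print(probable_fault({"meter_1", "meter_2", "meter_3", "meter_4"}))    # -> substation
```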
https://www.accenture.com/us-en/blogs/accenture-utilities-blog/embedding-resilience
5 Security Risks Facing Utility Companies Today

The U.S. utility industry is one of the most important industries in the country and provides water, sewage, energy, and other basic services to the public. This sector employs more than 500,000 people and has a market value of $1.5 trillion. However, utility companies are extremely vulnerable when it comes to security. The sector is a prime candidate for cyberattacks, as well as physical attacks. As our cities become smarter, the pressure on the nation's energy grid increases and utility security risks become more serious. Utility companies must remain aware and responsive to industry-wide concerns in order to mitigate these threats and keep the nation's grid secure with advanced technologies and utility surveillance systems. With that being said, properly crafted critical infrastructure surveillance aids in the prevention of theft and other unwanted activity.

Securing The Nation's Utility Sector

The utility sector presents a unique security challenge because its physical infrastructure is intertwined with the virtual systems being used to automate processes and provide security. This interdependence is called a physical-cyber convergence, and it presents utility security risks within the utility industry. A disruption of one portion of these systems could very well affect the other, causing loss of power, destruction of equipment, and damage to devices throughout the grid. When considering the problems they will need to address, utility companies should focus on five key areas of risk:

Risk #1 – Securing Critical Infrastructure And The Grid

The critical infrastructure that makes up the energy sector keeps the nation's lights on and plays a vital role in our economy. However, our energy and utility infrastructures are experiencing a shift toward the use of smart technologies, which often creates new utility security risks within the sector. Aging operational technologies (such as Industrial Control Systems and SCADA) are prime targets for criminals as they are connected to wider networks. Ransomware, malware, and attack campaigns by savvy cybercrime groups could easily cause mass outages of the nation's grid. Experts within the field estimate that some components of the country's energy grid are more than a century old – twice its usable life expectancy of 50 years. As this old infrastructure wears out, it becomes vulnerable to digital threats, especially if the aging technology is being linked to advanced technology without proper upgrades. This means that there may not be much standing between the grid and a crisis.

Risk #2 – IoT And Cyberphysical Attacks

In the past several years, cyber threats to utility providers have grown in number and sophistication. One of the key reasons for this spike in cyberattacks is the increase in the use of internet-enabled devices and wireless sensor networks by energy and utility providers. Traditional energy systems are based on the use of cyber-physical systems, but advances in technology have introduced the Internet of Things (IoT) and the idea of controlling physical systems through digital methods. Mobile apps have become popular with energy providers, which presents unique utility security risks across the sector. These risks include espionage, data breaches, vandalism, physical damage, and data tampering. As a result of the increased use of these wireless data connection systems, utility providers must adapt their security measures and upgrade systems accordingly.
Risk #3 – Automation, AI, And Privacy

Advances in technology are being used by the energy sector to streamline processes and operations. These advances include cloud computing, big data, robotics, and artificial intelligence (AI). Such automation certainly creates more efficient procedures, but it also brings about new security and privacy concerns as AI captures sensitive personal information to build optimized systems. The aggregation of all the data collected means new concerns in terms of privacy and requires utility providers to guard against data breaches and cyberattacks in order to protect consumers.

Risk #4 – Security Skills Shortage And Employee Training

Utility companies have been in operation for decades, and are part of a well-established, traditional industrial sector. Their daily operations once included minimal security, but now – with the increase in cyber threats around the globe – they must adapt to an increasingly complicated technology environment that necessitates increased security measures. As the industry changes, utility operators must acknowledge the need for different security teams and be willing to invest in the training of employees to guard against phishing attempts or other insider threats that could derail operations and affect the grid.

Risk #5 – Securing The Supply Chain

The increasing use of connected services within the supply chain has complicated the delivery and receipt of products across the nation. Utility providers are especially vulnerable to cyberattacks within the supply chain and need to be aware of the unique threats up and down the chain. These threats include the disruption of services provided by power plants and clean energy generators after a ransomware attack; a large-scale disruption of power to customers through a cyberattack that remotely disconnects services; the disruption of substations leading to regional loss of services; and the theft of customer information over an unsecured network.

Mitigating Utility Security Risks

The cost of upgrading outdated technology is prohibitive to many operators, but the outdated technologies being used have led to security breaches for many utility companies. As utility providers increase their use of technology to automate operational processes, they must keep in mind the importance of putting in place digital defense measures to mitigate utility security risks and invest in the training of all employees to ensure cyber security breaches do not occur. At the same time, it's worth investing in physical security measures to maintain the integrity of the nation's energy grid and connected networks. Good physical security at utility sites can help ensure the integrity of sensitive areas such as data centers and substations. By paying attention to both the physical and cyber security measures they have put in place, utility operators can guarantee the security of their data as well as the integrity of the nation's energy grid. For more information about security options for energy and power plants, click here.

Brent Canfield, CEO and Creator of SentryPODS

Brent Canfield, CEO and founder of Smart Digital and SentryPODS, founded Smart Digital in 2007 after completing a nine-year active-duty career with the United States Marine Corps. During the 2016 election cycle, he provided executive protection for Dr. Ben Carson. He has also authored articles for Security Info Watch.
https://www.sentrypods.com/5-security-risks-facing-utility-companies-today/
Implications of COVID-19 for the Electricity Industry

The driving force of the economy is energy. The demand for electricity has been significantly reduced due to the recent Coronavirus outbreak. Under these conditions, the energy market is seriously impacted and faced with huge challenges. Electricity demand has fallen sharply as governments around the world were compelled to curtail business activity to minimize the threat of coronavirus, and the structure of the load and the regular load profile have also shifted. The share of renewable energy production has risen as a result of the fall in overall electricity generation. Changes in the power balance situation and increased demand volatility have put higher pressure on system operators, along with voltage breaches and difficulties in system repair and management. The electricity sector is greatly impacted, though long-term investment in renewable energy is projected to be steady.

US Electricity Demand Analysis During COVID-19

In the United States, from March onwards, natural gas remained the main source of electricity, though renewable electricity far outpaced the output of coal-fired power plants while the first lockdown measures were in place and demand declined. Around June, as the rigor of the government response softened, natural gas solidified its leading role. Coal and nuclear energy peaked in July and August to meet the increasing demand. They surpassed the production of renewables, which declined in the face of the seasonal downturn in wind and hydro. By August, overall energy production was much higher than at the same time in 2019 because temperatures were higher; this increase in demand was met by increased coal and higher wind generation. Temperatures dropped significantly in September, so cooling demand and overall generation decreased to lower levels than in 2019, affecting the output of coal power. In October, total generation levels were equal to 2019, and the electricity mix followed its seasonal trend. The fall in wind and solar production in December contributed to an overall drop in the share of renewables. Compared to the same period in 2019, total energy demand in the United States dropped by 3.8% from January to August 2020. Commercial and industrial demand fell by 6.4% and 9.2%, respectively, while residential demand grew by 2.4% (the percent-change arithmetic behind these figures is sketched at the end of this article). Both the coronavirus-related downturn in economic development and the comparatively mild winter heating season caused the overall decrease.

Texas During COVID-19

Along with the out-of-control pandemic, the weather certainly impacts energy use. Texas, for instance, had such a hot summer that, even though everyone was quarantined indoors, air conditioning drove a rise in electricity usage in August. Furthermore, according to the grid operator, the Electric Reliability Council of Texas (ERCOT), the Coronavirus outbreak had a limited impact on the power grid in Texas. ERCOT delivers a critical service to Texans, and they took special precautions during these challenging times to ensure the health and welfare of their workers. For grid operators, extra precautionary measures were taken, including alternating facilities and other practices that promoted social distancing. In order to retain essential operations, ERCOT identified personnel and suppliers that were expected on-site. Grid technicians, for instance, had to operate on-site.
However, for employees and consultants who did not need to be on-site to fulfil their job duties, ERCOT introduced a voluntary work-from-home scheme. Starting March 30, 2020, a new fund to support customers who needed assistance with their energy bills was initiated by the Public Utility Commission of Texas. The Texas COVID-19 Electricity Relief Program (CERP) includes 2 options:

- Retail Electric Providers (REPs) shall give any residential consumer who requests one, independent of their prior payment history, a deferred payment plan.
- For residential consumers who have been added to the state's unemployed and low-income list due to the consequences of COVID-19, REPs must postpone energy disconnections.

On a different note, commercial energy use has seen a decline as schools, restaurants, movie theatres, businesses, day care centers and other non-essential facilities closed during the Coronavirus pandemic. Industrial demand, meanwhile, was not predicted to be significantly affected. However, the shift in work patterns, with staff continuing to work from home, has made it difficult for grid operators to predict demand for every hour of the day. Grid operators have found themselves analyzing trends that they have never seen before. For the 12-month forward market price for energy in the ERCOT market, there has been little change. There are, however, a variety of variables that come into play when considering that:

- Demand: Commercial energy consumers have witnessed the oil price war between Russia and Saudi Arabia, in addition to the Coronavirus pandemic. A downturn in the economy could reduce electricity demand.
- Supply: Lower demand might indicate ample supply, suggesting lower long-term rates.
- Weather: Power use trends would look much like a residential load for Americans who worked and continue working from home. Since about 50% of residential use is related to heating and cooling, demand would be closely linked to the temperature and weather outside.

In the energy exchange markets, the transitions in consumption trends bring volatility. Volatility implies risk, and risk brings higher costs. At Lone Star DR, our sole focus is to do irrefutably superior work for our Texas-based ERCOT Demand Response customers. Therefore, rest assured: even though the Coronavirus pandemic is a new form of emergency, being prepared to deal with an emergency is not. Contact us for more information on how you can manage your energy and increase your budget during these challenging times, today.
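Returning to the year-over-year figures quoted earlier: the percent-change arithmetic is simple once sectoral totals are in hand. The totals below are made up and merely scaled to reproduce the article's percentages; they are not EIA data:

```python
# Hypothetical Jan-Aug consumption totals (TWh) by sector; the 2020 values
# are scaled to mirror the percentage changes quoted in the article.
demand_2019 = {"residential": 950.0, "commercial": 900.0, "industrial": 640.0}
demand_2020 = {"residential": 972.8, "commercial": 842.4, "industrial": 581.1}

def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100.0

for sector in demand_2019:
    change = pct_change(demand_2019[sector], demand_2020[sector])
    print(f"{sector:>11}: {change:+.1f}% year over year")

total_change = pct_change(sum(demand_2019.values()), sum(demand_2020.values()))
print(f"{'total':>11}: {total_change:+.1f}%")   # prints -3.8%, matching the article
```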
https://www.lonestardemandresponse.com/blog/implications-of-covid-19-for-the-electricity-industry
By Bob Yinger and Jude Schneider, Southern California Edison, Kevin Clampitt, Pandora Consulting Associates and Megan Remillard, Kenyon College

The California electricity grid of the future will look substantially different from the grid of today. By 2050, California wants to have cut its emissions to 80 percent below 1990 levels. By 2020, electric utilities are mandated to obtain one-third of their energy from renewable resources while new residential construction is expected to be zero net energy (ZNE). Four years later, utilities have to install 1,325 megawatts of energy storage; and by 2030, all new commercial construction is expected to be ZNE. In 2009, anticipating these challenges, Southern California Edison (SCE), in partnership with the U.S. Department of Energy (DOE), embarked on the Irvine Smart Grid Demonstration. ISGD was funded through the American Recovery and Reinvestment Act of 2009 (ARRA). The DOE grant was $39.6 million, which SCE matched with in-kind contributions of $26.9 million and with $12.7 million from project partners such as GE and SunPower. ISGD looked at the grid from end to end to see how the power network could promote sustainability by incorporating more renewable resources and energy saving devices and by performing system modifications on both the customer and the utility sides of the meter. A major goal of the demonstration was to understand how to meet the needs of a new generation of energy and environmentally conscious consumers. This holistic approach allowed SCE to demonstrate a broad set of grid capabilities while remaining focused on the end users—22 homeowner participants who were themselves experimenting with the tools to control their energy use and costs. The project team installed ISGD components between March and September of 2013. Field experiments began on July 1, 2013, and continued through June 30, 2015. In the ensuing months, SCE has been analyzing the data and identifying lessons learned. The final technical report will be submitted to DOE in December 2015, but in the meantime, interim results are providing some clear directions for SCE and for the future of grid technology.

Project Structure

The University Hills neighborhood of Irvine that was selected for the project is on the University of California, Irvine campus (UCI). The neighborhood met the project's technical criteria for circuit characteristics, geographic considerations, home age and style and transformer capacity. Three blocks of homes that were built more than a decade ago were retrofitted with different sets of energy saving devices and rooftop solar panels (see Table 1). A fourth block served as an experiment control group. ISGD was organized around four domains: Smart Energy Customer Solutions, Next Generation Distribution System, Interoperability and Cybersecurity, and Workforce of the Future. These domains were further divided into eight sub-projects (see Table 2).

Smart Energy Customer Solutions

This domain comprises customer-facing sub-projects that examined how technical solutions could be applied by real people where they live and work. In the homes, sub-project 1 looked at ZNE and advanced demand response capabilities with various home area network devices. Sub-project 2 examined a solar car shade on the roof of a UCI parking structure that combined 48 kW of solar photovoltaics (PV) with energy storage to supply 20 electric vehicle charging stations in the parking structure below. All the homes received some form of energy storage.
Two blocks of homes received their own 4 kW/10 kWh battery unit, capable of balancing the homes' solar PV output, shifting energy use from on-peak to off-peak periods and storing enough energy to power critical home loads for several hours. Another block received a community energy storage device (25 kW/50 kWh) that was used for similar purposes. The in-home units were first generation and likely less efficient and cost-effective than the energy storage devices expected to arrive in the market over the next year. The community energy storage system will be tested further in future demonstration projects. ISGD also performed experiments that examined the impacts of electric vehicles (EVs) on the grid. Homes in three blocks were outfitted with EV charging stations and these homeowners, through a separate, unaffiliated UCI program, were also lent EVs. One experiment used existing smart meter technology to develop a demand response program for EVs. Smart meter radio communication delivered demand response signals to the homes, throttling or stopping EV charging during periods of peak energy use (a minimal sketch of this throttling logic appears below). This measure would enable customers to allow grid operators to remotely adjust the rate and timing of EV charging in exchange for reduced electricity rates. So, for instance, a car that would be charged overnight could start charging later (or charge at a slower rate), allowing grid operators to pause or slow EV charging to prevent transformer overloads or to use surplus renewable generation. These demand response capabilities were combined with control of the customer's air conditioner, home energy storage and smart appliances to show how customers could control their energy use to help reduce their costs. The solar car shade, which includes a 48 kW solar PV array, is connected to its own 100 kW/100 kWh battery located in a cargo container adjacent to the parking structure. The solar energy, either directly or through the battery, provides power to 20 EV charging stations, allowing charging during periods of peak electricity use while minimizing impacts to the grid. During the experiment period these charging stations were free to users, and their popularity grew over time. Although final results have not been tallied, it appears that for much of the experiment period the system produced as much energy as was used for EV charging. By the end, however, the allure of free 24-hour-per-day charging tipped the system into being a net energy consumer. At the conclusion of the project, the charging stations will be converted to a pay system and may again demonstrate ZNE usage.

Next Generation Distribution System

Four sub-projects focused on advanced capabilities for the distribution grid. Sub-project 4 piloted distribution volt/VAR control. Volt/VAR control regulates customer voltage through centralized capacitor control technology. Engineers were able to realize reductions in voltage and customer energy use of between 1 and 4 percent. Implementation across SCE's distribution system could save hundreds of millions of dollars in customer energy costs over the next decade. Following this successful pilot, volt/VAR control will be rolled out on SCE's distribution system over the next few years. Sub-project 5 reconfigured two distribution circuits into a closed loop, rather than the traditional radial configuration. This allows engineers to identify and isolate circuit faults without interrupting power to all customers, while also reducing outage length.
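Picking up the EV demand response mechanism described above: the article does not publish SCE's control logic, so the sketch below invents a simple three-level signal and charger limits purely for illustration:

```python
from enum import Enum

class DRSignal(Enum):
    """Invented demand-response levels; a real program defines its own."""
    NORMAL = 0
    THROTTLE = 1
    STOP = 2

# Assumed limits for a home Level 2 charging station.
MAX_AMPS = 32
THROTTLED_AMPS = 8

def charger_setpoint(signal: DRSignal) -> int:
    """Map a demand-response signal to a charging current in amps."""
    if signal is DRSignal.STOP:
        return 0
    if signal is DRSignal.THROTTLE:
        return THROTTLED_AMPS
    return MAX_AMPS

# During an evening peak the utility sends THROTTLE, then STOP; overnight it clears.
for hour, sig in [(18, DRSignal.THROTTLE), (20, DRSignal.STOP), (1, DRSignal.NORMAL)]:
    print(f"{hour:02d}:00 -> charge at {charger_setpoint(sig)} A")
```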
Two sub-projects used a large energy storage battery (2 MW/500 kWh) connected directly to a distribution circuit. In sub-project 3, this battery was able to reduce distribution circuit loading near the substation by discharging energy during peak times (a simplified dispatch sketch appears at the end of this article). Sub-project 6 used the battery to provide circuit-wide changes in load in order to simulate distributed energy resources (DER) participating in a demand response event. This experiment used phasor measurements and an analysis algorithm to detect the number of megawatts of demand response on the circuit. Such a capability could potentially be useful for supporting market participation of DER.

Interoperability and Cybersecurity

Interoperability and cybersecurity are foundational elements for the development of smart grid capabilities. As the electric grid evolves, it will include an increasing number of distributed and interconnected grid devices, both utility and customer-owned. Interoperability helps these devices work together seamlessly and efficiently. The grid and our customers also need to be protected from increasing and persistent cybersecurity threats. Two sub-projects comprise this domain: Secure Energy Network (SENet), which looked at cybersecurity, and Substation Automation 3, a pilot project that used international standards to provide interoperability for substation components. For ISGD, the SENet provided a secure communications and data handling system based on centralized cybersecurity services that extended all the way down to the radios operating within the project homes and the energy storage devices located on distribution circuits. This system integrated multiple data sources so grid operators and engineers could better understand what was happening on the grid and respond appropriately. As part of ISGD, data was collected from a number of sources including circuit monitors, residential energy storage, solar PV, thermostats, distribution automation equipment and substation automation systems. To collect this information and make it operational, SENet interfaced with multiple communications protocols including ZigBee, smart meter systems, private mesh networks and cellular data. Substation Automation 3 was demonstrated at MacArthur substation in Newport Beach, California. The goal was to transition to standards-based communications, automated control and enhanced protection design. The design incorporated IEC 61850-compliant software and hardware from multiple vendors. The IEC 61850 standard provides an internationally recognized method of communications for substation equipment providing protection, monitoring and control. This standard also provides simplified system configuration and integration. New testing methods using simulation and hardware-in-the-loop systems allowed this substation automation system to be implemented in record time with minimal problems. To date, the system has been operating so well that it has been made the standard for SCE's future substation automation implementations.

Workforce of the Future

The grid of the future will require a workforce skilled at understanding new, information-driven technologies. In the project's fourth domain, ISGD components were examined against the current infrastructure to identify the workforce skills necessary to properly install, operate and maintain a new generation of grid technologies.
Training best practices suggested by the ISGD workforce development team include: engaging the workers and their supervisors early on in the design process; building awareness among the stakeholders; involving the stakeholders in the technology development/deployments; conducting training sessions that allow participants to touch and feel the technologies; and providing easy access to on-demand training materials for workers.

Meeting the Future

ISGD has already led to the implementation of new systems, such as distribution volt/VAR control and Substation Automation 3, and identified areas for further research and demonstration projects. Some of these research and demonstration projects, such as volt/VAR control using DER, coordinated operations of several DERs to manage circuit loading, and advanced distribution circuit reconfiguration device development and operations, are being undertaken with California Electric Program Investment Charge (EPIC) dollars, and others are being funded internally by SCE. As part of the Distribution Resources Plan filed with the California Public Utilities Commission in July, SCE laid out plans for the Integrated Grid Project. The project will evaluate how technologies like the smart meter and volt/VAR control could help mitigate problems associated with DER implementation. The Integrated Grid Project will demonstrate both the controls to manage and operate DERs and the ability to optimally serve an integrated distribution system safely, reliably and affordably while determining the value of DERs as grid assets. SCE has been sharing the lessons it has learned through its participation in ISGD with industry stakeholders, including vendors, standards development organizations, academic and industry research organizations, regulators and other utilities. The final technical report will be available on the DOE website, smartgrid.gov, in early 2016. The lessons learned from ISGD will likely influence the direction of new products and guide the development of better standards.

Bob Yinger, P.E., is a consulting engineer in advanced technology at Southern California Edison, Jude Schneider is a senior project manager of education and outreach in advanced technology at Southern California Edison, Kevin Clampitt is a consultant with Pandora Consulting Associates and Megan Remillard is a student at Kenyon College and former intern at Southern California Edison.
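Returning to the 2 MW/500 kWh circuit battery of sub-projects 3 and 6: a battery with that rating can sustain full power for only 15 minutes, so any peak-shaving dispatch must respect both the power and the energy limits. The greedy dispatch below is a toy model with a hypothetical load profile, not SCE's algorithm:

```python
POWER_KW = 2000.0      # inverter limit
ENERGY_KWH = 500.0     # usable stored energy
THRESHOLD_KW = 9000.0  # shave circuit load above this level
STEP_H = 0.25          # 15-minute dispatch intervals

# Hypothetical circuit load (kW) over eight 15-minute intervals around a peak.
load = [8500, 9200, 9800, 10400, 10100, 9500, 8900, 8600]

soc = ENERGY_KWH       # start fully charged
for i, kw in enumerate(load):
    excess = max(0.0, kw - THRESHOLD_KW)
    # Discharge is limited by the inverter rating and by the remaining energy.
    discharge = min(excess, POWER_KW, soc / STEP_H)
    soc -= discharge * STEP_H
    print(f"t={i * 15:3d} min  load={kw:6.0f} kW  shave={discharge:6.0f} kW  soc={soc:5.0f} kWh")
```

Running it shows the energy limit binding before the power limit does: the battery empties during the fourth interval and contributes nothing for the rest of the peak, which is exactly why a 15-minute battery suits short peaks better than long ones.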
https://www.power-grid.com/solar/the-irvine-smart-grid-demonstration-leads-the-way-to-modernizing-california-s-electric-grid/
FTTH Conference 2013 presentation transcript - European Utilities Telecom Council (EUTC), "Smartgrid and the role of FTTH", FTTH Conference 2013, 20th February, ExCeL London.

- European Utilities Telecom Council (EUTC): formed in October 2004 as a European arm of UTC in the USA. A utility trade association focusing on telecommunications and ICT as needed to support core utility businesses: electricity, gas and water. Originally five members providing a small budget to allow recruitment of European resources.
- Membership: October 2004, 5 charter members; March 2012, 24 charter members (2 from Africa), 2 regular members, 9 associate members; in 2012, 2 more charter members and 1 more associate.
- Europe's vision for smartgrid (2005/06): 20% energy from renewable sources, 20% reduction in CO2, 20% reduction in overall energy consumption. A Smartgrid Task Force was established with working groups for energy production, energy networks (TSO & DSO), retail of energy supply, and manufacturing. Everyone stated telecoms & ICT would be required, but what and how?
- EUTC identified a crucial role in supporting the development and creation of the future smartgrids: all SGTF working groups recognised that more ICT would be needed, but they had not involved ICT providers, vendors or the utility ICT experts. With ICT seriously under-represented for two years, EUTC worked with DG Information Society to raise the importance of, and the issues in, providing ICT to support smart metering and smart infrastructure networks. EUTC was invited to participate in workshops and expert groups by DG Energy and DG INFSO: SGTF Steering Committee, Expert Groups 2, 3 & 4. It also funded projects supporting consortia examining ICT solutions.
- Smart Grid @ Iberdrola (slide diagram): a distribution network management centre linking consumers, retailers, the AMM system and SCADA. Smart metering provides remote readings (energy + power), alarms, quality-of-supply parameters, fault detection (no trial & error method), remote tariff programming, remote connect/disconnect and customer care improvement; smart infrastructure provides MV and LV supervision, asset monitoring, and automation with MV grid real-time control, across the LV, MV and HV grids. The slide notes order-of-magnitude equipment counts (roughly 10^2 to 10^6) for meters, distributed secondary substations (automated), and a growing number of wind & PV connections, generation & EVs.
- ICT in the distribution network (slide table): 132kV primary network, 283 substations, ~100% SCADA; 66kV & 33kV substations, 583, ~100% SCADA, ~30-60% automation; 11kV substations, 90,000, ~50% SCADA, ~5% automation; 400V, 4m+ consumers, near-zero coverage today, heading towards ~100% with smart meters.
- Smartmetering: a progression from automatic meter reading? With two-way communications it will promote reduction in use of energy, provide for time-of-use tariffs, allow demand-side management and support distributed generation. Smartmetering does not need the smart infrastructure, but the smart infrastructure needs information from the customer (the meter?).
- Smartmetering technologies & solutions: many technologies can be used, both fixed and wireless - public networks (e.g., GSM, FTTH where available, satellite) and utility private networks (PowerLine Comms, long range radio, mesh radio). But cost is a major concern - is the customer to pay? Solutions must be future-proof (utilities expect 10-15 years' life from telecom assets) and plug-and-play. DSOs are increasingly looking at the combined business case for smartmetering and smart infrastructure.
- Mapping of future requirements to the DSO model (slide diagram): teleprotection, CCTV, DMS and SCADA, mobile workforce, operational voice, enterprise data, distribution automation, enterprise voice, demand response, retail energy management, communication with microgrids, AMI.
- Smart infrastructure: intelligence from many asset points in the energy network. It is necessary because distributed generation connects to the energy network at all voltage layers, creating potential instability in the network; regulators and customers demand improved network performance; and energy consumption will continue to rise (EVs, heat pumps), making improvement in asset utilisation a key factor.
- Smart infrastructure technologies & solutions: many technologies can be used, both fixed and wireless - public networks (e.g., GSM, 3G, satellite) and utility private networks (optical fibre, PowerLine Comms, long range radio, mesh radio). But cost is a major concern, and the utility must bear the costs. Future-proof? The utility wants control and may use managed services, but will probably retain asset ownership. These solutions will be mission critical - systems must continue to operate when the power is off.
- In summary: utilities will make huge investments in energy networks and telecom services over the next 20-30 years. Demand for technology will increase, but no single solution fits all; utilities will mix and match, using what best fits their needs. FTTH has a role to play in smart metering, and FTTH rollout can be enhanced by using utility investment in fibre for the smart infrastructure. In Europe, the European Commission is promoting sharing of utility telecom infrastructure in support of all broadband solutions.
https://vdocuments.net/peter-moray.html
OMNETRIC Group brings together the best of two worlds: Siemens' leading energy technology product portfolio with Accenture's systems integration, consulting and managed services capabilities to support clients with innovative solutions wherever they may be on their path to a smarter grid. With the agility of a start-up and the market might of its shareholders, OMNETRIC Group offers an entrepreneurial culture, fostering a collaborative and innovative work environment. As part of our team, you'll work across a broad range of projects with other seasoned experts in the utilities and energy sectors.

Balancing demand and supply

Balancing demand and supply by intelligently forecasting, influencing and optimizing power consumption, renewable generation, and use of available resources. The stability of the grid depends on utilities' ability to balance demand and supply in real time. Changes in consumption, the integration of renewables, and more sophisticated control technologies mean utilities can now more actively manage demand. Active load management unifies traditionally siloed utility systems, leveraging information within the transmission and distribution networks to forecast, influence, and optimize grid conditions in near-real time. OMNETRIC Group facilitates active load management via a distributed energy resource management system, helping utilities leverage existing infrastructure and integrate renewables, back office IT/OT systems, and consumer demand to dynamically balance demand and supply. We also help link these systems to energy markets to influence and optimize power consumption, allowing utilities to see true financial benefits.
https://www.energiekaart.net/organisatie/omnetric-bv/
The US Dept of Energy is charged under the Energy Independence and Security Act of 2007 (EISA 2007) with modernizing the nation's electricity grid to improve its reliability and efficiency. The act mandates modernization of the electricity grid policy of the United States to support effective, efficient, and reliable upgrading of the nation's electricity transmission and distribution systems to maintain a secure electricity infrastructure that can meet future demand growth and achieve the ultimate goals that define a Smart Grid (Title XIII Sec 1301). This Smart Grid will become the main platform for the nation's future energy grid. It will be the backbone for the heart of power nationwide. This Smart Grid must ensure resilience, identify and prevent cyber attacks, and incorporate innovations and controls to provide affordable, safe, reliable power for all citizens. Reaching these goals requires new business models, regulatory models, and new responsibilities, as well as obligations, for grid operators, consumers, and new providers who will all help develop further innovative solutions. The details of the elements of Title XIII of the EISA 2007 are as follows: The American Council for an Energy-Efficient Economy (ACEEE) defines the Smart Grid as an umbrella concept describing electricity transmission and distribution systems that employ a full array of advanced electronic metering, communications, and control technologies. These technologies provide detailed feedback to customers and system operators on energy use and allow precise control of the entire energy flow in the nation's grid. Distribution networks and consumers will gradually switch from being passive managers and receivers to active managers and empowered, engaged consumers. The changes the Smart Grid brings will affect everyone. Electricity will no longer merely be provided by professional energy suppliers; it will be controlled by end users. These end users will be connected to distribution networks which will replace simple electric reception through connection to transmission lines. Ultimately, state and local projects will be absorbed into the functional elements of the Smart Grid with emphasis on interoperability and cyber security. In the absence of standards, the development of Smart Grid technologies may produce diverse technological investments that will become prematurely obsolete or be implemented without adequate security measures. Therefore, the National Institute of Standards and Technology (NIST) has developed a series of standards that form the roadmap and framework to support state efforts in modernizing the nation's electricity grid. Interoperability is one of the key objectives in these standards. SunView LED and APANET Green Technology Systems are compliant with these standards. There are several important reasons for the need to develop a national Smart Grid. The nation's current electricity grid is not equipped to meet the collective demands of current or future needs with the efficiency required to maintain citizen comfort and national security. Some studies claim that the present electricity generation and transmission system of the United States is ineffectual and wastes approximately two-thirds of the energy used to meet national electricity demands. With the current inefficient and often unreliable electricity system, the national economy loses approximately $250 billion annually. Outages alone cost America $150 billion each year.
The price of electricity is rising steadily and in ten years is predicted to increase by over 30% of its present cost. When the rate increases, so does the cost of the losses. Globally, utility fraud is second only to credit card fraud and costs about $85 billion worldwide. Billions of dollars are stolen from national grids, because it is easy to do and difficult to detect. The United States loses $200 billion in electricity loss and theft due to inefficient monitoring and the arduous efforts required to pinpoint the exact cause of loss. Overlooked transformers, illegal by-passes, and metering errors, coupled with aging technological equipment, contribute to this inefficient loss. Brownouts and blackouts occur due to the slow reset time of mechanical switches, lack of automated analytics, poor system visibility, and a lack of situational awareness on the part of grid operators. These outages move beyond simply waiting for lights to turn back on. Industrial production plants stop. Perishable food spoils. Traffic lights and credit card transactions become inoperable. These forms of outages cost American businesses an average of $100 billion yearly. Anyone who has experienced a lengthy electricity outage due to a natural disaster understands the inconvenience, discomfort, and fear that results from an entire system breakdown. During recent national weather disasters, the fortunate, who still had homes, sat in the cold, wet, and dark waiting for the power to come back. Some had the comfort of kerosene generators as they waited. Teams of professional electricians were summoned from far away states to assist in finding and repairing the cause of the outages. We all remember their tired expressions of frustration as they toiled endless hours over massive lines searching for the source of the power damage in inhospitable weather. With the Smart Grid, malfunctions are noted immediately and locations pinpointed exactly. No time and expense is wasted. Our current electric generation system annually produces 4.03 million tons of sulfur dioxide (SO2) and 2.1 million tons of nitrogen oxides (NOx) which are transferred into our environment. These, coupled with other pollutants, add $125 billion to annual healthcare costs, cause 18,000 premature deaths, 27,000 cases of bronchitis, and 240,000 cases of respiratory distress. The noxious effects of rampant air pollution create approximately 2.3 million lost days of work nationwide due to illness. Adding to these dismal statistics are findings by the US Environmental Protection Agency (EPA). They state that nationwide there are 200,000 premature deaths per year due to combustion emissions, especially from changes in particulate matter concentrations, and 10,000 deaths per year due to changes in ozone concentration. From economic to environmental warnings, the development and implementation of the Smart Grid is critical to America. Smart Grid technology will make clearly visible what has been up to now an invisible power producing and delivery network. It will improve the ability to predict overload and avoid outages by distribution methods that include renewable, non-renewable, and distributed energy resources (DER). These systems include natural gas fueled generation, combined heat and power plants (CHP), electricity storage, solar photovoltaic (PV), solar-thermal energy, wind energy, hydropower, geothermal energy, biomass energy, fuel cells, municipal solid waste, waste coal, coal-mine methane, and other forms of distributed generation (DG).
In using megabytes of data to move megawatts of electricity, the delivery of electricity will be more reliable, efficient, and affordable. This process will create an electric system for the United States that moves from a centralized, producer-controlled network to one that is less centralized and more responsive to proactive consumers. The Smart Grid will empower consumers to participate and choose, using public two-way communication between utilities and consumers. This will enable consumers to accurately view the electricity they use, when they use it, and how much that use costs. Through a sort of social behavior modification, consumers will be able to self-manage their own electricity use by investing in intelligent, energy-saving end-user devices or selling energy back to the utility company as excess stored energy in exchange for discounts, rebates, incentives, or revenue. This social behavior modification applies to utilities as well. Due to proactive customer participation in electric consumption, utilities will be able to use consumer demand as another alternative to alleviate the need to search for additional power generation. For the first time, residential customers will be on the same playing field and have the same discount options and demand responses presently offered to commercial and manufacturing customers. Studies report that, had the Smart Grid already been in place over the past twenty years, the nation would have saved from $46 billion to $117 billion by not constructing obsolete power plants, inefficient transmission lines, and ineffective sub-stations. The goal of the Smart Grid is to reduce utility costs, maximize efficiency system-wide, and prevent outages from natural events, human actions, and cyber attacks. What will matter most to the consumer is effective delivery of electricity at an affordable cost. This is the realm of dynamic pricing, which reflects hourly variations in retail power costs and gives consumers timely information to choose low-cost hours of use. Consumers will be able to refuse to use, or reduce their use, during peak electric use hours. Demand responses will be created to allow all electric consumers, from industry to residential, to use energy in a rational manner by cutting energy use at peak times or when power reliability is at risk. Advanced Metering Infrastructure (AMI) will provide real-time monitoring of power usage to consistently inform all consumers of their use and options. Distributed energy generation will allow customers of all scopes to use the generation of energy on their premises to offset their consumption costs, by actually turning meters backward when they generate more electricity than they have demanded or simply providing them a credit for the excess energy in their next bill cycle (a toy billing sketch appears at the end of this article). For the reduction of toxic carbon, the Smart Grid ranks the potential of plug-in electric vehicles (PEVs), including plug-in hybrid electric vehicles (PHEVs), to provide cost-effective clean energy as the main response to this environmental threat. Although the vehicles by themselves will not produce the savings, Smart Grid technology will allow them to realize their fundamental potential. The present idle production capacity of the nation's electric grid could supply 73% of the energy needs of the vehicles on the road with existing power plants. Integrating that idle production would put that power back into the national grid.
The use of electric vehicles would reduce net oil imports by 52%, or about 6.7 million barrels daily, reduce CO2 emissions by 27%, and cut greenhouse gas (GHG) emissions. To achieve this goal, vehicle charging must be done during off-peak hours. These peak-time considerations will apply to electronically controlled appliances including ranges, dishwashers, refrigerators, microwaves, washers, and dryers. The Smart Grid will allow remote control of these devices, using compatible, globally interoperable standards to transmit signals to and receive signals from devices while away from home. The benefits to consumers include the ability to make choices that save money, improve their personalized energy convenience, and impact the environment in a positive way. Up to the present time and for most of us, energy use has been a passive purchase, unclear in exact cost, and confusing to consider. We receive bills. We pay them and hope there is not an outage, especially in extreme weather conditions. Controlling the consumption, distribution, and generation of electricity by using the technologies of the Smart Grid will contribute to national and global environmental protection. If we choose to do nothing, polluting emissions will rise, electric rates will increase substantially, consumers will be forced to pay excessively higher rates with no choice or options, and brownouts and blackouts will become the norm. This is not a future option for America. When the nation fully implements the Smart Grid, it will change and hopefully enhance every aspect of the electric delivery system, from generation to transmission, to distribution, to storage. This implementation will create utility initiatives that will encourage and draw consumers into new patterns of electricity usage. The modernization to the Smart Grid is central to national efforts to improve and increase the reliability of energy efficiency, transition to renewable sources for energy use, reduce greenhouse and carbon pollutants, and provide a sustainable, comfortable, safe environment for future generations. The Smart Grid will have requisite levels of interoperable standards that will enable innovative changes, some yet unknown. This interoperable system will exchange meaningful, actionable information in a safe, efficient, and reliable manner. This system of systems will provide information sharing with flexibility, fidelity, and security to allow our nation to prosper and perform into the future. For more detailed information, please read and/or download our Smart Technology Brochure, our Interoperability Brochure, and our SunView LED Company Overview Brochure found under the Virtual Brochures tab on this website.
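As a footnote to the net-metering description above (meters effectively running backward, or a credit carried to the next bill), the sketch below nets exports against imports within one billing cycle. The 1:1 credit rate and the readings are assumptions, since actual net-metering tariffs vary by jurisdiction:

```python
RATE = 0.15    # $/kWh; assumed identical for consumption and export credit

def monthly_bill(imported_kwh, exported_kwh, rate=RATE):
    """Net exported energy against imports; carry any surplus as a credit."""
    net = sum(imported_kwh) - sum(exported_kwh)
    if net >= 0:
        return net * rate, 0.0          # amount due, no credit
    return 0.0, -net * rate             # nothing due, credit carried forward

imports = [12.0] * 30                   # 360 kWh drawn from the grid over a month
exports = [14.5] * 30                   # 435 kWh sent back from rooftop PV

due, credit = monthly_bill(imports, exports)
print(f"Amount due: ${due:.2f}; credit toward next cycle: ${credit:.2f}")
```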
https://sunviewled.com/smart-lighting/smart-grid/
April 5, 2012 - 'Interacting with Energy' is a play on words at several levels. Our daily interaction manipulating energy must be done with a lot of personal energy, but now involves interaction with our energy source, the smart grid. In addition, the tools of our industry, such as wireless sensors, are now starting to use energy harvesting to interact with energy on a micro scale, and if this is not enough interaction, we are now starting to interact with the building envelope, fenestration, onsite energy generation with renewables, and the list goes on. Please join me in our interaction with energy on so many levels. By Ken Sinclair

Extracted from The Future of Enterprise Energy Management
Mike Putich, director, Climatec – Building Technologies Group

It is not unusual to walk into a building engineering office today and see four to six disparate building systems, all running on their own computers and networks, performing related building functions independently, with none of them able to leverage the combined information to run the building efficiently or holistically. It is not uncommon for a commercial building to have separate systems to control the HVAC building automation system (BAS), Tenant Activity (card access, after-hours billing system), Lighting Control, Fire & Life Safety, Video Surveillance, and Work Order Management. Today building owners, managers and operators are being asked to improve the performance of their assets by lowering operating costs, improving tenant satisfaction and implementing sustainability efforts while being good stewards of the environment. They are being asked to do this in an economic climate that offers limited or no access to capital for improvements, and with limited staff and systems capabilities.

From Brad's article comes the necessity to do all this continuously:

Continuous Optimization
Brad White, principal, SES Consulting Inc.

Why Continuous? Anyone with experience in building automation knows that, from time to time, your system needs a tune-up. Building performance declines over time and, as a consequence, energy use increases. Poor performance can be the result of deficiencies in the original commissioning, broken or miscalibrated sensors, conflicting set points, or manually overridden equipment, to name a few common problems. The standard response to these issues is to embark on a retro-commissioning (RCx) of building systems, identifying and fixing all the issues that have arisen over time. Although RCx can be very effective at reducing energy consumption in a building, the persistence of savings can be poor. In a few years you're likely to find many of the same problems and poor performance that existed before. Breaking out of this cycle is the motivation for Continuous Optimization.

Jack McGowan, industry sage and personal friend, adds the following perspective:

Energy 2.0
Jack McGowan, president, Energy Control Inc.

Open Automated Demand Response (OpenADR) is an important, emerging standard for implementing demand response for commercial, industrial and residential customers. Backed by an impressive list of leading utilities, ISOs and suppliers, the OpenADR 2.0 standard will play an important role in grid optimization. The importance of Smart Grid to buildings has been completely lost for most owners, except those getting paid to participate in Demand Response. Yet Smart Grid, in a broader context, represents an opportunity for new building revenue streams, allowing them to become virtual power plants and energy profit centers.
Jack McGowan, industry sage and personal friend, adds the following perspective:

Energy 2.0
Jack McGowan, president, Energy Control Inc.

Open Automated Demand Response (OpenADR) is an important, emerging standard for implementing demand response for commercial, industrial and residential customers. Backed by an impressive list of leading utilities, ISOs and suppliers, the OpenADR 2.0 standard will play an important role in grid optimization. The importance of Smart Grid to buildings has been completely lost on most owners, except those getting paid to participate in Demand Response. Yet Smart Grid, in a broader context, represents an opportunity for new building revenue streams, allowing buildings to become virtual power plants and energy profit centers. It is about transforming the electricity business model to unlock capital and operating cost benefits for building owners. Energy efficiency and green buildings, along with their respective benefits, have become second nature to facility professionals in the last decade, but Demand Response, and ultimately Smart Grid, can unleash even more benefits. What should building owners know: what is the difference between Demand Response and Smart Grid, why is it happening, how can buildings benefit, and what does it cost to play? Each question is answered here.

By definition, a Smart Grid is an interconnected system of information and communication technologies and of electricity generation, transmission, distribution and end-use technologies, one that enables consumers (in this case, building owners) to manage their usage and choose the most economically efficient offering, while delivery-system reliability and stability are maintained through automation and environmentally optimal generation alternatives, including renewable generation and energy storage. That is a bit of a mouthful, though it gets to the heart of what is underway. But why?

The best way to explain this is to start with a question: what would happen if Alexander Graham Bell and Thomas Edison came back to life tomorrow and observed the industries they were instrumental in creating? If Bell were handed an iPhone™ and asked to make a call, he would not know how to do it. Edison, on the other hand, would be able to explain, in fairly technical detail, how every aspect of today's electric system works. It has not changed in roughly 100 years. Bob Galvin, former Chairman of Motorola and founder of the Galvin Electricity Initiative, puts it another way. Mr. Galvin was instrumental in starting the cell phone industry, and he compares electricity today to telecom in the early 1980s: a monopoly business model, pent-up need for innovation, and no way to unleash entrepreneurial business models.

Speaking of business models, Demand Response (DR) and the OpenADR standard represent a near-term killer app in the buildings space. I know the term killer app has been overused, but getting paid to implement a control strategy is pretty exciting. For those who are new to this topic, the OpenADR standard was developed at Berkeley Labs, and it is in the vanguard of the initial Smart Grid standards that should be mandated by the Federal Energy Regulatory Commission.

I believe it is important that we watch evolving trends in the USA to better understand the necessity of a smart grid change in Canada.
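McGowan's point that getting paid to implement a control strategy is the killer app can be made concrete with a small sketch. This is only a cartoon of the control side of demand response: a real OpenADR 2.0 client (VEN) would exchange standardized XML event payloads with a utility server (VTN), whereas here the event dictionary and the shed strategies are invented stand-ins for that machinery:

```python
# Illustrative only: map an incoming demand-response event to a load-shed
# strategy. The event fields, signal levels and strategies are assumptions,
# not the OpenADR 2.0 schema.

SHED_STRATEGIES = {
    # signal level -> list of (action, description); all invented examples
    1: [("raise_cooling_setpoint", "+1 C on all zones")],
    2: [("raise_cooling_setpoint", "+2 C on all zones"),
        ("dim_lighting", "reduce to 80% output")],
    3: [("raise_cooling_setpoint", "+3 C on all zones"),
        ("dim_lighting", "reduce to 60% output"),
        ("cycle_noncritical_loads", "shut rooftop units in rotation")],
}

def handle_dr_event(event):
    """Apply the shed strategy matching the event's signal level."""
    level = event["signal_level"]
    for action, description in SHED_STRATEGIES.get(level, []):
        # In a real system this would write to the BAS; here we just log.
        print(f"{event['start']}: {action} -> {description}")

handle_dr_event({"start": "2012-07-15T14:00", "signal_level": 2,
                 "duration_minutes": 120})
```

The building's side of the bargain is exactly this kind of pre-agreed, automated response; the standard's job is making the event signal interoperable across utilities and vendors.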
The Green Button
Dr Martin Burns, SGIP Administrator Team

Dynamic pricing is coming. Many of you are already under some kind of time-of-use billing basis with your power provider. Energy management, in its essence, is trading off the comfort of the occupants against the cost of energy. With dynamic pricing, a time-relative determinant of pricing will now become part of the calculus of how to make these trade-offs. The building controls industry has long relied on sub-optimal means of acquiring the actual energy usage of a facility for this purpose. Typically, pulse meters are tapped to gain at least some measure of usage. Some utilities have begun to make detailed usage from time-of-use meters available on various bases. Enter the Green Button.

This initiative establishes a standardized format for the exchange of measurement data within home, commercial, and industrial facilities. The Green Button originated with a White House "call to action". However, Green Button is the result of a remarkable process of voluntary industry collaboration and adoption among stakeholders, with only facilitation by the White House Office of Science and Technology Policy (OSTP), the Department of Energy (DOE), the National Institute of Standards and Technology (NIST), and their creation, the Smart Grid Interoperability Panel (SGIP). The latter is a group of several hundred stakeholder organizations that have come together to coordinate the development and deployment of standards on behalf of the Smart Grid. The Green Button is primarily three things:
• It is a US government policy initiative designed to inspire an ecosystem in the generation and consumption of Energy Usage Information (EUI) in the marketplace;
• It is a "brand" that allows recognition of the availability of this ecosystem and of what it is for and what it can do for the consumer;
• And, last but not least, it is a collection of technologies that allow for the implementation of interoperating products and services, including standards, testing and certification, and reference open-source implementations.
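To see why a standardized feed of Energy Usage Information matters for the comfort-versus-cost calculus above, consider a minimal sketch that prices interval usage under a time-of-use tariff. The XML fragment below is a deliberately simplified stand-in, not the actual Green Button/ESPI schema, and the tariff figures are invented:

```python
# Sketch: read interval usage (a simplified stand-in for Green Button
# data) and price it under an assumed time-of-use tariff.
import xml.etree.ElementTree as ET

SAMPLE = """<usage>
  <interval start="2012-04-05T13:00" kwh="42.0"/>
  <interval start="2012-04-05T19:00" kwh="18.5"/>
</usage>"""

# Assumed TOU tariff: on-peak noon to 18:00, off-peak otherwise ($/kWh).
ON_PEAK, OFF_PEAK = 0.18, 0.07

def price_intervals(xml_text):
    total = 0.0
    for node in ET.fromstring(xml_text).iter("interval"):
        hour = int(node.get("start")[11:13])  # hour from the timestamp
        rate = ON_PEAK if 12 <= hour < 18 else OFF_PEAK
        total += float(node.get("kwh")) * rate
    return total

print(f"Period cost: ${price_intervals(SAMPLE):.2f}")
```

In practice the same loop would run over a month of meter intervals, with the rate lookup coming from the utility's published tariff; the point of a common format is that this code need not change from one utility to the next.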
In yet another article, "Effective Daylight Management through Building Automation" reminds us that in all this energy interaction we must not forget the windows. As you can see from all of the above, your future will depend on your personal interaction with energy.

Ken Sinclair is the publisher of AutomatedBuildings.com and can be reached at [email protected].

https://www.energy-manager.ca/column-interacting-with-energy-1367/
The impacts of Brexit from the European Union on many aspects of life are under the spotlight, writes Jonathan Spencer Jones, content analyst at Engerati. As far as the energy sector goes, Engerati doesn't see any major shift in energy policy. [Brexit and Britain's energy]

As a signatory to COP21, Britain's commitments dictate the general terms of policy on decarbonisation and renewables. There will obviously be a change of relationship with the Energy Union, but even this is unlikely to be too great: for example, interconnections are likely to continue for energy trading and security purposes. Potentially the biggest impact is likely to be on R&D, and strong advocacy will be needed to ensure that funding levels are maintained.

Grid access in Africa

Over 600 million people in Africa currently have no access to a reliable source of grid electricity, and many experts in the energy sector believe that the microgrid will be able to change that. In an interview at African Utility Week, Tony Duarte, Microgrid Manager at ABB Microgrids, discusses how microgrids could be rapidly deployed to bring sources of generation together, balance them and supply rural areas. Putting its money where its mouth is, ABB has commissioned an integrated solar-diesel microgrid installation at its 96,000 square metre Longmeadow facility in Johannesburg, South Africa. The microgrid provides both grid-connected and off-grid functionality to maximise the use of renewable energy and ensure uninterrupted power supply in the case of outages on the main grid. [Innovative microgrid solutions help address a real-world challenge]
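For a feel of what "bringing sources of generation together and balancing them" means in practice, here is a toy merit-order dispatch loop. It is a sketch under assumed capacities and a generic solar-battery-diesel priority order, not a description of ABB's actual Longmeadow controller:

```python
# Toy merit-order dispatch for an islandable solar-diesel microgrid.
# All capacities, the battery, and the dispatch order are illustrative
# assumptions.

BATTERY_CAP_KWH = 500.0
DIESEL_MAX_KW = 800.0

def dispatch(load_kw, solar_kw, soc_kwh, step_h=1.0):
    """Serve load from solar first, then battery, then the genset.
    Returns (diesel_kw, new_soc_kwh, unserved_kw)."""
    residual = max(load_kw - solar_kw, 0.0)
    surplus = max(solar_kw - load_kw, 0.0)

    # Any solar surplus charges the battery, up to its capacity.
    soc_kwh = min(soc_kwh + surplus * step_h, BATTERY_CAP_KWH)

    # The battery discharges against the residual load.
    from_battery = min(residual, soc_kwh / step_h)
    soc_kwh -= from_battery * step_h
    residual -= from_battery

    # The diesel genset covers whatever is left, up to its rating.
    diesel = min(residual, DIESEL_MAX_KW)
    return diesel, soc_kwh, residual - diesel

soc = 100.0  # starting battery state of charge, kWh
for hour, (load, solar) in enumerate([(600, 900), (700, 200), (650, 0)]):
    diesel, soc, unserved = dispatch(load, solar, soc)
    print(f"hour {hour}: diesel {diesel:.0f} kW, "
          f"battery {soc:.0f} kWh, unserved {unserved:.0f} kW")
```

The rural-electrification appeal is that the same loop works whether or not a main grid connection exists; the grid, when present, simply becomes one more source in the merit order.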
Energy mix in the North Sea

Traditionally, oil and gas exploration operators and energy operators have had little in common. But now, in what is said to be a "unique collaboration", the two sets of parties, along with others, have signed a manifesto to cooperate in the North Sea region. The two primary focuses for North Sea energy development are 'smart combinations', in which new technologies such as power-to-gas, hydrogen storage and carbon capture and storage are brought in using decommissioned oil and gas wells, platforms and pipelines that would otherwise have to be removed, and the development of an interconnected North Seas grid. [The North Sea – Gas and wind talk synergies]

Reliance on fossil fuels in Asia-Pacific

Fast-growing energy demand in Asia-Pacific is raising concerns that the region could become unsustainably reliant on fossil fuels. A new report from the Asia-Pacific Energy Research Centre forecasts a 35% jump in energy consumption in the region by 2040, driven by continued poverty reduction and growth of the middle class in emerging APEC economies. However, on current trends 80% of that demand will be met by fossil fuels, led by coal, and more than 10% of the energy supply will need to be imported from outside the region. [Asia-Pacific's growth will lead to unsustainable energy]

APEC is putting in place actions to accelerate sustainable solutions, such as a new network for city executives to share intelligence on efficiency and renewable energy policy development and adoption, drawing on the APEC Low Carbon Model Towns initiative.

https://www.smart-energy.com/regional-news/africa-middle-east/engerati-brexit-grid-north-sea-energy/
Our client has over 30 years' experience assisting governments, communities, and the private sector. A not-for-profit, independent organization based in the USA, it has over 5,000 employees with global experience implementing international development projects in energy and other disciplines. Around the globe, the organization is working to develop and implement low-emissions energy technology alternatives, raise private financing for energy infrastructure, bring efficiency to utilities, work with governments on legal and regulatory reforms to alleviate bottlenecks in energy generation and consumption, and increase access to green energy to sustain economic growth and reduce poverty.

We are hiring a Smart Grid Specialist on the funded Task Order Technical Support for the South Asia Regional Energy Partnership (SAREP) activity. This position will be based in New Delhi, India.

Responsibilities of the Smart Grid Specialist include, but are not limited to:
- Optimize the business by reducing utilities' operating costs through asset management and improving revenues through new customer services.
- Support integration of intermittent renewable energy through combined use of grid automation capabilities, data analytics, etc.
- Enable and innovate customer solutions and improve customer experience through interventions such as AMI data and demand-side infrastructure, including distributed generation and electric vehicles (see the sketch after this list).
- Apply state-of-the-art technologies such as AI and blockchain to improve internal processes and business performance and to generate new revenue from data-driven services.
- Recommend solutions to ensure the security of all computers and information technology used to enable smart grid capabilities.
- Advise on grid automation by increasing the capabilities of existing SCADA and substation control and monitoring, and by implementing ADMS.
- Support preventive and proactive maintenance of distribution assets by developing an asset health centre and new applications for the distribution sector, such as drone support.
- Make the case to utilities and regulators for pilot projects and/or commercial deployment based on evidence and technical review.
- Design specifications for adoption and testing of smart grid technologies/products.
- Prepare bidding frameworks and tender documents, and support bidding processes for pilot projects/commercial deployment.
- Prepare and author reports, research papers and articles related to smart grids, advanced technologies for the power sector, IT/OT convergence, IoT, etc.
- Conduct trainings and capacity development workshops for utilities, regulators, etc.
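As a rough illustration of the AMI interval-data analysis referenced above (the data shape and figures are invented for the example):

```python
# Sketch: peak demand and load factor for a feeder from AMI interval reads.
# The data layout (hourly kW readings) is an assumption for illustration.

def load_profile_stats(interval_kw):
    """Return (peak_kw, load_factor) for a list of demand readings."""
    peak = max(interval_kw)
    average = sum(interval_kw) / len(interval_kw)
    # A load factor near 1.0 means a flat profile; a low value points to
    # peaks that storage, demand response or tariff design could shave.
    return peak, average / peak

feeder = [310, 295, 420, 510, 690, 640, 480, 350]  # kW, hourly AMI reads
peak, lf = load_profile_stats(feeder)
print(f"peak demand {peak} kW, load factor {lf:.2f}")
```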
Qualifications:
- Master's degree in Electrical/Electronics/Computer Science/Instrumentation Engineering and 6 years of relevant experience, or a bachelor's degree in Electrical Engineering and 8 years of experience.
- Experience developing, deploying or testing smart grid technologies is highly desirable.
- Excellent interpersonal, oral and written communication skills, including the ability to establish and maintain effective working relationships with co-workers, supervisors, technical staff and clients.
- Ability to collaborate effectively with diverse team members.
- Motivated and able to work in a challenging and fast-paced environment.
- Demonstrated network among smart grid technology providers and technical institutions.
- Demonstrated network in the South Asian energy sector, particularly utilities and regulators.
- Good understanding of relevant policies and regulations for smart grids and the transmission and distribution sector.
- Ability to effectively prioritize workload, self-manage and complete tasks to best meet project needs.
https://www.engineeristic.com/j/smart-grid-specialist-non-profit-firm-8-10-yrs-31839.html?ref=cl