Story No.:
4071910
Restrictions:
Duration: 00:02:21:00
Source:
AP TELEVISION
Dateline: Karachi - 18 December 2016
Date: 12/18/2016 02:07 PM
Karachi - 18 December 2016
1. Various of a local cinema in Karachi, with billboards of Pakistani film and Star Wars, an American film
2. Tilt up of gate of cinema with people entering, and then showing the billboards of films being screened
3. Various of people buying tickets for the film
4. Various of people looking at posters of films inside the cinema
5. SOUNDBITE (English) Sami Sani, Actor:
"People want to watch Indian movies because somehow our culture matches. Somehow, things, or whatever they are trying to do, we do here also. So I don't want to feel ashamed while watching Indian movie, or that would mean I am not patriotic. No, I'm a Pakistani and I'm a pure patriot. But, definitely, I myself want to watch good movies, my family wants to watch good movies, we want good entertainment. So if I'm getting entertainment from Indian movie, why shouldn't I go and watch an Indian movie?"
6. Sami Sani getting photographs with fans
7. SOUNDBITE (Urdu) Javed Iqbal, Film enthusiast:
"I believe that Indian films shouldn't be screened in Pakistan. As patriotic Pakistanis, we should be able to distinguish friend from foe. Don't Indian movies show anti-Pakistani content? Their dramas also show anti-Pakistani stuff. How could we accept India as our friend? I'm totally against Indian movies and dramas being shown here, and I'd call for a complete boycott."
8. Various of local cinema with very few patrons
Cinemas in Pakistan will resume the screening of Indian movies from Monday, following a self-imposed ban.
Representatives of major cinema houses said that they had only suspended the screening of Indian films but had not completely banned them.
They added that the resumption is important to ensure cinemas remain financially viable.
The first film to be screened will be actor Nawazuddin Siddiqui's Freaky Ali.
On September 30, Pakistani cinema owners announced their decision to indefinitely cease the screening of all Indian films, as a protest against the ban on Pakistani artists in India and to show solidarity with the armed forces of Pakistan.
The response to the decision of cinema owners was mixed. Filmmakers and actors welcomed the decision, saying that art transcends boundaries.
Indian movies returned to Pakistani cinema houses in 2008 after a 43-year-long hiatus imposed during the 1965 war. | http://www.aparchive.com/metadata/Any/e23f421fb58c6bd956b468fadbf81bbf |
Home Movie Weekend: Home Movie Screenings
Amateur home movies and works by artists who incorporate and transform home movies will be presented in this eclectic, lively program in the Bartos Screening Room. Highlights from the Saturday program will be shown, along with a curated program of artist works. Details to be announced.
Presented by the Queens Library, Queens Memory Project, and the Museum of the Moving Image. | https://www.nycarchivists.org/event-3094694 |
AMMAN — The fifth edition of the Women’s Film Week opens on Wednesday on the occasion of International Women’s Day, featuring seven movies from seven countries, according to the Royal Film Commission (RFC).
The seven movies will be screened at the Rainbow Theatre on Jabal Amman’s Rainbow Street, with a special focus on women and women’s empowerment in sports, politics and other fields, Marian Nakho, the RFC media and communication coordinator, said.
“UN Women has arranged for the screening of these movies in Amman in coordination with several embassies. These movies — including documentary or narrative features — have either won awards or participated in global film festivals,” she told The Jordan Times on Tuesday.
Nakho noted that the movies selected to be screened are not only women-centred, but are also directed by women.
“The RFC is a responsible partner in any cinematic event especially one with a theme like this,” the RFC official added.
The four-day event will commence with the screening of “Eufrosina’s Revolution”.
The documentary portrays the personal journey and social awakening of Eufrosina Cruz Mendoza, a young indigenous woman from Mexico fighting gender inequality, according to an RFC statement.
Eufrosina was denied the right to be mayor of her community only because she was a woman; this led her to fight for gender equality in indigenous communities, challenging the political leaders in the area, the statement said.
The film week wraps up on Saturday with a screening of the Spanish narrative feature “The Milk of Sorrow” and the Japanese documentary “Noriben - The Recipe for Fortune”.
More information on the event is available on the RFC’s website via the link (http://www.film.jo/Photos/Files/7637f869-79c5-4a42-b4ee-78a6291f8f3a.pdf). | http://www.jordantimes.com/news/local/women%E2%80%99s-film-week-opens-today |
Time Schedule:
Sudhir Mahadevan
C LIT 315
Seattle Campus
Examines the cinema of a particular national, ethnic or cultural group, with films typically shown in the original language with subtitles. Topics reflect themes and trends in the national cinema being studied.
Class description
PLEASE NOTE THAT SCREENING SESSIONS FOR THIS CLASS MAY BE LONGER BECAUSE OF LENGTH OF FILMS. STUDENTS HAVE THE OPTION OF WATCHING THE MOVIES ON THEIR OWN.
Introduction to Indian cinema covering a range of movies: from recent independent movies to big budget Bollywood blockbusters. Movies range in decade from the 1950s to the present. Students will learn the basics of film analysis and incorporate those basics into their written work. Work includes attendance, viewing all required films, keeping up with readings, short quizzes and papers, plus a longer term-end essay paper. | http://www.washington.edu/students/icd/S/complit/315sudhirm.html |
He's baaaaaaack!
"Jaws" will return to the big screen in select movie theaters, including more than 10 in the Chicago area, next month for the film's 40th anniversary.
Steven Spielberg's 1975 thriller will be shown at various AMC and Cinemark theaters June 21 and June 24 at 2 p.m. and 7 p.m.
The screening, which will include an introduction by Turner Classic Movies host Ben Mankiewicz, will be hosted by Fathom Events, Turner Classic Movies and Universal Pictures Home Entertainment.
"Jaws" was released June 20, 1975. At the time, Tribune film critic Gene Siskel wrote that "'Jaws' is a qualified success. The picture does develop a sense of fear and trembling in the audience. We are afraid of the mammoth Great White shark attacking persons who live on fictional Amity Island along the New England coast."
The movie earned $7 million in its opening weekend and went on to gross more than $470 million worldwide. The shark tale has been credited with starting the trend of the summer blockbuster. | https://www.chicagotribune.com/entertainment/ct-jaws-to-be-screened-in-chicago-theaters-for-40th-anniversary-20150529-story.html |
An urban watershed is a discrete and complex system in which a diverse set of factors governs its quality and health. Soil erosion by water is the dominant factor determining watershed quality, and is considered one of the most significant forms of land degradation affecting the sustained productivity of land use. The principal aim of this paper is to utilise spatially based soil erosion information to assess land suitability at the watershed level. The specific aim is three-fold: (i) to develop GIS-based techniques for the parameterisation of a soil erosion model designed for use in large-scale assessment; (ii) to assess and map the spatial distribution of the average annual rate of soil loss in the watershed; (iii) to employ related concepts such as soil loss tolerance to determine land suitability at the watershed level. An analytical procedure is used to analyse the urban watershed of the Tallo River, in South Sulawesi, Indonesia, with a total area of 43,422 ha. The procedure is executed using RUSLE (Revised Universal Soil Loss Equation) in a GIS environment, utilising available information for the region (including climate, soil, slope, and land use and land conservation practices) and with the assistance of ground surveys. The results indicate that around 56.5% of the area experiences annual soil loss of less than 1 ton/ha/year, while erosion rates of more than 25 ton/ha/year cover 8.9% of the area. Due to good ground cover in forested land, most of the sloping areas have actual soil losses of 1-5 ton/ha/year. This study reveals that areas categorized as high risk, where only forest cover is allowed, make up 9.4% of the area, while those at very low risk cover 5.4%. Most of the study region (around 84%) experiences moderate or low erosion risk and is suitable for cropping with special management practices (CS) + perennial crops (PC) + grass (GR) + forest (FR). This study suggests that the outputs of this modeling procedure can be used for the identification of land management units based on degradation levels, as well as the most suitable land use to be practiced on individual land units on a sustainable basis. | https://ccsenet.org/journal/index.php/mas/article/view/35833 |
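The abstract names RUSLE but does not restate it; the model estimates average annual soil loss per cell as A = R × K × LS × C × P, where R is rainfall erosivity, K soil erodibility, LS the slope length-steepness factor, C cover management and P support practice. The sketch below shows, under that standard formulation, how a raster-style calculation and the soil-loss bands mentioned in the abstract could be computed; the factor values, band grouping and function names are illustrative and are not taken from the paper.

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss per cell (ton/ha/year): A = R * K * LS * C * P."""
    return R * K * LS * C * P

def classify_erosion_risk(A):
    """Bin per-cell soil loss into bands similar to those discussed in the abstract."""
    bins = [1, 5, 25]  # ton/ha/year boundaries mentioned in the text
    labels = np.array(["very low (<1)", "low (1-5)", "moderate (5-25)", "high (>25)"])
    return labels[np.digitize(A, bins)]

# Toy 2x2 factor grids; real inputs would come from climate, soil, DEM and land-use layers.
R  = np.array([[1200.0, 1500.0], [1800.0, 2000.0]])  # rainfall erosivity
K  = np.array([[0.02, 0.03], [0.04, 0.05]])          # soil erodibility
LS = np.array([[0.5, 1.2], [2.5, 4.0]])              # slope length-steepness
C  = np.array([[0.01, 0.05], [0.10, 0.30]])          # cover management
P  = np.array([[1.0, 1.0], [0.8, 0.6]])              # support practices

A = rusle_soil_loss(R, K, LS, C, P)
print(A)                          # soil loss per cell, ton/ha/year
print(classify_erosion_risk(A))   # per-cell risk band
```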
Abstract
Worldwide, karst terrain is highly sensitive to human activity due to extensive areas of thin soil and rapid water flow to groundwater. In the southwest China karst region, poor farming decisions can promote land degradation and reduce water quality with negative consequences for livelihoods in a region where farmers already suffer from the highest national poverty rates. Targeting management advice to farmers through knowledge exchange and decision support can help alleviate land use impacts on the karst environment but first requires baseline knowledge of how local farming communities understand and approach soil and water management. We used a catchment-wide survey (n = 312 individuals in seven villages) to investigate differences in environmental awareness, catchment understanding, and farming practices amongst farmers and community leaders in a typical karst catchment in southwest China. Age, gender and village of residence of farmers showed an association with the type of challenges perceived to be most serious. Access to labour, issues of water quantity and/or quality affecting irrigation, and fertiliser costs were recognised as being particularly problematic for the viability of farming. Sources of information used to learn about farming practices, the environment and fertiliser use were more diverse for younger (< 40 yr old) farmers and levels of training and acquired knowledge regarding land management practices varied significantly between villages in the catchment. The identification of significant associations between villages or sample demographics, and a variety of questions designed to understand farmer attitudes and their environmental awareness, provide clearer insight upon which knowledge exchange and training programmes can be co-designed with catchment stakeholders. This has the potential to lead to improved farming practices with co-benefits for farmers and the environment; helping sustain ecosystem services for impoverished communities in fragile karst ecosystems. | http://eprints.gla.ac.uk/211035/ |
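The abstract reports significant associations between respondent demographics (age, gender, village) and the challenges perceived as most serious, without naming the statistical test used. A chi-square test of independence on a cross-tabulation is one common way to check such associations in categorical survey data; the sketch below is purely illustrative, with invented column names and responses rather than the study's dataset or its actual method.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey extract: one row per respondent (not the study's data).
survey = pd.DataFrame({
    "village": ["A", "A", "B", "B", "C", "C", "A", "B", "C", "C"],
    "age_group": ["<40", ">=40", "<40", ">=40", "<40", ">=40", "<40", "<40", ">=40", ">=40"],
    "top_challenge": ["labour", "water", "fertiliser cost", "water", "labour",
                      "fertiliser cost", "water", "labour", "water", "fertiliser cost"],
})

# Cross-tabulate a demographic variable against the perceived challenge and test independence.
table = pd.crosstab(survey["village"], survey["top_challenge"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```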
The UN Convention to Combat Desertification (UNCCD), the Turkish Ministry of Forests and Water Affairs (MoFWA) and the World Business Council for Sustainable Development (WBCSD) have formed an alliance to encourage public-private partnerships to combat land degradation in the months leading up to a key UNCCD conference.
The alliance is anticipating the UNCCD’s 12th Conference of the Parties (COP12), scheduled to take place in Ankara, Turkey, 12-23 October. The COP is the convention's main decision-making body, and its members plan to discuss and make decisions on the convention's implementation.
Land degradation refers to deterioration in the quality of land, its topsoil, vegetation or water resources, usually caused by excessive or inappropriate use of soil. As a result, less land can be cultivated and soil erosion is more prevalent.
UNCCD parties have adopted a goal of global Land Degradation Neutrality (LDN) and are working on developing an action plan for governments. LDN occurs when the area of productive land remains stable or increases. During the UNCCD COP12, Parties of the Convention will consider setting a target of reaching global LDN by 2030.
The WBCSD plans to establish a dialogue with companies that voluntarily commit to reversing land degradation. It aims to work with them, give them advice on their strategies and identify opportunities for them to introduce sustainable land management business practices and join large scale land rehabilitation efforts, according to Simone Quatrini, coordinator of private sector investments in land, in the UNCCD’s Global Mechanism body.
“The risks and opportunities of land degradation are often underestimated by companies,” says the WBCSD. “There is a strong business case for action, as land is an essential asset for many companies.”
Land-degradation-induced changes can have a direct impact on the cost structure and profitability of a company, notes Violaine Berger, director of the WBCSD’s Ecosystems and Landscape Management Cluster. Poor land conditions can mean fewer and lower quality raw materials, impacting price levels; less water and greater risks of landslides and floods; and local health problems and poverty, forcing workers to move on. Too much unusable land can eliminate jobs and lead to social unrest. “The risks are largely matters of stability,” adds Louise Baker, coordinator of external relations and policy for the UNCCD Secretariat.
“As competition for access to productive assets heats up, supply chains and even entire markets are likely to become more volatile. Grain, plant and animal commodities will become increasingly scarce and costly. Land and ecosystem degradation means many areas are more vulnerable to natural disasters."
The WBCSD wants to educate businesses about preventing further land degradation by implementing sustainable land management practices, Berger says. Degraded land can be used for certain cultivated crops, such as sugar cane, soy and palm oil. Ways to limit degradation include growing trees and crops at the same time, so the trees provide some shade, planting a tree each time one is cut down and being more selective about the trees that are harvested.
“We are targeting all [business] sectors, as land is an important asset for all companies, not only for the companies from the agriculture and forestry sector, but also for the ones that have indirect links to land through their value/supply chains, such as insurance, retail, mining and oil and gas companies,” says Berger.
Companies often have difficulty understanding how land degradation can affect their business, she notes. “This is simply not on their radar screen, as they do not make the connection with water scarcity, increased vulnerability to extreme weather-related events, food insecurity, biodiversity loss, or migration,” Berger says. | http://www.ethicalcorp.com/environment/ethicswatch-desertification-fighting-save-fat-land |
Land degradation is an increasingly severe global environmental and development problem. Each year an additional 12 million hectares of agricultural land is degraded and soil erosion amounts to an estimated 24 billion tons (3 tons per capita). The manifestations of land degradation are diverse and context-specific, but are always characterised by the degradation of soil, vegetation and/or water resources – predominantly through unsustainable forms of land use. This results in a massive loss of ecosystem services, with the reduction in agricultural productivity being a main concern. The increasing degradation of land and soils is therefore a major threat to food security and the resilience of rural communities while contributing to climate change and biodiversity loss. In total, the estimated annual costs of land degradation worldwide amount to 400 billion US dollars.
A neglected problem
Irrespective of the fundamental challenge that land degradation poses to rural development in many parts of the world, awareness among the public and policy-makers is generally low. Soil continues to be a neglected resource whose degradation usually takes place slowly and only becomes visible at a late stage. Moreover, land and soil are often perceived as private property rather than public goods, while political responsibility cuts across the agricultural and environmental ministries. Whereas the agricultural sector tends to perceive soil fertility as a mere function of input supply, the environmental sector largely reduces land degradation to land cover change, with a focus on deforestation.
At the international level, the UN Convention to Combat Desertification (UNCCD) is the only legally binding international agreement to address land and soil degradation. Despite being one of the three Rio Conventions, the UNCCD has had limited actual influence in the past. This may be explained by its formally restricted mandate on land degradation in drylands, i.e. desertification, and a regional focus on Africa. More importantly though, the convention lacks a clear and quantifiable target.
Awareness of land degradation is growing
However, attention has been growing over the last few years. A number of international initiatives (such as the Global Soil Week, the Food and Agriculture Organization’s Global Soil Partnership or the Economics of Land Degradation Initiative) as well as several high-ranking scientific assessments have raised the awareness of decision-makers. A particularly important step was the integration of land and soil degradation into Agenda 2030. SDG target 15.3 stipulates to “combat desertification, restore degraded land and soil, including land affected by desertification, drought and floods, and strive to achieve a land degradation-neutral world by 2030”. This target started to gain significant political momentum when the 12th Conference of the Parties to the UNCCD decided to make LDN the central objective for the convention in 2015.
What is land degradation neutrality?
Land degradation neutrality (LDN) is first of all an aspirational target. Similar to the role of the two degree target in global climate policy, LDN serves as a common overarching goal to address a global environmental problem, giving orientation to the UNCCD process and providing a joint vision for the often fragmented strategies to address land degradation. Constituting benchmarks to which countries and the international community can be held politically accountable, targets aim to eventually spur action.
Secondly, LDN is a concept, defined as “a state whereby the amount and quality of land resources necessary to support ecosystem functions […] remain stable or increase”. Thus, LDN is achieved if there is no net loss (or a gain) of land resources compared to a baseline (e.g. 2015). Such a balance can be achieved through avoiding, reducing and reversing land degradation. The LDN response hierarchy (see Figure) underlines the need to prioritise the avoidance of land degradation while making use of the large areas of degraded land that hold potential for restoration.
Thirdly, LDN is a monitoring approach that allows for tracking progress on the implementation of LDN targets (see Box at the end of the article).
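Because neutrality is defined against a baseline, the underlying check is simple bookkeeping: land that has become degraded since the baseline year must be balanced by at least as much land where degradation has been avoided, reduced or reversed. A minimal sketch of that balance, with invented figures, is given below.

```python
def ldn_balance(newly_degraded_ha: float, restored_ha: float) -> float:
    """Net change in non-degraded land since the baseline (positive means a net gain)."""
    return restored_ha - newly_degraded_ha

def is_neutral(newly_degraded_ha: float, restored_ha: float) -> bool:
    """LDN is met when there is no net loss of land resources relative to the baseline."""
    return ldn_balance(newly_degraded_ha, restored_ha) >= 0

# Illustrative country figures: hectares degraded vs. restored since a 2015 baseline.
print(ldn_balance(120_000, 95_000))  # -25000 ha: net loss, target not met
print(is_neutral(120_000, 95_000))   # False
print(is_neutral(80_000, 100_000))   # True: gains outweigh new degradation
```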
Translating a global goal into national ambitions: LDN target setting
Based on these considerations, the UNCCD has started an LDN target-setting programme that aims to bring the global goal of LDN down to the country level. Since its start in 2015, 115 countries have joined the programme with the objective to formulate voluntary national targets on reducing land degradation and rehabilitating degraded land. So far, 60 countries have set LDN targets, some with rather general goals for achieving a state of no net loss in 2030 or at an earlier or later date and others with specific quantitative targets. For example, Senegal aims to annually improve 5 per cent of the land under degradation until 2030, while Namibia has committed to reduce bush encroachment on 1.9 million hectares by 2040 (see also the article "Measuring land degradation needs to be done from the ground up").
Such ambitions may overlap with other already existing ones for forest and landscape restoration (AFR 100, 20x20 etc.) or climate action (Nationally Determined Contributions). The LDN target-setting programme explicitly encourages such linkages as they provide leverage for political and financial support.
Different pathways for implementing LDN
With a growing number of countries having set LDN targets, the challenge increasingly becomes one of implementation. Bold targets need to be translated into policies and projects that actually improve the way land is managed on the ground. There are two main approaches for countries to implement LDN targets. One is policy-oriented and aims to mainstream LDN into existing land use regulation. In this respect, the LDN mitigation hierarchy (avoid, reduce, reverse) can serve as a guiding principle for land use planning. The other implementation pathway is more project-oriented aiming at the roll-out of additional programmes for sustainable land management and rehabilitation in degradation hotspots. Such strategies are more concrete and help to achieve tangible results, although they are also more limited in scope and may not address the need for policy reform. A combination of policy and project-oriented approaches is clearly best suited to successfully implement LDN targets.
Obviously, many existing programmes and projects already contribute to achieving national LDN targets, particularly those working on landscape restoration, agroforestry, watershed management, soil rehabilitation or erosion control. They should be made visible and may help to identify scalable best practices and increase the ambitiousness of LDN targets. An LDN target a country has set can be an opportunity to advance necessary reforms of policy and planning instruments as well as to foster the often lacking co-operation between agricultural and environmental authorities.
Promising examples and obstacles
Experience from countries following an LDN approach is diverse. The number of countries participating in the LDN target setting programme, now including global players such as China, India and Brazil, clearly exceeds expectations. However, the political relevance of LDN targets varies. While some countries officially endorsed their targets, others consider them to be more informal. Targets formulated by environment ministries without the participation of their agricultural counterparts are a recurrent obstacle.
LDN explicitly calls for cross-sectoral collaboration, and most targets can only be implemented by improving agricultural practices, incentives and advisory services. A promising example in this regard is Costa Rica, which already passed a directive that requires all agricultural and environmental policies and plans to integrate LDN. In Benin, the LDN process has triggered a continuous inter-ministerial dialogue on sustainable soil management, supported by a bilateral soil rehabilitation project that provides best-practice examples.
Many countries follow a pragmatic approach and use their recently set LDN targets as an argument to access additional funding for project proposals. In fact, the Global Environmental Facility (GEF) recently made LDN the cornerstone of its focal area on land degradation and significantly increased respective funding volumes. A number of GEF financed LDN projects have already been approved (e.g. in Georgia, Lebanon, Namibia and Turkey), usually combining on-the-ground implementation with activities to integrate LDN elements in policy processes. Another emerging funding opportunity is the recently launched LDN Fund that aims to mobilise private sector investments for profit-oriented projects with high environmental and social benefits.
Summing up...
Undoubtedly, LDN has given the global agenda on land degradation a new boost. It provides a joint vision and monitoring approach for the fragmented policy field of land and soil degradation and encourages a significant number of countries to put these topics higher on the political agenda. This offers an opportunity to advance action against land and soil degradation, reform respective policies and mobilise additional funding. A key aspect is the flexibility of the LDN process, which gives countries sufficient freedom to choose their targets, implementation paths and monitoring indicators according to their specific circumstances.
MONITORING LAND DEGRADATION NEUTRALITY
What is the status of land degradation and land rehabilitation worldwide? Has it improved or worsened? Where are degradation hotspots and anticipated losses located? Where should a country plan measures to avoid, reduce, or reverse land degradation?
To answer these questions, decision-makers need spatial data. Freely available satellite imagery in combination with in-situ observations provide reliable data with global coverage and high repetition rates, enabling retrospective and current analysis. Most countries, however, are not yet exploiting the potential of data for monitoring, reporting and planning purposes. Therefore, the UNCCD, together with its partners, supports all country parties by providing readily processed data to each country as well as free tools for visualising and processing their own data and by building capacities through regional workshops.
Data for reporting
In 2018, parties to the UNCCD will report for the first time on the following three agreed quantitative indicators in a uniform approach that allows the spatially explicit estimation of changes in land degradation and restoration:
• Land productivity
• Land cover change
• Soil organic carbon (SOC)
The three indicators are a minimal consensus to quantify land degradation. From a scientific point of view, more sophisticated approaches exist to estimate land degradation. For monitoring, however, an approach was needed that is (a) accepted by all country parties and (b) operational at global level. That this consensus was reached and that monitoring is now operational is a huge step forward for the Convention. Countries are explicitly invited to add country-specific indicators and national data where needed.
Their choices on data will always be a compromise between (a) global comparability and (b) national relevance to inform national decision-making on resource management and land use planning.
The same indicators used for the UNCCD reporting process will also serve for reporting on SDG 15.3 on LDN, avoiding duplication of efforts; however, some countries are still struggling to align the two reporting processes. The required information flow between UNCCD national focal points and national statistical offices as well as the acceptance of geospatial data by statisticians can be challenging.
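The box lists the three indicators but does not spell out how they are combined into a single estimate of degraded area. A commonly used convention, assumed here purely for illustration (national methods may differ), is a 'one out, all out' rule in which a land unit counts as degraded if any one sub-indicator shows a significant decline.

```python
from dataclasses import dataclass

@dataclass
class ParcelChange:
    productivity_declined: bool   # land productivity shows a significant negative trend
    negative_cover_change: bool   # land cover change classed as degradation (e.g. forest to artificial)
    soc_declined: bool            # soil organic carbon stock significantly reduced

def is_degraded(p: ParcelChange) -> bool:
    """Illustrative 'one out, all out' combination: decline in any sub-indicator flags the parcel."""
    return p.productivity_declined or p.negative_cover_change or p.soc_declined

parcels = [
    ParcelChange(False, False, False),  # stable or improving
    ParcelChange(True, False, False),   # productivity decline alone is enough to flag
    ParcelChange(False, True, True),    # cover and SOC decline
]
degraded_share = sum(is_degraded(p) for p in parcels) / len(parcels)
print(f"Share of parcels flagged as degraded: {degraded_share:.0%}")  # 67%
```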
Data for decision-making
Besides reporting, the data will serve as a basis for policy-making by informing land use planning with the aim to optimise the location of interventions and the type of interventions. Namibia, for example, used national data on the three indicators plus bush encroachment as a country-specific indicator to inform the integrated regional land use planning (IRLUP) process. Data on bush encroachment was especially useful, while the interpretation of SOC data proved difficult. To ensure maximum exploitation of the potential of the data for planning processes, capacity building should target both reporting officers and technical staff for land use planning.
Synergies and inter-sectoral co-operation
The three indicators – especially land cover change – are also relevant for reporting and decision-support for other SDGs and international agreements and target systems such as AFR100, the Sendai framework for disaster risk reduction, or Nationally Determined Contributions (NDCs). Inter-sectoral co-operation is thus of utmost importance to maximising synergies and avoiding duplication of efforts for monitoring, assessment, planning and implementation to move towards a land degradation neutral world.
Dr Alexander Erlewein is advisor on land and soil degradation at GIZ. He previously worked for the UNCCD secretariat.
Antje Hecheltjen is advisor on land degradation and remote sensing at GIZ. She previously worked for UN-SPIDER and as a consultant for UNCCD. | https://www.rural21.com/english/a-closer-look-at/detail/article/land-degradation-neutrality-a-new-impetus-for-addressing-the-degradation-of-land-and-soils.html |
The Consolidated Rental Car (ConRAC) facility for the John Glenn Columbus International Airport (CMH) was designed to be developed in phases to accommodate the airport’s long-range plan to grow from six million to more than 10 million annual passengers over the next 25 years. The new ConRAC is located directly across the street from the future terminal and will replace an existing deficient operation with a new 1,200,000-square-foot facility with more than 2,600 vehicles contained within a multi-level ready return garage, quick-turnaround area (QTA) and idle vehicle storage areas. The ConRAC is the first phase of the development of the new mid-field passenger terminal program, which will include a new 5,400 space public parking garage, ground transportation center (GTC) and mixed-commercial uses linked to the passenger terminal via an elevated pedestrian skybridge spanning the main entrance road to the airport. TranSystems serves as Architect of Record and lead structural and civil engineer for the project. | https://www.transystems.com/our-projects/columbus-rental-car-facility/ |
For the city residents, who travel every day to nearby towns for their work, parking their two-wheelers under the hot sun has been a pain for years. The Corporation parking lot adjacent to the Mattuthavani integrated bus stand has shelters that can accommodate not even 10 per cent of the total vehicles parked there.
In the last few days, they have had an added problem, that of inadequate space at the parking lot, where hundreds of vehicles come every day. While those manning the parking lot claim that the Corporation authorities had taken back a big portion of the land used for parking vehicles, a Corporation official said that they had only asked the contractor to leave the land which he had encroached upon.
In this confusion, the commuters often find a "No space" board displayed at the entrance of the parking lot. "These days I ride my bike from my home hoping that I should be lucky enough to find a place to squeeze my bike before catching the bus," said R. Srinivasan, a resident of Kalai Nagar. On some days, he has had to ride back into the city to park his vehicle at his friend's residence and take a city bus to reach the bus stand.
“It is understandable that the Corporation cannot expand the parking lot at Arapalayam bus stand for want of space. But, here they have got lot of land adjacent to it and should allocate more space to match the number of vehicles coming there,” he said.
Corporation sources said that the parking lot was one acre of land given on annual lease to the contractor. “There is another 10 acre land vacant adjacent to it and easily another one acre could be given to the contractor,” the official said. Another regular user of the parking lot, S. Saravanan, complains of lack of basic amenities there. “Rain or shine our vehicles are parked in an open space. Many a time, the engines do not start after good showers. While the men there want us to use the centre stand of the vehicle, it is not possible often because of the loose earth. Vehicles fall one over the other causing scratches and breaking of indicator lights,” he said.
He wanted the officials to provide good shelter and concrete flooring. Either the space should be expanded or multi-level parking like the one at the railway junction should be considered, he said.
Manager of the parking lot, M. Navaneethakrishnan, expressed helplessness. He said that the Corporation, which is the owner of the parking lot, has not provided adequate shelters for the vehicles. “We have put up 12 shelters at our cost. Since the contract changes every year, we cannot afford to put up the shelter for the entire area with our resources,” he said.
A casual count of the vehicles revealed that at least 300 motorbikes were parked in each of the 10 rows, and most of the 3,000 vehicles were left under the direct rays of the sun. "People complain that their costly vehicles were being ruined due to lack of shelter even after paying Rs. 3 for every 12 hours," Mr. Navaneethakrishnan said. Often the air pressure in the tubes goes down and the petrol evaporates due to continuous exposure to the hot sun for more than two or three days.
“So, we started to plant saplings in the nearby areas to create shade. We allowed people to park their vehicles there. But, suddenly corporation officials asked us to vacate the land. People now think that we were not allowing them to park their vehicles in the shade,” Mr. Navaneethakrishnan said.
After losing the space that could accommodate some 500 motorbikes, parking has become haphazard these days. “People park their vehicles every where, especially on the pathway blocking other vehicles which often result in the users complaining,” he said.
When contacted, the Mayor, V.V. Rajan Chellappa, said that additional space would be allocated for parking of vehicles. “If necessary, it would be maintained by the Corporation officials themselves. We will inspect it and take necessary action for the benefit of the city residents,” he said. | http://www.thehindu.com/news/cities/Madurai/no-space-at-mattuthavani-parking-lot/article2888011.ece |
Garages located throughout Norwich and a small number of lockable parking bays are available to rent by council tenants, Norwich residents and non-city residents and commercial users.
A parking bay is a permanent space that has a lockable post to prevent other vehicles using it. We only have a limited number of these and they are sited within the city centre.
Garages are only to be used to park motorised vehicles in and not to be let as storage or workshop facilities.
Find a garage near you
You can search My Norwich by postcode for garages where you live (look for the Garages to rent tab).
Alternatively, download a list of garages to rent.
Garages are allocated according to the following order of priorities: | https://www.norwich.gov.uk/info/20004/housing/1218/garages_to_rent |
The data on 402 schemes in Kent described in the previous chapter suggests a high level of dissatisfaction with the level of parking on new estates in the county despite a notional surplus of parking spaces compared to car ownership. In order to understand what this means on the ground, we have looked in more detail at six case studies. Edinburgh University developed a methodology to select these case studies from the wider data set. They did so by focussing on West Kent, excluding schemes with a high proportion of flats, and focussing on places where there were high levels of dissatisfaction with the parking situation.
Each of the case studies below details the findings of the survey carried out by Kent County Council. The average parking satisfaction rating for the schemes is -83%. This is based on a question asking residents to rate ease of parking as 'very bad' (-100), 'bad' (-50), 'neither' (0), 'good' (50) or 'very good' (100), and taking an average of responses. It means that the majority of people rated parking as very bad.
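The rating described above is simply a weighted average of the response values. The short sketch below reproduces that arithmetic with an invented distribution of responses chosen to land near the reported figure; it is not the actual Kent survey data.

```python
# Response values defined in the survey question
values = {"very bad": -100, "bad": -50, "neither": 0, "good": 50, "very good": 100}

# Hypothetical response counts for one estate (invented, not the Kent County Council data)
counts = {"very bad": 75, "bad": 19, "neither": 4, "good": 1, "very good": 1}

total = sum(counts.values())
score = sum(values[r] * n for r, n in counts.items()) / total
print(f"Parking satisfaction rating: {score:.0f}%")  # -83%
```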
The six case studies, shown on the plan below, include two schemes on the edge of the London Conurbation in Dartford and Gravesend. Two further schemes are in the new village of Kings Hill south of Malling and the final two are on the outskirts of Maidstone.
We undertook site surveys of each of the estates in September 2013 early on a Saturday morning. It had become clear from an earlier visit that we needed to wait until the end of the school holidays and to look at the situation over night at the weekend when the problems are at their most severe. Even during the survey period it became clear that the situation eased as the morning progressed so in each case study we indicate the time of the survey.
Baker Crescent is the most urban of the schemes surveyed. It was developed on a former school site within 600m of the centre of Dartford. As a result it is well served with facilities with good bus services, shops services and schools within a 5 minute walk of the site.
The scheme includes 91 homes all of which are houses although there are adjacent phases of the development that include apartments. The case study site covers 1.56 ha and is therefore built at a high density of 58 units/ha since most of the houses are terraced.
19 of the properties have integral garages and all of these also have a driveway in front of the garage. Because the garages were closed it wasn’t possible to assess whether they were used for parking. However, the fact that most of the garages had cars parked in front of them, were blocked by bins or appeared too small to use, suggested that very few were used for parking. In two properties with garages cars were parked on the property’s front lawn.
The remainder of the allocated parking is in parking courts most of which are at the rear of properties. The scheme includes a mews housetype with accommodation on the first floor and three bays on the ground floor. One of these bays is used to provide vehicle access to a parking court to the rear and the other two are open car ports. Because these were double spaces they did appear to be used. There are only five designated unallocated on-street spaces.
On the site visit we observed 76 cars parked on the estate. However this was one of the last sites to be surveyed by which time many vehicles may have left the estate. Almost a quarter of these parked cars were parked outside allocated spaces. Bollards appeared to have been retrofitted on the corners to prevent people parking there suggesting that this unregulated parking had caused problems in the past although on the day we visited the informal parking was not causing any obstruction.
The amount of informal parking suggests either there had been far more cars parked over night and the parking overflowed onto the street, or people prefer to park informally outside their home rather than in parking courts that are not well overlooked and in some cases quite distant from their home. From our observations, the parking courts do appear underused and we suspect the latter.
Quarry Close is a backland development site along a railway line that has been opened up by the acquisition of a property to create a road access. It is located close to the centre of Gravesend, about 800m from the town centre and within five minutes walk of bus stops and two primary schools.
The scheme includes 60 homes, with 42 houses and 19 apartments. Part of the site along the railway remains undeveloped and it is clear from the planning history that there have been misgivings about the development of the site. The initial application was only allowed on appeal and a subsequent application to develop the land along the railway was refused. With this undeveloped land the density of the scheme is 42 units/ha, without it the density rises to 50 units/ha. The apartments are in two, three storey blocks and the houses built as short terraces.
The scheme includes 22 garages. Ten of these are in garage blocks separate from the houses and the remainder are integral to the houses. These garages are clearly too small for most of the cars on the site, something confirmed by residents during the site visit.
There are 60 allocated parking spaces within the scheme plus 14 unallocated spaces. Each of the spaces, even those in front gardens, is marked with road paint either with the owner’s house number or as ‘visitor’. The road is unadopted and is subject to parking enforcement. Cars parked illegally are clamped by a private company, something that is apparently rigorously enforced. As a result there are signs of severe parking stress on the estate. A couple of the people that we spoke to during the survey suggested that parking was a major issue, that they were no longer talking to their neighbours and that there was no community as a result. Because of the clamping there were only three cars parked on the street but many of the houses were parking on their front garden. One of the residents reported that it was impossible to have a party because there was nowhere for guests to park.
We observed 43 cars parked on the estate although it was late morning by the time of the survey and many cars will have left. Unlike case study one it would appear that the stresses are due to the overall lack of spaces rather than the unattractiveness of allocated spaces. The parking controls remove one of the main safety valves for parking, creating severe tension.
This and the next case study are part of the Kings Hill new village to the south of Malling in the Tonbridge and Malling District. The scheme is one of a number of new villages planned in Kent in the late 1980s, the most well known being New Ash Green. Kings Hill has been developed on a former RAF Airfield and was started in 1989 when there were plans for 2,750 homes of which 2,000 have been built so far.
The village includes two supermarkets, two primary schools (but not yet a secondary school) and a reasonable range of local services. The Hazen Road scheme is just north of the local centre and is within 5 minutes walk of the Asda and Waitrose supermarkets, bus stops and employment premises. The two primary schools are slightly further away but are within a 10 minute walk. Although these are close by, the whole design of the area, the wide roads and roundabouts and extensive grass verges mitigate against walking, and it would appear that most journeys are made by car.
The Hazen Road scheme includes 122 homes most of which are terraced houses and semi detached units with a few mews units over garages. The layout is very tight with a density of just under 46 units/ha. Hazen Road is designed as a tight winding village street that varies in width but is often little more than 10m wide between properties.
There are 88 garages in the scheme, none of which are integrated into the houses. A number of houses have attached garages and there are also garages under mews blocks and in covered parking court areas. The garages seemed to be slightly larger than those in the town-house schemes but nevertheless many had cars parked in front of them, implying that they were not being used.
There are 141 allocated parking spaces on the scheme (excluding the garages). Most of these are in quite large rear parking courts, which are accessed via narrow side streets. The parking courts appear to be accessible from back gardens and most also have houses and mews units facing onto them. There are virtually no unallocated visitor parking spaces, just five in individual bays on-street that take up a huge amount of space.
In the visit, which took place mid morning, there were 101 cars parked in the estate. More than a quarter of these were parked in undesignated spaces including many along Hazen Road. Because of the width of the road these cars were all parked partly on the pavement, which in many cases was entirely blocked by the car. This is despite the fact that the deeds of the houses apparently do not allow parking on this street and the fact that it is a point of frustration for some residents. By contrast some of the rear parking courts were quite underused.
As with the first case study it is difficult to know how much of the informal parking is the result of undersupply and how much is the result of people’s aversion to using rear parking courts. Observations and discussions on site suggest the latter is probably the more important factor.
This is the second scheme within Kings Hill. It is located just to the south of the local centre and so is within an easy five minute walk of a range of local facilities as well as being directly adjacent to one of the primary schools. There are also bus stops a short walk to the south of the scheme.
The scheme is similar to Hazen Road being designed as a narrow country lane only 10-11m wide between houses with even tighter lanes branching off to either side.
The element of the scheme we have looked at includes 58 homes including one block of 5 flats. These are built at a density of 44 units per hectare, however they are part of a wider estate which is probably built at slightly higher density than this. This scheme is incredibly dense, with a fine grain, many of the houses having little or no garden.
The main difference with Hazen Road is that the parking courts are smaller and much of the allocated parking is within the curtilage of the property. There are more garages and most of the garages also have a parking space in front of them.
This probably means that all of the larger units have two spaces including the garage, and the smaller houses have one allocated space. It was again not possible to assess the number of cars parked in the garages but it seemed likely that these were not used for parking, as in the other estates. There is no unallocated visitor parking on the site at all.
At the time of the visit there were 68 vehicles parked on the estate, more than the number of homes. Furthermore, half of these were parked on the street despite the deeds of the houses supposedly forbidding this. This is likely to be due to a preference, as in other schemes, for parking in front of the property. The fact that there are no unallocated spaces and limited spaces in garages also means that households with more than one car have no choice but to park on the street.
This is part of a modest urban extension on the southern edge of Maidstone. The scheme was previously fields and sits next to a large social housing estate. This estate is well served by facilities with bus routes, a local parade of shops and a primary school. However the road layout, the connections between these facilities and Roman Way are not particularly clear.
The estate includes 96 houses most of which are either semi detached or in short terraces. The estate consists of one long cul-de-sac and is built at a reasonably high density of 46 units/ha.
Just over half the units are provided with garages and of these, around half were attached to houses and the other half were in parking courts. The latter were not well overlooked and some had been fitted with additional locks suggesting security problems. The garages were the same size as in other schemes and many appeared to be unused. At the entrance to the site there are a series of detached houses with integral garages that were almost certainly not being used for parking.
There are 109 allocated parking spaces within the scheme. There are two rear parking courts but most of the allocated parking is in front of the housing directly off the street. All of the allocated parking is marked with the relevant house number, an indication perhaps that there have been tensions in the past. There are only five designated visitor parking spaces at the entrance to the estate, clearly visitors are expected to park here and walk the rest of the way.
This scheme was visited before eight in the morning so that the actual number of cars parked is probably a better indication of demand than those sites visited later in the day. At the time of the visit there were 126 cars parked on the estate which exceeded the number of available spaces (excluding garages). As you would expect most of the allocated spaces were occupied and cars were parked everywhere on the street including some entirely on the pavement and others blocking visibility on corners.
This small estate lies on the north east edge of Maidstone near the village of Bearstead. While it is within a 7 or 8 minute walk of Bearstead Train Station and is opposite a pub, it is otherwise quite isolated from facilities with no nearby shops, bus services and more than a ten minute walk to the local junior school.
The scheme consists of 32 homes on a long cul-de-sac. The houses on the main road are substantial properties which back onto this cul-de-sac and have garages accessed from the rear. The other properties are mostly large houses with two mews units built over garages.
As with all of the schemes the garages appear to be too small and unused. Indeed there have been planning applications for at least two of them to be turned into living accommodation. The other allocated spaces are in driveways, in two small parking courts and uniquely for this scheme on-street. The road presumably is not adopted because the on-street parking spaces on the bend are marked with house numbers indicating that they are allocated.
There were 44 cars parked on the estate at the time of our visit just under half of which were not in allocated spaces. These cars did make the estate feel congested and made manoeuvring difficult. The road on the bend does not have a pavement meaning that cars are parked partly on the verges causing damage and meaning that residents need to walk in the roadway. | http://spacetopark.org/go/research/case-study |
Police will charge a fine of Rs. 600 for illegally-parked vehicles along Gurgaon’s major stretches from Wednesday to reduce congestion but residents say a chronic lack of parking spaces and a festering local mafia may derail the drive.
Authorities say most Gurgaon car-owners encroach on key thoroughfares such as MG Road while parking lots lie vacant in a city where hour-long snarls during rush-hour traffic are a common sight. They are hoping the stiff fine — double the current levy of Rs. 300 — will stop such practices.
“Our aim is to spread awareness about illegal parking as it obstructs the movements of other vehicles. We will start the drive initially in a few areas,” said deputy commissioner of police (Traffic) Balbir Singh.
But commuters complain the Millennium City that sees over 5,00,000 cars daily has only 24 authorised parking lots to accommodate them and no multi-level parking facilities, despite repeated tenders by civic authorities.
Residents allege an entrenched local gang that runs an illegal parking ring and charges high prices from vehicle-owners has physically assaulted competitors in the past and is responsible for private players keeping away.
“The parking mafia is operating across the city. They even have printed parking slips and acquired land but no action is taken against them,” said Joginder Singh, president of the resident welfare association of Sushant Lok Phase 3.
As a result, proposed multi-level lots in Sector 29 and 43 haven’t materialized due to lack of interest by private bidders. The Haryana government also missed the deadline of constructing seven parking lots before the 2010 Commonwealth Games.
To resolve the issue, police are asking malls to decrease their high parking rates.
“We have asked malls to decrease charges and will encourage visitors to park vehicles inside. The attempt is to make optimum use of the limited road space,” said Singh.
Authorities have also clubbed multiple sections of the Motor Vehicles Act to increase the fine and plan to expand the drive.
“We will soon identify more roads where the traffic congestion is a major issue. Our teams will also visit DLF Cyber city and Udyog Vihar on Friday”, said Singh.
Any driver authorised to pick and drop commuters will be fined on the spot if found standing anywhere on the road. | https://www.hindustantimes.com/gurugram/gurugram-police-declare-war-on-illegal-parking/story-1Ud05DGdi5KoCl7lpTMNUK.html |
Garages & Carports in Owosso, Michigan - Yellow Pages Directory Inc.
Below is a list of businesses which provide Garages & Carports services. If you do not see your business in the list, you can submit it for addition to this list. Adding your business will feature your listing above Standard listings.
Garages, in Owosso, are defined as any roofed structure, designed to accommodate one or more motor vehicles and attached to the dwelling.
Carports, in Owosso, are also roofed structures that are designed to accommodate automobiles unenclosed except to the extent that these have an adjacent dwelling or a property boundary on one side, and do not possess a door unless that door is visually permeable.
Garages and carports should match the existing dwelling with regard to building materials, colors, finishes, roof pitch and form as well as detailing. Moreover, garages visible from the street are required to be integrated into the design of the dwelling in terms of roof, detailing and materials. | https://www.yellowpagesgoesgreen.org/Owosso-MI/Garages+-and-+Carports |
More about this product
This makes the parking system ideal for increasing the number of parking spaces in the garages of hotels or car dealerships, for example, in a simple and inexpensive way. In the SingleUp 2015, the vehicles must be parked in a specific order – i.e. the bottom parking space must be empty before the platform for the second car can be lowered. Ideally, use the upper platform for long-term parking, e.g. a vintage car, and the lower parking space for short-term parking. | https://klaus.lt/en/product/singleup-2015/ |
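The dependency described here, where the lower bay must be empty before the upper platform can be lowered, can be modelled as a simple interlock. The sketch below is a generic illustration of that rule, not vendor code for the SingleUp 2015.

```python
class DependentStacker:
    """Two stacked bays; the upper platform can only be lowered when the lower bay is empty."""

    def __init__(self):
        self.lower_occupied = False
        self.upper_occupied = False

    def can_lower_upper_platform(self) -> bool:
        return not self.lower_occupied

    def park_upper(self):
        if not self.can_lower_upper_platform():
            raise RuntimeError("Clear the lower bay before lowering the upper platform.")
        self.upper_occupied = True   # platform lowered, car driven on, platform raised again

    def park_lower(self):
        self.lower_occupied = True

stacker = DependentStacker()
stacker.park_upper()   # long-term parking, e.g. a vintage car
stacker.park_lower()   # short-term parking below
print(stacker.can_lower_upper_platform())  # False until the lower car leaves
```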
Architect: Thomas P. Cox Architects Inc.
The 1540 North Vine mixed-use high-rise, designed by Thomas Cox Architects, will provide a highly desirable revitalizing boost to this urban community. Located within a block of an MTA Red Line station and a major bus depot, this transit-oriented 11-story structure will accommodate 306 market rate apartment units over 67,500 s.f. of retail space at ground level and two levels of subterranean garages with 800 parking spaces. | http://www.lasengineering.com/projectes/1540-north-vine/ |
Ayman Saifeldin Mohammed Ahmed, PMP, PMI-PBA,
PMI-RMP, PMO-CP, SFC, six sigma
[email protected]
+
Civil Engineer

Demographics
Nationality: Sudanese
Date of Birth: 01-02-1987
Marital status: Married
Resident of: Saudi Arabia / Riyadh

Profile
Ayman has been chosen as PMO Manager because of his unrivalled expertise in leading consultancies, developers and government authorities across the Middle East and his track record of successfully delivering major projects and construction services in the ME region. He has 10 years of experience spanning the Middle East and African countries, of which the last 7 years (since 2013) have been in the KSA, with proven leadership and an impeccable delivery record. In the KSA, Ayman has worked regularly over recent years on projects in Dammam, Riyadh and Jeddah. He has led major Construction Management Services for government authorities and the private sector in the KSA.
Among his strengths is significant experience in providing effective leadership across the key functions of major projects. These include engineering, project controls, contracts, procurement, Health & Safety, risk management, financial management and quality control. He has managed large numbers of staff, focusing on their efficient utilization, recognizing and enhancing their quality through continuous improvement and effective delivery performance.
Ayman has exceptional communication skills, strengthened through working internationally and engaging with wide-ranging parties and stakeholders of major projects. Having worked in the region for a long period of time, Ayman has an appreciation of the culture and approach to delivery in the Kingdom.

Work experience
ACC for Engineering and Management Consultancy (09/06/2019 – to date) PMO Manager
(VRO / National Transformation Initiatives projects) . Define and build PMO, Appoint / recruit PMO resources based on PMO model, Define organizational model, Provide reporting to senior management and stakeholders, Ensure alignment to strategy, Create working relationships with project managers and other PMO’s, Facilitate governance process, Overall risk management to identify themes, Facilitate dependency management across the projects and programs, Facilitate change control process, Track deliverables and benefit realization, Mentor project managers, Responsible for tools, standards and methodology i.e. project management templates, and line manage project managers for a pro-active / managerial PMO. Euroconsult for Engineering Consultancy Limited (15/4/2018 – 30/05/2019) PMO / Senior Projects Control (Infrastructure Projects ).
Possess scope control and change management proven knowledge, Responsible for evaluating labor cost and hour and manpower requirements against budget constraints, Responsible for Producing and maintaining accurate project schedules and understand schedule resources loading, Managing complete financial cycle such as vendor purchase orders, client, cost proposals and invoices, Administrating market budget and track and report project finances, Planning and executing systems, resources and staff to support project and market schedules, Maintaining current internal client database records and document control systems, Coordinating with vendors, project manager and client to prepare business and technical project documentation, Conducting vendor and internal audits to review accuracy, quality and completeness of database records and documents, Preparing monthly reports and assemble data to consolidate, Creating control budget and review and authenticate cost reports and change notices, Implementing and ensuring cost control processes to report on estimates, expenditures and cost commitments accurately.
Euroconsult for Engineering Consultancy Limited (15/4/2017 – 14/4/2018) PMO/ Contracts and Quality Assurance specialist (Infrastructure Projects ). As a Quality Assurance specialist my work was
Responsible for drafting quality assurance policies and procedures, interpret and implement quality assurance standards, evaluate adequacy of quality assurance standards, devise sampling procedures and directions for recording and reporting quality data, review the implementation and efficiency of quality and inspection systems, plan, conduct and monitor testing and inspection of materials and products to ensure finished product quality, document internal audits and other quality assurance activities, investigate customer complaints and non-conformance issues, analyze data to identify areas for
improvement in the quality system, develop, recommend and monitor corrective and preventive actions, prepare reports to communicate outcomes of quality activities, identify training needs to meet quality standards, coordinate and support on-site audits conducted by external providers, evaluate audit findings and implement appropriate corrective actions, monitor risk management activities, responsible for document management systems, assure ongoing compliance with quality and industry regulatory requirements.
As a Contracts Specialist, my work was:
Responsible for negotiating with sub-consultants to draw up procurement contracts: negotiating, administering, extending, terminating and renegotiating contracts; formulating and coordinating procurement proposals; directing and coordinating the activities of workers engaged in formulating bid proposals; evaluating or monitoring contract performance to determine the necessity for amendments or extensions of contracts, and compliance with contractual obligations; approving or rejecting requests for deviations from contract specifications and delivery schedules; arbitrating claims or complaints occurring in the performance of contracts; analyzing price proposals, financial reports and other data to determine the reasonableness of prices; negotiating collective bargaining agreements when required; and serving as liaison officer to ensure fulfillment of obligations by contractors.
ITTIJAH Consultant Office (2/10/2016 – 14/4/2017) – PMO / Project Manager (construction of bank buildings, hypermarkets and hotels)
Set standards for how projects are run; ensure project management standards are followed; gather project data and produce information for management review; act as a source of guidance and advice for project managers; and manage and facilitate the portfolio management process. As a project manager, the job was to plan, budget, oversee and document all aspects of the specific project being worked on, and to work closely with upper management to make sure that the scope and direction of each project stayed on schedule, as well as with other departments for support.
Al-Kayan Consultant Office / Ministry of Education (9/5/2013 – 1/10/2016) – Project Manager (construction of school buildings)
Project manager supervising the implementation of projects: planning and defining scope, activity planning and sequencing, resource planning, developing schedules, time estimating, cost estimating, developing a budget, documentation, creating charts and schedules, risk analysis, managing risks and issues, monitoring and reporting progress, team leadership, strategic influencing, business partnering, working with vendors, scalability, interoperability and portability analysis, controlling quality, and benefits realization.
MUEDA ALI ALANZE Enterprise for Construction (1/3/2012 – 1/5/2013) – Planning and Cost Control Engineer (construction for an Aramco project and the Aljazeera showroom buildings)
Preparation of schedules, risk analyses, cash flow diagrams and follow-up of project progress; coordination between the administration and project management; coordination and follow-up of business with municipalities and government agencies; and provision of technical support for projects, developing appropriate plans with cost analyses and estimates.
Ministry of Planning and Urban Development (21/10/2010 – 8/2/2012) – Site Engineer (suspension bridge, road construction and sewage works)
Observing the conduct of the works against the set plans and schedules and reporting to the direct manager; reporting cases where the works require additional materials or equipment; ensuring commitment to occupational safety and security rules; coordinating regular meetings with the consultant, supervising work progress and writing detailed reports; following up payments and the work done against the financial flows, based on the amount of work completed, and submitting reports thereon; overseeing the implementation of the works against the set tables and schedule; checking that actual quantities match the set amounts and reporting discrepancies; and monitoring the implementation of the works to make sure the implementers meet their commitments.
Education and Professional Certifications
Bachelor (Honors) in Civil Engineering (structure) from Sudan University of Science & Technology – College of Engineering Department of civil engineering (2010).
Diploma in project management from Alison institute - Ireland (2015).
PMP certified.
PMI-PBA certified
PMI-RMP certified.
PMO-CP certified.
Scrum Fundamental certified.
OSHA certified.
6sigma certified.
Prince 2 preparation certification.
PMI-ACP preparation course (will take the exam soon).
Earned Value Management certification.
SWOT analysis technique certified.
Scheduling
Negotiation.
Budgeting.
Risk management.
Leadership
Communication.
Problem Solving.
Languages:
Arabic (Mother tongue).
English. (Fluent). | https://www.postjobfree.com/resume/ada1e4/pmo-pmp-civil-engineer-riyadh-saudi |
The Scientific Board (SB) is designed to provide high-level supervision of the network's integration activities and to promote and assess the scientific quality of the network. The SB is delegated to manage and direct the overall development of the project and its scientific and technological objectives. It will promote and assess the quality of the work; define mechanisms for implementing major technical decisions; identify the need for significant changes in the work plan; and oversee the delivery of all work packages. The SB is composed of the Project Coordinator and all Work Package Leaders.
The SB will monitor the scientific and technical direction of the project and review and/or amend the work plan as required, approve major technical decisions, recommend financial and other resource allocation and approve periodic and final progress reports. It will:
- Promote and assess the scientific quality of the activities and approve all official deliverables
- Decide and approve any budget variances.
- Review and/or amend the work-plan, cost or time schedule under the EC Grant Agreement.
- Identify project-level risks, track them, and propose corrective action in the event of problems.
- Resolve problems, proposing corrective actions and ensuring all partners meet their obligations.
- Support the Project Coordinator in coordinating the project's efforts for review meetings.
- Identify and decide any replacement for individual involved in the project, if needed.
The Scientific Board can appoint specific committees for carrying out well-defined tasks, such as assessing the requests for the visitor program or the applications to competitive calls issued by the Network.
The Scientific Board will meet physically twice a year. In between the SB will hold monthly Skype or teleconference meetings. The Project Coordinator will be responsible for invoking the SB meetings and will chair them. | http://clef-initiative.eu/web/guest/scientific-board |
We are an international company with a workforce of more than 38,500 top professionals, present in more than 40 countries across the five continents. Leaders in Innovation and Technological Development at the service of society, we’re looking for experts in designing a better planet who can go out there and promote sustainable development and find solutions to the biggest global challenges, including global warming, overpopulation and water scarcity.
Job Description
RESPONSABILITIES:
Manage the Cost Control and planning department.
Direction of schedule development, maintenance, monitoring, impact identification and recovery plan development activities on Projects.
Develop and maintain cost control procedures for the project.
Development of policies, objectives and standards applicable to cost and schedule control at both business and Project level.
Develop the CBS of the project in conjunction with the managers of the different areas.
Establish a cost baseline (“0” Budget) for measuring project performance.
Managing the cost of engineering, procurement and construction for the project.
Participate with the Project Director in defining the project cost objectives.
Ensure that the project cost objectives are met.
Controlling the rates and cost of the materials and labour.
Update the official budget and subsequent revisions of the budget in the project control system.
Supervise the Planning and reporting activities
Interact with the project teams, especially with procurement, production, financial and engineering.
Monitor cost performance to detect and understand variances from plan.
Control, track and follow-up on subcontractor costs, actual progress, certifications, schedule, delays and variations (if any)
Prepare unit prices, along with the construction & purchasing team, in accordance to the needs of the project, subcontracts and related matters.
Calculate an EAC (Estimate at Completion) as a forecast of the most likely total project cost (see the sketch after this list).
Ensure that final changes are accurately recorded in the cost baseline.
Analyse deviations and corrective actions proposed in conjunction with area managers.
Proactively identify and recommend solutions to improve processes.
Provide ‘Early Warning’ notifications as issues arise.
Provide the Financial Department monthly with the information required to keep the Cash Flow (CF) up to date.
Control Operational Reserves and Contingencies.
Provision of guidance and support to construction teams; including reporting structures and requirements, forecasting, timetable guidance, transparency.
Update personnel schedules in terms of costs.
Prepare according to the company's calendar, the Project forecasts regarding Billing, Production and margin, based on the current Planning program.
Monitor that all costs are recorded in the system and against the correct activity.
Coordinate with the accounts payable manager so that all provisions are entered in the system on time.
Compare production with progress.
Control and record Payment Claims and Change Orders.
Promote teamwork, maintaining and promoting collaboration among the different areas of the project.
Search out the “whys” of both positive and negative variances.
Ensure the confidentiality of the data, communicating if there is any problem for compliance.
Assist in the development of management processes to identify and evaluate business-area risks, and support risk and control self-assessments.
Management of cost transfers.
Monitoring and control the project risks and opportunities.
Control the performance rates of the Project through Earned Value in terms of costs.
Preparation of integrated reports on schedule and cost including development of Key Performance Indicators, Critical path monitoring and trend analysis.
Schedule and Cost Control advice to Operations Directors, Project Directors and project teams.
Trend analysis and delay recovery identification.
Overseeing, directing and supporting the project control and schedule teams to ensure
Reporting to the Project Director and Madrid Head Office on the cost and schedule progress (timely and accurate preparation).
Send analysis report to Project Director for the Steering Committee.
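The EAC responsibility above (see the forecast item in the list) comes down to a couple of standard earned-value formulas. The following Python sketch is a minimal, hypothetical illustration of that arithmetic; the figures and the function name are invented and this is not a description of Acciona's actual reporting tooling.

```python
def estimate_at_completion(bac, ac, ev, cpi_based=True):
    """Forecast total project cost (EAC) from earned-value inputs.

    bac: Budget at Completion (total approved budget)
    ac:  Actual Cost of the work performed to date
    ev:  Earned Value (budgeted cost of the work performed)
    """
    if cpi_based:
        cpi = ev / ac                      # Cost Performance Index
        return bac / cpi                   # assumes current cost efficiency continues
    return ac + (bac - ev)                 # assumes remaining work at budgeted rates


# Hypothetical example: 10 M budget, 4.2 M spent, 3.8 M earned
bac, ac, ev = 10_000_000, 4_200_000, 3_800_000
print(round(estimate_at_completion(bac, ac, ev)))          # ~11,052,632 (CPI-based)
print(round(estimate_at_completion(bac, ac, ev, False)))   # 10,400,000 (atypical-variance case)
```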
Required Skills and Competencies
QUALIFICATIONS AND SKILLS:
Bachelor's Degree in Engineering (preferably Civil or Mechanical Engineering)
At least 8-9 years of on-site cost control experience
Experience on at least 2-3 previous projects
Hybrid profile focused on cost control but also with planning knowledge.
Reliable, well organized and disciplined.
Highly focused and committed to achieving results, able to develop and maintain solid working relationships at all levels.
Ability to effectively lead and communicate with the entire project team.
Ability to facilitate project meetings.
Able to foster outstanding customer service and positive client relationships.
Analytical Ability, Initiative, Innovation and creativity.
Team building, willingness, adaptability and flexibility to change.
Self-motivation for learning, independence and decision-making.
Delegation, Personnel Management and development.
Results-oriented, customer-oriented, persuasion and negotiation
Demonstrated ability to manage through influence at all levels inside the organization.
Leadership in meetings.
Able to handle intercultural teams.
SEO knowledge will be taken into account; good knowledge of Office software, especially Excel, and of other cost control software.
Knowledge of Primavera P6.
High level of English: C1 or similar (complete professional competence).
We are an equal opportunity employer with a commitment to diversity. All individuals, regardless of personal characteristics, are encouraged to apply. | https://canalemprego.acciona.com/DetalleOferta.aspx?id=20005567 |
The Project Manager will focus on the execution in term, budget, quality, and security of Renewable Energy in the areas of solar PV and onshore wind projects.
Responsibilities
- Manage all resources, including external ones, assigned to the project in order to control and report its status.
- Designate project responsibilities and the necessary resources for its execution.
- Coordinate and monitor the activities of the project team and the result of it, both technical and economic, with a strong focus on safeguarding the quality and security required.
- Identify potential issues, and implement and monitor improvement and/or corrective actions on the project management side (schedule, scope, cost, quality, safety).
- Participate in the procurement process for 1) Work contracts; 2) Studies and projects; 3) Owner's Engineering or other Technical Advisory roles.
- Serve as relationship lead and technical controller for external partners such as service providers, developers, suppliers, etc.
- Report the activities’ progress vs schedule, budget vs. actual analysis (together with a financial controller), contractual issues, and results of the project.
- Know and apply the policies and procedures of the company
Requirements
- Higher Degree in Civil / Industrial / Energy Engineering or similar.
- PMI is a preferable complementary training.
- At least 4-5 years of experience, with a track record as Project Manager in renewable energy projects, ideally both, solar and wind.
- Fluent English.
- Geographic and travel availability within Iberia and sporadically to Germany. | https://www.next-wavepartners.com/job/project-manager-7/ |
Project Management. Tuesday, July 13th 2021.
Quote from Project Management Tracking System:
Once your project is underway and you have an agreed plan, you will need to constantly monitor the actual progress of the project against the planned progress. To do this, you will need to get reports of progress from the project team members who are actually doing the work. You will need to record any variations between the actual and planned cost, schedule and scope. You will need to report any variations to your manager and key stakeholders and take corrective actions if the variations get too large.
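As a rough, hypothetical illustration of the plan-versus-actual tracking described in the quote above, a small Python sketch might look like the following; the task data and the 10% escalation threshold are made up for the example and are not part of the quoted text.

```python
# Hypothetical status data: planned vs. actual cost and days per task
tasks = [
    {"name": "Design", "planned_cost": 20_000, "actual_cost": 23_500, "planned_days": 10, "actual_days": 12},
    {"name": "Build",  "planned_cost": 50_000, "actual_cost": 48_000, "planned_days": 30, "actual_days": 29},
]

THRESHOLD = 0.10  # report to manager and stakeholders if variance exceeds 10%

for t in tasks:
    cost_var = (t["actual_cost"] - t["planned_cost"]) / t["planned_cost"]
    sched_var = (t["actual_days"] - t["planned_days"]) / t["planned_days"]
    flag = "ESCALATE" if max(abs(cost_var), abs(sched_var)) > THRESHOLD else "ok"
    print(f'{t["name"]:8} cost {cost_var:+.1%}  schedule {sched_var:+.1%}  {flag}')
```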
Risks are an unavoidable part of a project and the project manager must work to prioritize and identify any possible scenarios for the project's duration. This is part of the crucial planning process. Not all risks will have the same impact as others. Although the project manager has the overall responsibility for risks, all of the stakeholders can assist in identifying and dealing with risks in a proactive manner.
In conclusion, while there is much more to formal project management than the memorization and application of proven methodologies, it is the author's hope that this will benefit you to some degree and that maybe you will even have a takeaway to apply to your own project. I wish you all the best in your project management endeavors.
The planning phase is really just getting the major people together that will own part of the project work and planning how they will do it and what they will need to get it done. In the business world, these are the Subject Matter Experts. | https://donutcrazyct.com/project-management-tracking-system/best-online-agile-tools-list-for-project-management-in-the-digital-manager-ravetree-dashboard-screenshot-sof-2/
ASC Nigeria Limited is a member of the Onstream Group and was founded in 2001. At the core of everything we do is the promise to actively and ethically have positive impact on the Nigerian economy. We empower and develop local & international talents for the benefits of our clients.
We dedicate ourselves to being solution providers and assisting our clients all around the country in many different sectors. We screen and recruit experts based on their talents, technical skills and experience to ensure they will meet and exceed our customers' expectations.
We are recruiting to fill the position below:
Job Title: Principal Planning Engineer
Location: Rivers
The Job
- The successful candidate would be required to develop, manage and maintain a centralized planning and scheduling system for the overall Client’s CAPEX Projects Portfolio that will ensure effective schedule control, facilitate project delivery within established targets and provide key information for management reporting to achieve Departmental KPI’s.
The duties will include, but are not limited to the following:
- Provide project management support in the development of the Client’s Project Portfolio activities, resources planning leading to the creation of the overall Project Portfolio baseline schedule and its subsequent maintenance to achieve schedule performance.
- Maintain the overall Project Portfolio schedule at level 1 to 4 required to develop and execute each approved project in their hierarchy relationship to meet the Client’s projects requirements.
- Develop and update the planning and schedule procedures to ensure sustainable and continuous improvements in line with the Client’s Quality Management System requirements.
- Execute schedule control activities to monitor physical progress against key deliverables and track trends on schedule over-run and under-run to identify timely corrective actions for the achievement of schedule performance targets.
- Provide input to monthly progress updates; forecast completions, together with the critical path and trend analysis as input to management reporting to reflect how Client’s project objectives are being achieved.
- Evaluate bid schedules, review contractor execution plans and develop the key milestone dates for critical projects for alignment with the overall integrated Project Portfolio Schedule. Verify that the contractors' reported progress is correct and accurately reflected in the overall integrated Project Portfolio schedule for accurate management reporting.
- Review scope change requests for the impact on schedules and the critical path of the project, prepare reconciliations between earlier schedules and the current schedule, and provide schedule impact assessments to the change control process for accurate progress monitoring.
- Perform risk and sensitivity analysis on the project schedule or critical path and evaluate the impact on the overall Project Portfolio plan.
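By way of illustration only, the kind of schedule risk and sensitivity analysis described in the last duty above can be sketched as a simple three-point Monte Carlo in Python; the activity durations and the P80 target below are invented for the example and do not represent any client schedule or the Primavera Risk Analysis tool itself.

```python
import random

# Hypothetical activities on one path: (optimistic, most likely, pessimistic) durations in days
activities = [(20, 30, 50), (10, 14, 25), (35, 45, 70)]

def sample_duration(o, m, p):
    # Triangular distribution as a simple stand-in for a PERT/beta model
    return random.triangular(o, p, m)

N = 10_000
totals = sorted(sum(sample_duration(*a) for a in activities) for _ in range(N))

p50 = totals[int(0.50 * N)]
p80 = totals[int(0.80 * N)]
print(f"Deterministic (most likely) total: {sum(m for _, m, _ in activities)} days")
print(f"P50 finish: {p50:.0f} days, P80 finish: {p80:.0f} days")
```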
The Person
The right candidate should possess the following:
Education:
- A University Degree in Engineering
Experience:
- 12 years post-graduate experience, of which at least 8 years in a planning role in oil and gas industry.
- An understanding of contracting strategy, project optimization processes and construction constraints is a prerequisite for the job.
- Extensive skills in the use of Primavera-P6 planning tool is mandatory.
- Ability to work across cultural boundaries
- Preferred: Experience with qualitative modeling techniques and tools (Primavera Risk Analysis or similar).
Application Closing Date
5th August, 2021.
How to Apply
Interested and qualified candidates should forward their CV to: [email protected] using “Principal Planning Engineer” as the subject of the mail.
Note
- Only shortlisted candidates would be contacted for interviews.
- Any false information provided during or after the application process will lead to outright disqualification of such candidate(s). | https://hotgist.com.ng/principal-planning-engineer-at-asc-nigeria-limited/ |
The Government Accountability Office (GAO) issued two reports on March 31 calling for improved information controls at the Treasury Department’s Bureau of the Fiscal Service, and said the Bureau has not completed corrective actions on many of the problems that GAO found in its prior-year audit.
The Fiscal Service Bureau manages the Schedules of the General Fund, which report the Federal government's trillions of dollars of cash activity and the government's budget deficit. The Fiscal Service also oversees the Schedule of Federal Debt, which reported debt totaling $22.8 trillion as of September 2019.
GAO said in the report on the Schedules of the General Fund that previously identified weaknesses had not been rectified, and it identified six new information control deficiencies. Three of those relate to access controls, one to configuration management, and two to segregation of duties. Six recommendations were made by GAO in a “limited official use only” report.
Seven of the 25 open recommendations from prior GAO reports on Schedules of the General Fund were completed, and corrective actions were still in progress for the other 18. The open recommendations relate to security management, access controls, and configuration management.
GAO said that Fiscal Service has made “some progress” since the last audit, but listed unresolved deficiencies in the report regarding the Schedules of the Federal Debt, and found new control deficiencies as well.
Of GAO’s 14 recommendations for the Schedules of the Federal Debt from the prior-year audit, corrective action was completed on three of those, and in progress on the other 11 items.
“We continued to identify instances in which Fiscal Service did not remediate on a timely basis or adequately track for remediation known information system vulnerabilities and deviations from baseline security requirements,” the report said.
The Bureau of Fiscal Service, responding to the “limited official use only” version of the two reports, stated that it continues to work to address all prior-year recommendations, and has established plans to address the new recommendations. | https://cdn.meritalk.com/articles/gao-cites-treasurys-fiscal-service-bureau-for-numerous-info-control-problems/ |
A Guide to the Project Management Body of Knowledge defines project execution as coordinating people and other resources to carry out the (project) plan. This definition is deceptively simple; under the direction of the project manager, the project team, vendors, and others carry out the tasks that are defined in the project plan in order to produce the project deliverables.
However, project managers must not only ensure that work is progressing as planned, but also must monitor all aspects of project execution, in particular the financial results. This is project control, which ensures that project objectives are met by monitoring and measuring progress regularly to identify variances so that corrective actions may be taken. | https://www.oreilly.com/library/view/project-management-accounting/9781118078228/c01anchor-3.html |
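To make the variance measurement concrete, here is a minimal Python sketch of the standard earned-value indicators that project control typically relies on; the figures are hypothetical and the snippet is an illustration, not part of the PMBOK Guide.

```python
# Hypothetical status at a reporting date
pv = 120_000   # Planned Value: budgeted cost of work scheduled to date
ev = 105_000   # Earned Value: budgeted cost of work actually performed
ac = 130_000   # Actual Cost of the work performed

cv  = ev - ac          # Cost Variance (negative => over budget)
sv  = ev - pv          # Schedule Variance (negative => behind schedule)
cpi = ev / ac          # Cost Performance Index (<1 => over budget)
spi = ev / pv          # Schedule Performance Index (<1 => behind schedule)

print(f"CV={cv:+}, SV={sv:+}, CPI={cpi:.2f}, SPI={spi:.2f}")
if cpi < 1 or spi < 1:
    print("Variance detected - consider corrective action.")
```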
· To undertake construction fault diagnosis, determine corrective action and produce designs to solve issues, procure the works and administer building contracts.
· Planning, execution, and monitoring of minor works up to large building projects and planned maintenance programmes.
· Liaising directly with Contractors, arranging site and building access and overseeing construction works ensuring full compliance with all aspects of Health and Safety; CDM Regulations and DHB Contractor Safety Code requirements.
· To assess Contractors, Consultants and Suppliers against safety and quality competency criteria and make recommendations for appointment.
· To assist, when required, as Project Manager in agreeing interim certificates for payment, final accounts and claims resolution on maintenance/project works.
· To liaise with architects, consultants and quantity surveyors appointed to take control of a project on the Board's behalf, including the formulation of the "brief" and the progress of work on site in respect of brief, time, quality, safety, financial targets and operational constraints.
· Providing a flexible approach to normal working time, working overtime and attending emergencies and breakdowns as required and when available. The post will require a high degree of flexible and adaptable team working, including some out-of-hours and call-out attendance.
· Problem solving; detecting and analysing underlying problems and taking long term corrective actions and fault finding.
· To comply with environmental instructions and to identify potential risks to the environment, or areas where environmental impact can be reduced.
· To comply with all safety instructions and to accept responsibility for your own safety and the safety of others who may be affected by what you do. To identify potential risks in the work area, and to report such risks promptly to management.
· Any other duties assigned by the manager or supervisor from time to time. Such duties will be reasonable in relation to the job holder’s skills, abilities and status. | https://www.kentjobs.co.uk/job/30025056/building-senior-technician-/?LinkSource=SimilarJobPlatform |
With over six decades of business and technical experience in the mining, energy, and infrastructure sectors, we see challenges evolving in every industry. We respond quickly with solutions that are smarter, more efficient, and innovative. We draw upon our 9,000 staff, with experience in over 150 countries, to challenge the status quo and create positive change for our clients, our employees, and the communities we serve.
Looking to take the next step in your career? Hatch is currently seeking a highly experienced Lead Planner to join our Project Delivery Group (PDG) in Australia. The Lead Planner will manage and oversee the Planning & Scheduling functional group on a large multidiscipline EPCM project and be a core management team member playing a pivotal role in key decision making and leadership.
Hatch’s project delivery teams help clients reach their business goals through a range of strong technical capabilities and solution-focused leadership. We ensure world-class project delivery through the skills of our people, methodologies, governance, and systems. Our engineering, project management and construction disciplines drive for safe, efficient, and sustainable delivery of projects across our global metals and mining, infrastructure, and energy sectors.
We support our clients through their entire project lifecycle. From identification of potential business opportunities; through front-end evaluation and definition studies into project execution and commissioning; and increasingly into their ongoing operations by supporting business efficiency improvements and ultimate operations closure.
Join our team and become part of a community that strives for positive change, by providing the best solutions for our clients’ toughest challenges!
As the successful candidate, you will:
- Be responsible for the overall delivery of the Planning & Scheduling function including execution planning, schedule maintenance, change management, progress measurement and evaluation, trending and forecasting, and reporting.
- Assign, lead and mentor the Planning team to ensure that staff have the necessary functional skills and “team” approach, to support the client and project team to positively influence project outcomes.
- Facilitate schedule quantitative risk analysis as appropriate to ensure schedules are suitably provisioned to manage project risk and uncertainties.
- Establish and maintain the planning baselines used to manage the project, control change, and evaluate performance.
- Perform schedule control by determining the critical path, planned value, earned value, schedule variance, schedule performance indices and schedule risk, and recommending corrective actions when required (a critical-path sketch follows this list).
- Update project schedules to reflect progress against the project plan and prepare management reports.
- Integrate client, contractor, vendor and other stakeholder input as required.
- Work with other project functional groups (including engineering, procurement and construction teams) to monitor progress, identify trends and risks, and evaluate performance, to support project decisions.
- Proactively coordinate schedule risk management and recommend corrective schedule actions to mitigate identified schedule threats.
- Publish accurate and timely schedule reporting to support control of the work and inform all project stakeholders.
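As a rough sketch of the critical-path determination referenced in the schedule-control item above, the following Python example runs a forward and backward pass over a tiny, made-up activity network; it is illustrative only, with an invented network and variable names, and is no substitute for Primavera P6 or Hatch's own methods.

```python
# Hypothetical activity network: name -> (duration in days, [predecessors])
net = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

# Forward pass: earliest start/finish
es, ef = {}, {}
for a in net:                                   # insertion order is already topological here
    dur, preds = net[a]
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

project_end = max(ef.values())

# Backward pass: latest start/finish; zero total float marks the critical path
ls, lf = {}, {}
for a in reversed(list(net)):
    succs = [s for s, (_, ps) in net.items() if a in ps]
    lf[a] = min((ls[s] for s in succs), default=project_end)
    ls[a] = lf[a] - net[a][0]

critical = [a for a in net if ls[a] == es[a]]
print(f"Duration {project_end} days, critical path: {' -> '.join(critical)}")
```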
You bring to the role:
- Strong leadership and “team-player” qualities
- Tertiary qualification in engineering or relevant discipline or equivalent related experience
- 10+ years’ similar experience in an execution project environment with detailed knowledge of Execution Planning, schedule quantitative risk analysis, schedule maintenance and change management, progress measurement and evaluation, trending and forecasting, and reporting
- Hands on experience in planning and preparing logic diagrams, critical path networks and bar charts, with particular emphasis on detailed construction schedules
- Experience in high value, multi-discipline EPCM project environments from inception to close-out
- Strong personal computer skills (particularly MS Office suite) and relevant project controls specific software skills
- Excellent written and oral communication skills, with the ability to convey project technical information to project stakeholders
- Organizational skills, with the ability to multi-task, prioritize, take direction, and provide direction to others to ensure the quality of deliverables
To be eligible to apply, you must be an Australian / New Zealand citizen, hold permanent residency status, or alternatively, a Visa that entitles you to work in Australia.
To submit your application please click the "Apply" button below.
Unsolicited resumes from recruitment agencies will not be accepted for this position.
Why join us?
- Work with great people to make a difference
- Collaborate on exciting projects to develop innovative solutions
- Top employer
What we offer you?
- Flexible work environment
- Long term career development
- Think globally, work locally
As an accredited Employer of Choice for Gender Equality (WGEA) and Equal Opportunity Employer, we are committed to fostering a workforce in each of our locations that reflects the diversity of the communities in which we operate. Within Australia, this includes supporting and encouraging a flexible workplace and a comprehensive benefit offering. If you have any special needs requirements, please discuss with us and we will do our utmost to accommodate your request. | https://jobs.hatch.com/job/Perth-Lead-Planner-Perth-West/888673500/ |
Reported protocols for the differentiation of human PSCs toward cerebellar neurons.
Abstract
The most affected cell types in cerebellar ataxias are the cerebellar neurons, which are not readily accessible for cellular and molecular investigation. Pluripotent stem cell (PSC) technology has emerged as an important tool for generating diverse types of neurons, which are used to better understand human nervous system development and pathologies. In this chapter, the strategies for the differentiation of human PSCs toward cerebellar neurons are overviewed, followed by an outlook of their further optimization and diversification by implementing the knowledge from cerebellar development and new cell culture approaches. The optimization strategies are based on the recent progress made in defining the cell populations in mature and developing mouse and human cerebellum. The cellular phenotypes and organization in mouse and human cerebellum are briefly presented, followed by an overview of our current knowledge about their development, which includes patterning, proliferation, neurogenesis, gliogenesis, migration, connectivity and maturation. To date, however, relatively few studies have used induced PSCs (iPSCs) to model cerebellar ataxias and even fewer have looked directly at cerebellar neurons. The reported iPSC-derived in vitro models for cerebellar ataxias are reviewed, followed by an outlook of how to improve these models by generating and exploring the cerebellar neurons.
Keywords
- cerebellar ataxias
- iPSC-derived cellular models
- cerebellar neurogenesis
- Purkinje cells
- cerebellar organoids
1. Introduction
Cerebellar ataxias constitute a very heterogeneous group of diseases in which the motor incoordination is caused by the dysfunction and degeneration of the cerebellar neurons. Although different causative genes or toxins have been identified and several pathological pathways have been investigated, the treatments for these conditions are still largely palliative. Therefore, it is an urgent need for disease-relevant cellular models for studying disease progression and screening for potential therapies.
The rapid development in the field of induced pluripotent stem cell (iPSC) technology offers the opportunity to combine the genetic authenticity of the patient-derived cellular models with the disease-relevant cell types. Human iPSCs have been generated from a wide variety of easily accessible tissues, including skin and blood cells, using methods which nowadays are safer because they avoid the genomic integration of the viral vectors containing reprogramming factors. The potential of iPSCs to differentiate into any cell type of the body was previously explored by the studies with mouse and human embryonic stem cells (ESCs), which are blastocyst-derived pluripotent populations. Both iPSCs and ESCs may offer direct access to study the cells making the nervous system, but the most straightforward route to disease models is through neurons differentiated from iPSCs generated from patients with a variety of neurologic or neurodegenerative conditions [1, 2].
Although significant advances have been made, most of the protocols for the differentiation of human PSCs into neurons yield cellular populations which can only partially mirror the functional characteristics detected in vivo.
As it happened for the generation of other human neural or non-neural cells and especially for the generation of the cerebral cells (reviewed in [3, 4]), the improvements in the generation of cerebellar neurons will definitely come from a better knowledge of the human cerebellum and its developmental pathways.
The human adult cerebellum is the second largest brain part (after the cerebral cortex) and contains around 80 billion neurons (which represents four times more neurons than in the cerebral cortex) [5, 6, 7, 8]. These neurons contribute to the complex cerebellar functions, including the control of movements for performing fine-tuning and coordination [9, 10], as well as of cognitive and emotional processes [11, 12]. The morphological and functional organization in the cerebellum, intensively investigated in rodents, is highly conserved across vertebrates . Both human and mouse cerebella contain two lateral hemispheres connected by a region named vermis. The lateral hemispheres are subdivided into lobes and lobules and, together with vermis, covered by a uniformly pliated gray matter forming the cerebellar cortex. Cerebellar neurons have their cell bodies (somas) located in the cerebellar cortex and in the nuclei situated inside the white matter of each cerebellar hemisphere, called deep cerebellar nuclei (DCN). There are four distinctive DCN in mouse (dentate, fastigial, emboliform and globose), while the last two are fused as the interposed nucleus in human [10, 13].
The higher number of lobules in humans makes the cerebellar cortex more expanded relative to mice; in spite of the increase in size, both the volume of the cerebellum as a percentage of the total brain and the ratio of the number of neurons in the cerebellum to the cerebral cortex is remarkably constant across mammalian species, pointing to the concomitant increase of the cerebellum and the cerebral cortex in humans [6, 8, 14, 15, 16, 17].
The morphological organization of the adult cerebellum is schematically presented in Figure 1. The neurons located in the cerebellar cortex form three laminar structures laying between the internal white matter and the external pia mater: the granular layer (GL, named also the inner GL), the Purkinje layer (PL) and the molecular layer (ML). The GL contains the densely packed granule cells, which are the most abundant cell type in cerebellum and in the whole brain, as well as few other cells, such as Golgi cells (with different subtypes, such as Lugano, globular and candelabrum) and unipolar brush cells. PL is a narrow middle zone that contains the large cell bodies of the Purkinje cells, together with the cell bodies of a special type of glial cells named Bergmann glia. The ML contains mainly cell projections, but also a few entire neurons such as the basket cells located near the PL and stellate cells located near the pia mater.
In addition to the shape and location of their cell bodies, the cerebellar neurons are characterized by other intrinsic properties, including their neurochemical profiles (neurotransmitters, associated neuropeptides and receptors), electrophysiological profiles and, in recent years, high-throughput transcriptional fingerprints. Based on the neurotransmitters used for synaptic communication, cerebellar neurons are set into two main classes: inhibitory (GABAergic) and excitatory (glutamatergic) neurons.
Regarding tissue architecture and connectivity, the cerebellar neurons are arranged as repeating units in a highly regular manner, relatively identical in all areas of the cerebellar cortex. Granule cells and excitatory neurons in DCN are projection neurons, while inhibitory neurons in the cortex (Golgi cells, stellate cells and basket cells) and DCN, and the unipolar brush cells are interneurons. Granule cells receive excitatory signals from neurons of the brainstem or spinal cord, mainly with a station in the middle or inferior cerebellar peduncle, via the mossy fibers.
The axons of granule cells project to the ML, where they form the parallel fibers, which intercept the dendrites of Purkinje cells at right angles. There are ~200 granule cells per Purkinje cell in mice, while in humans there are ~3000 granule cells per Purkinje cell.
Remarkably, Purkinje cells can exhibit two distinct types of action potential, with simple and complex spikes. The simple spikes represent an autonomous pacemaker activity, with very little variability between spiking intervals, firing in the absence of synaptic inputs. The simple spikes can be modulated by inputs from the mossy fiber–parallel fiber pathway.
A more extensive neuronal characterization was recently performed by high-throughput sequencing, including single-cell sequencing of mouse and human cerebellar tissue [25, 26]. In spite of their quite regular morphology, the cerebellar neurons in each subclass appear as a heterogeneous population, different subsets being defined by several molecular cues, including co-neurotransmitters (e.g. glycine) and neuromodulators (e.g. calbindin, parvalbumin). Markers of some subclasses are related to the position in the cerebellar areas. In addition, a comparative high-throughput analysis of mouse versus human cerebellar cells using single-cell RNA sequencing showed that several genes are expressed in human but not in mouse Purkinje cells and confirmed at the protein level the expression of novel and specific human Purkinje cell markers, in line with the data from the cerebral cortex [28, 29].
Recent progress in genetic technologies has significantly clarified how the cerebellar cells and their circuits are formed in model organisms, especially in mouse [30, 31, 32, 33]. Remarkable advances were made not only in defining of the molecular phenotypes and the differentiation pathways for most of the neural progenitors, but also in understanding of how these synchronize for forming neuronal circuits. Purkinje cells have major roles also during development . They orchestrate the long lasting neurogenesis of the granule cells, the most abundant local excitatory neurons, and the maturation of the local inhibitory neurons, which reciprocally respond by helping in their own maturation.
The human-specific morphological and functional attributes were intensively studied over the last two decades, including for the development of the cerebellum. Mouse mutants for different genes related to developmental diseases affecting the cerebellum in humans demonstrated a considerable evolutionary conservation of the molecular programs across species, but also revealed some human-specific differences. Recent investigations of the developing human cerebellum have emphasized some differences in the organization of the cerebellar progenitor pools. Other human specific differences have been outlined by the single-cell sequencing of different brain cells, including cells in the cerebellum. These high throughput results point out that we still have much to learn about the human cerebellar development, composition and functions.
To what extent can or could the cellular diversity of the adult human cerebellum and, at the same time, the spatial precision of its organization be reproduced in vitro?
The reported strategies for the differentiation of human PSCs toward cerebellar neurons, especially toward Purkinje cells, are reviewed in this chapter, followed by an outlook of their further optimization and diversification by implementing the knowledge from cerebellar development and new cell culture approaches. This outlook includes an overview of the recent progress made in defining the cell populations in developing mouse and human cerebellum, followed by our current knowledge about their development, which includes patterning, proliferation, neurogenesis, gliogenesis, migration, connectivity and maturation. This knowledge is also the basis for the establishment and optimization of the PSC-derived models for cerebellar ataxias. An overview of the reported iPSC-derived in vitro models for cerebellar ataxias is also provided.
2. Differentiation of pluripotent stem cells toward cerebellar neurons
Over the past 20 years, human PSCs, including ESCs and iPSCs [35, 36, 37, 38], have revolutionized research on human development and diseases, particularly for the nervous system. Considerable progress has been made in converting human PSCs into different types of neural progenitors, some of which continued to differentiate toward different classes of neurons.
Most of the reported human PSC-based protocols are an adaptation of the protocols that were previously developed for mouse ESCs, which reflect, to varying extents, different stages of neural differentiation in the mouse embryo. Along the same lines, the differentiation of the human PSCs is expected to reflect different stages of neural differentiation in human embryonic and fetal stages. Remarkably, recent data have demonstrated that several protocols starting from human PSCs produced authentic neurons and structured brain-like tissues, including the cerebral cortex, the most complex structure in the human brain. However, many questions remain about the extent to which the relatively simplistic in vitro systems can recapitulate the complexity of brain development in vivo.
For the neurons making up the human cerebellum, progress in their in vitro generation has been slower.
Increasing understanding of cerebellar development has allowed the elaboration of several protocols in recent years, which made the production of some classes of cerebellar neurons possible, with increasing efficiencies. These protocols were implemented in 2D and 3D cell cultures, or in their combination. As for other brain regions, the differentiation protocols include "directed" steps, meaning controlled differentiation by using extrinsic manipulation approaches, but also steps in which the differentiation advances spontaneously. Most of the protocols use morphogens/growth factors or small molecules with similar functions, which are sequentially administered to mimic the in vivo environment.
Two early studies established mouse ESC differentiation into cerebellar neurons, using different approaches [44, 45], which were followed by several protocols aiming to increase their efficiency. Su et al. used non-adherent ESC clusters in serum-free medium supplemented with fibroblast growth factor 2 (FGF2) and insulin. The cellular spheroids, named serum-free embryoid bodies (SFEB, even though they contained mainly undifferentiated cells at this stage), gradually differentiated into more complex 3D cell aggregates containing a mixture of progenitor cells and neurons, which included some granule cell progenitors and a few neurons expressing early Purkinje cell markers. Following the same conditions, Muguruma et al. showed that the FGF2-treated neural progenitors presented a broad fate, but some cells organized in tissue-like structures resembling the embryonic origin of the cerebellum. These 3D cell aggregates further formed brain organoids, which contained some areas organized as a primitive cerebellar tissue. When cyclopamine, a sonic hedgehog (SHH) antagonist, was added to block the spontaneous ventralization, the proportion of cerebellar cells was increased, including 35–42% Purkinje cell progenitors by day 11 of ESC differentiation. Additionally, this study introduced the selection of the cerebellar progenitor cells, targeting a cell-surface marker expressed in this population (Kirrel2/Neph3). The selected cells survived and integrated into the mouse cerebellum following transplantation.
Salero and Hatten succeeded in generating mouse ESC-derived granule cells at a relatively high efficiency by implementing a protocol in 2D culture based on stepwise treatments with different morphogens. FGF8, WNT1 and retinoic acid (RA) were used in the first step, while bone morphogenetic proteins (BMPs) were used in the next step to obtain the granule cell progenitors, which were next proliferated with SHH and Jagged1 and showed markers expressed in the GL.
The pioneering studies of mouse ESC cerebellar differentiation were next translated to human PSCs and subsequently refined (Table 1). The protocol of Muguruma et al. in 3D culture was applied to human ESCs and iPSCs [49, 50, 52]. Human progenitor cells self-organized in polarized neuroepithelium containing around 10% KIRREL2+ cells after 20 days. Muguruma et al. also refined this protocol and followed a long-term ESC differentiation in 3D culture, an approach which resembled the first generation of human brain organoids. They found that the dorsal hindbrain patterning is more efficient for human cells without cyclopamine. Sequential addition of FGF19 and stromal cell-derived factor 1 (SDF1) generated approximately 28% KIRREL2+ cells (representing the progenitors of the cerebellar inhibitory neurons) and 18% ATOH1+ cells (representing the progenitors of the cerebellar excitatory neurons) by day 35. As for the mouse protocol, KIRREL2+ cells were subsequently selected by fluorescence activated cell sorting (FACS) and differentiated into Purkinje cells in co-culture with murine granule cell progenitors.
|General procedure|Hindbrain patterning|Cerebellar progenitors (VZ)|Cerebellar progenitors (RL)|Cell selection|Neuronal maturation|References|
|---|---|---|---|---|---|---|
|Spontaneous IsO induction; cerebellar organoids|FGF2, insulin, cyclopamine|KIRREL2+|—|—|On organotypic cerebellar slices (rat, human)|Wang|
| |FGF2, insulin; FGF19; SDF1|KIRREL2+|ATOH1+|KIRREL2+ VZ progenitors by FACS|Co-culture with postnatal mouse granule cell progenitors; <150 days|Muguruma, Ishida|
| |FGF2, insulin, TGFβ antagonist|KIRREL2+|ATOH1+|—|Co-culture with e18.5 mouse cerebellar progenitors; <70 days|Watson|
|Controlled neural induction (Noggin, SB431542); directed dorsal hindbrain patterning; selection of neurons|FGF8b and RA|KIRREL2+|ATOH1+|ATOH1-GFP+ by FACS|Cell transplantation in mouse brain|Erceg|
| |D0–4: WNT agonist (CHIR-99021); D4–12: FGF8b|KIRREL2+|—|THY1+ immature neurons by MACS|Co-culture with mouse granule cells|Sundberg|
| |D0–4: WNT agonist (CHIR-99021) 1.5 μM; D4–12: FGF8b (100 ng/ml); D12–24: BDNF|EN1/2, GBX2 (D6)|—|(D22) Negative selection for GD3 by immunopanning; positive selection for NCAM1+ immature neurons by MACS|Co-culture with mouse cerebellar glia (<65 days) and next with granule cells (<89 days)|Buchholz|
Other approaches aimed to increase the proportion of human ESC-derived cerebellar cells by applying the hindbrain patterning conditions tested for mouse ESCs. Erceg et al. [53, 55] treated human ESC aggregates with FGF8b and RA, followed by a manual selection of the neuroepithelial cells organized in polarized structures. This procedure yielded, after further differentiation, a heterogeneous population expressing markers of granule cells, Purkinje cells and glial cells. In a more directed differentiation approach, Sundberg et al. used the WNT agonist CHIR99021, FGF8b and FGF2 for patterning the neuroepithelial cells resulting from the parallel neural induction of human ESCs with dual-SMAD inhibition. The patterned progenitors gradually express the hindbrain, cerebellar and Purkinje cell progenitor markers, such as EN1/2, GBX2, PTF1a, KIRREL2 and SKOR2. Between days 24 and 48 of differentiation, markers of GABAergic phenotype and markers of immature Purkinje cells, such as PCP2, were detected. In order to enrich for the Purkinje cell population, instead of the previously used cell sorting for KIRREL2, Sundberg et al. implemented the THY1+ cell selection, a method previously used to purify mouse Purkinje cells from primary cerebellar cultures. The sorted THY1+ cells further matured into Purkinje cells expressing the early Purkinje cell marker PCP2 (or L7). The same team further optimized the directed differentiation protocol by quantifying the effect of patterning molecules on directing the cerebellar cell phenotypes. They found that the combination of the GSK3 inhibitor CHIR99021 (1.5 μM) for 4 days with FGF8b (100 ng/ml) between days 5 and 12 of differentiation generated the highest proportion of Purkinje cell progenitors. From days 12 to 24, neural cells expressing the cerebellar marker KIRREL2 gave rise to increasing numbers of adjacently located cells expressing Purkinje cell markers. As early as day 35 of differentiation, subpopulations of iPSC-derived cells expressed markers of the primary cerebellar progenitor cells. The postmitotic Purkinje cell marker PCP2 was observed starting from day 18 onward. Flow cytometry analysis showed that ∼23% of cells expressed PCP2 at day 24 of differentiation. A changing element of this protocol was the selection of the immature human PSC-derived Purkinje cells in two steps, a negative selection by GD3 immunopanning and a positive selection by magnetic cell sorting (MACS) with NCAM antibodies.
As for the mouse cerebellar neurons, the conditions used for the in vitro maturation of human PSC-derived cerebellar neurons have so far relied on co-culture with rodent cerebellar cells or tissue.
3. Strategies for the optimization of the human PSC-derived cerebellar cultures
Even though the reported protocols have advanced the generation of cerebellar neurons from human PSCs, they still need a lot of optimization in order to generate homogeneous populations of cerebellar neurons in 2D cultures or cerebellar tissue-like aggregates in 3D cultures. Looking at the previously optimized protocols for generating other neuronal populations, such as the midbrain neurons, the cortical neurons or the cortical organoids, it is relevant to follow again the steps which were gradually applied in order to achieve the efficiency and complexity they offer today (reviewed in [3, 4]). Following this aim, here the development principles of the cerebellar neurons are overviewed, from progenitor specification to neuronal assemblies, followed by an outlook of how these principles could be applied for the optimization of the protocols generating cerebellar neurons from human PSCs.
During early embryo development, the human neural tube is formed by the folding of a sheet of neuroepithelium and is progressively closed and regionalized under the control of temporally and spatially coordinated gradients of morphogens secreted by organizer centers. At the end of the neurula stage, corresponding to embryonic day (E) 28, the neural tube is entirely closed and contains, from anterior to posterior, the three primary brain vesicles (forebrain, midbrain and hindbrain) and the spinal cord. Soon after the definition of the midbrain-hindbrain boundary (MHB), cerebellum starts to form at the most anterior and dorsal hindbrain territory. In humans, the cerebellar development is highly protracted, extending from E30 to the end of the second postnatal year. In mice, cerebellum almost completes over a period of around one month, starting from embryonic day (e) 9 and including the first three postnatal weeks (reviewed in [15, 58, 59, 60] (Figure 2). However, as for the whole brain, the mechanisms of cell differentiation and histogenesis in cerebellum are mainly conserved in mammals. While the development of the mouse cerebellum was intensively studied [15, 30, 32, 33, 34, 58, 61, 62, 63, 64, 65], the embryonic and fetal stages in human cerebellar development were only recently described in details [13, 16, 59, 60]. Notably, as for the other parts of the human brain, the embryonic and fetal stages of development are not available for cellular and functional studies, and their histological and clinical images represent only snapshots in time for one individual. Conversely, developmental time-course experiments in mice can be conducted on multiple mice of identical genotypes. These studies revealed that the ontogenesis of all neurons and glial cells in the nervous system, including the ones in the cerebellum, follows the same steps of (1) patterning and specification of the progenitor cells, (2) neurogenesis/gliogenesis and (3) migration, histogenesis, formation of the neuronal circuits and neuronal maturation (reviewed in [15, 27, 58, 61, 66, 67]). However, in contrast to other CNS areas, including the cerebral cortex, in which gliogenesis follows neurogenesis [68, 69], glia generation in cerebellum parallels or precedes the long-lasting generation of the granule cells and inhibitory neurons [15, 30, 32, 65, 68]. Even though the main developmental programs are conserved from mice to humans, some important specie-specific differences responsible for the expansion of the human cerebellum have been recently identified [59, 60]. In the following brief presentation, the main morphological, cellular and molecular events in mouse are complemented with the available information in human.
3.1 Patterning and specification of the cerebellar progenitor cells
Several studies in mouse showed that all cerebellar neurons and glial cells originate from the hindbrain region corresponding to the dorsal (or alar) part of the first rhombomere (r1) [30, 70]. The anterior limit of the cerebellum is defined by the MHB, also named the isthmus, where an organizer center, the isthmus organizer (IsO), forms early in development and has a major role in the anterior/posterior (A/P) patterning of the midbrain and hindbrain. IsO formation is preceded by a series of patterning events that start in the forming neural plate, where two transcription factors, Otx2 (Orthodenticle Homeobox 2) and Gbx2 (Gastrulation Brain Homeobox 2), define the primitive anterior and posterior domains, respectively. They are further co-expressed in the early IsO and are then differentially expressed in the midbrain and hindbrain domains. WNT signaling has a main role in the A/P patterning of the neural tube but also in IsO induction, as shown by the loss of the IsO in WNT1 homozygous mutants. Shortly after the formation of the primary brain vesicles, Fibroblast Growth Factor 8 (FGF8) secreted by the IsO patterns the adjacent territories [71, 75, 76, 77, 78, 79, 80]. Additional A/P patterning by extra-neurally secreted retinoic acid (RA) defines the metencephalic and myelencephalic secondary hindbrain vesicles. The metencephalon expresses the homeobox gene
Between e9 and e12.5, the cerebellar neuroepithelium undergoes morphological changes: the midline remains as a single cell layer and forms the roof plate, while each lateral part forms two primary proliferative zones, known as the origins of the neural populations in the mouse cerebellum: the cerebellar ventricular zone (VZ) and the rhombic lip (RL) (Figures 2 and 3). By e10, the roof plate becomes the second cerebellar organizer center and secretes factors belonging to the transforming growth factor (TGF)-β family, such as the bone morphogenetic proteins (BMPs), the most important dorsalizing factors in the cerebellum, and gradually transforms into the choroid plexus epithelium (ChPe). By e12.5, the ChPe additionally produces SHH. Genetic fate mapping proved that the morphogens secreted by the IsO, roof plate and floor plate define the cerebellar domains which, in addition to the hindbrain-restricted expression of Gbx2, show the differential expression of two basic helix–loop–helix (bHLH) transcription factors: Pancreatic transcription factor 1a (Ptf1a) specifies the VZ domain and Atonal homolog 1 (Atoh1, also called Math1) specifies the RL progenitor domain [15, 58, 61, 84, 85].
Each cerebellar progenitor zone forms subdomains with their own spatial and temporal identities, which produce specific neuronal subtypes. VZ-derived progenitors give rise to all GABAergic neurons and glial cells of the cerebellum. VZ-derived neurogenesis starts at e10.5 and continues until e17 in mouse. Before neurogenesis starts (~e9), the VZ progenitor domain corresponds to the neuroepithelial cells localized in the VZ of the r1 neural tube (Figure 3). Most of the earliest Ptf1a + progenitors upregulate
The neuroepithelium of the RL gives rise to all glutamatergic neurons in the cerebellum (Figures 2 and 3), but also to extracerebellar neurons such as the pontine neurons [66, 70]. RL Atoh1+ neuroepithelial cells, situated between the roof plate and the VZ domain, start their proliferation after the adjacent VZ progenitors (~e10). The RL neuroepithelial cells also gradually acquire a radial glial phenotype and are patterned into subdomains, which express the paired box gene
The cerebellar proliferative zones in human embryos have been only recently investigated. The human cerebellar VZ (gradually forming the SVZvz) undergoes massive expansion which covers the second month (E30–56), afterwards extinguishing its proliferative potential and remaining as a single cell layer. Conversely, the RL germinal zone remains small during the peak expansion of the VZ progenitors, but starts a significant expansion at around gestational week (GW) 11, when it forms the SVZRL, which persists long after birth [59, 60].
In humans, all Purkinje cells are generated before the 8th GW, which places them among the earliest-born central neurons. They start to migrate at E44 outwards from the VZ along radial glial projections to the pial surface. A broad Purkinje cell multilayer extending into the mantle zone is evident between GW 10 and 13, while a monolayer distribution is achieved by GW 20–24 (Figure 2). Human Purkinje cells start to develop their characteristic extensive and flattened dendritic arbors and long axons in the early fetal stages, their final maturation being achieved postnatally, over a period six-fold longer than in mice [59, 60, 90, 91].
Contrary to the Purkinje cells, which are already postmitotic in the cerebellar SVZvz, the Gbx1+ progenitors expressing the paired homeobox gene
Glutamatergic cerebellar neurons (excitatory neurons in the DCN, granule cells and unipolar brush cells) originate from different subdomains of the RL, in different waves (Figures 2 and 3). The first cells to leave the RL are the newborn excitatory neurons of the DCN. Next, the granule cell progenitors migrate in waves out of the RL, where they continue to proliferate. In the first wave (e10.5–12.5), discrete subpopulations of rostrally situated Atoh1+ cells gradually upregulate
The second wave covers middle to late embryonic stages, when Pax6+ granule cell progenitors leave the RL, migrate out toward the pial surface and undergo a prolonged expansion in a secondary germinal zone, or second transit amplifying center, named the external granular layer (EGL). Granule cell progenitors retain the expression of
Unipolar brush cell differentiation parallels the granule cell progenitor waves (Figure 2). Unipolar brush cells are born starting at e13.4 and continuing until p0–1. Progenitors of the unipolar brush cells express Wnt1 early in development (e10.5–13.5), but this expression is downregulated before they migrate from the RL. The newly generated neurons remain in the RL for an additional 1–2 days, after which they exit the RL and migrate dorsally through the white matter to their final destination. Most unipolar brush cells reach the IGL by p10, several days before granule cell neurogenesis is complete. Their final maturation occurs between p2 and p28, which seems to coincide with the establishment of the first synaptic contacts with external mossy fibers [15, 27, 88].
3.2 Coordinated formation of the cerebellar circuits
The successful construction of the neuronal circuitry relies on the coordinated generation of functionally opposed neurons. Accordingly, the differentiation programs of cerebellar excitatory and inhibitory neurons are interdependent and defined as the coordinated integration of the VZ and RL-derived lineages in local circuits, in both the cortex and DCN. For the DCN, the cell fate of the excitatory neurons appears determined at the RL, in a temporal pattern, while the interneuron progenitors migrate, differentiate and integrate in the NTZ after receiving local signals from the excitatory neurons.
Purkinje cells have a remarkable capacity to regulate developmental events by sending SHH signals bi-directionally. Starting at e16.5 and continuing throughout adulthood,
In the third trimester and postnatally, the human cerebellum undergoes its major growth, primarily due to the prolonged expansion of the granule cell progenitors. By GW 10–11, streams of cells forming the external granular layer (EGL) were observed along the pial surface, connecting to the RL. Due to extensive EGL proliferation, the human cerebellum increases five-fold in size between GW 24–40. Differentiation and maturation of the human cerebellar neurons progress mainly as in the mouse, but there are some species-specific features. Foliation correlates with EGL proliferation and increases dramatically between GW 20–32, as the cerebellum rapidly increases in size and volume. The formation of the Purkinje cell monolayer coincides with the peak of EGL proliferation [89, 90]. The human cerebellar cortex still has a prominent EGL at birth. The EGL gradually decreases in thickness as a result of migration of granule cells into the internal GL. By the end of the second postnatal year, the EGL is depleted, while the thickness of the molecular layer and the length of the PL increase, concomitant with the increasing cerebellar volume [89, 90]. To date, there are few studies on the development of the human interneurons, both inhibitory and excitatory, which represent a minority compared to the granule cells but have a major role in the maturation of Purkinje cells and circuit formation [15, 34, 58, 91, 101].
In addition, single-cell sequencing techniques have been applied to different stages of mouse cerebellar development [62, 102]. Carter et al. performed single-cell RNA-sequencing and unbiased classification of around 40 thousand murine cerebellar cells from eight embryonic samples (e10–e17) and four postnatal samples (p0, p4, p7 and p10). Such an approach allows a more comprehensive detailing of the transcriptional and cellular heterogeneity among lineages of interest and can provide a valuable resource for answering further questions related to cerebellar development and diseases. In a similar study, Peng et al. analyzed around 20 thousand cells from mouse postnatal cerebella and examined not only the dynamics of interneuron differentiation but also mitochondrial markers and ataxia risk genes. In a complementary approach, gene expression in the postnatal stages of mouse cerebellar development was analyzed by Buchholtz et al. in Purkinje cell populations selected from mice expressing a
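As a concrete illustration of the unbiased single-cell classification applied in these studies, the sketch below outlines a typical clustering workflow. It is not the authors' pipeline: the use of Scanpy, the input file name and all parameter values are assumptions made here for illustration only.

```python
import scanpy as sc

# Hypothetical merged time-course matrix (e.g., e10-p10 cerebellar cells)
adata = sc.read_h5ad("cerebellum_timecourse.h5ad")

# Basic quality filtering and normalization
sc.pp.filter_cells(adata, min_genes=500)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Dimensionality reduction and graph-based (unbiased) clustering
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)
sc.tl.umap(adata)

# Inspect lineage markers discussed in this chapter (Ptf1a for the VZ, Atoh1 for the RL)
sc.pl.umap(adata, color=["leiden", "Ptf1a", "Atoh1", "Pax6"])
```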
3.3 From development of the cerebellum to the optimization of the human PSC differentiation protocols
There are several steps to be considered for the cerebellar protocols, which practically cover all the developmental stages: from neural induction and dorsal hindbrain patterning to the patterning and proliferation of the VZ-like and RL-like progenitors, to the neurogenesis of the selected progenitors, and lastly to the maturation of the neurons and the formation of the neuronal circuits. Are the previously used neural induction and early patterning conditions (in both 2D and 3D approaches) optimal for the generation of progenitors similar to the ones in the dorsal r1 in the neurula stage, which represent the origin of the neurons making the cerebellum? Are the previously used conditions optimal for the uniform generation of early VZ and RL progenitors? Which factors and what timing would be necessary for a uniform patterning towards VZ or RL subpopulations? Which conditions would be efficient to produce a uniform neurogenesis from different progenitors? What would the defined conditions for the neuronal maturation be? How can the neuronal maturation be faster? How can other neuronal subtypes, such as the interneurons in the cerebellar cortex and in the DCN, be generated uniformly and efficiently?
Some recent strategies were successful for the optimization of the protocols for cerebral neurons and cerebral organoids. It remains to be tested whether these strategies can be extrapolated to cerebellar cultures. Again, the solutions may come from developmental principles. The main trajectories that could be followed from human iPSCs to the neuronal cell types contained in the cerebellum are outlined in Figure 4 and detailed in the following paragraphs.
Some previous protocols used FGF2 to amplify the neuroepithelial population and showed that, although an anterior phenotype is kept for a few passages in the presence of FGF2, longer exposure gradually patterns human progenitors toward midbrain and hindbrain fates [105, 107, 108]. FGF2 was used by Muguruma et al. to induce a broad midbrain-hindbrain patterning, including IsO-like cells, in 3D spontaneously differentiating human PSCs in serum-free medium, for a time approximating MHB formation in human embryos. However, the reproducibility of this protocol is limited and the efficiency of the neural induction and patterning was not investigated; many cells in the 3D clusters could present a more anterior phenotype (and perhaps non-neural phenotypes). Watson et al. proposed parallel neural induction and hindbrain patterning using FGF2 in combination with the SMAD inhibitor SB431542 for around 20 days. Even though this showed increased expression of hindbrain and cerebellar markers, the efficiency and selectivity of the approach were not reported.
Modulation of WNT signaling was shown to increase midbrain and hindbrain patterning and reduce spontaneous forebrain patterning in human PSC-derived neural cultures [28, 41, 54, 109, 110]. In Kirkeby et al. and Kirkeby et al., neural induction with dual-SMAD inhibition and patterning were applied in parallel for 9 days. The GSK3 inhibitor CHIR99021 was used at 1–2 μM concentration for patterning towards the anterior r1 fate. Following this protocol with some modifications, Sundberg et al. applied neural induction and hindbrain patterning by WNT at the same time, for 12 days, with noggin and 1.7 μM CHIR99021, while in a subsequent study from the same group, neural induction and patterning with 1.5 μM CHIR99021 were applied for only 4 days. In both studies, FGF8b (100 ng/ml) was added from day 4 to day 12 of differentiation, while FGF2, applied at day 10–12 in Sundberg et al., was excluded in the later protocol. However, the resulting cell populations in both studies were not directly phenotyped, but only after 16 or 32 days of differentiation, when they contained KIRREL2+ or THY1+ cells, respectively, which were selected by FACS. Further optimization of neural induction and hindbrain patterning requires a deeper investigation, including negative markers for forebrain, midbrain, hindbrain (except r1) and ventral fates (especially for r1). The dorsal r1 cells should concomitantly and uniformly express GBX2 and EN1/2. Obviously, reporter lines for different genes expressed solely in r1, such as HOXA1, would be very useful tools.
In addition, a study using human hindbrain tissue from embryos at GW 5–7 showed that hindbrain neuroepithelial cells were stably expandable in FGF2 and EGF conditions, but that a short treatment with FGF8 and WNT (for one passage) markedly increased the expression of GBX2, EN1 and EN2. A deeper investigation of the human embryonic dorsal hindbrain tissue could provide hints for the optimization of human PSC differentiation protocols toward cerebellar cells. The human embryonic hindbrain neuroepithelial cells can be further patterned
Another approach may come from the optimization of long-term cultures of cerebellar organoids, in line with the extensively investigated field of cerebral organoids. As shown in several previous reports, functional synaptic connections, which involve glia and target neurons, are necessary for the maturation and activity of human PSC-derived neurons; all of these could be provided within the same cerebellar organoid.
Again, one limitation for most human PSC-derived neurons, as for human neurons in general, is the lack of transcriptomic signatures to rigorously identify specific types of neurons and to compare their development across species. A recent metagene projection analysis of global gene expression patterns revealed that differentiating human PSC-derived Purkinje cells share classical and developmental gene expression signatures with developing mouse Purkinje cells. Remarkably, it revealed that human PSC-derived Purkinje cells matured in co-culture for around two months are closest to late juvenile (p21) mouse Purkinje cells, suggesting that they are relatively mature. Gene expression profiling also identified human-specific genes in human PSC-derived Purkinje cells. Protein expression for one of these human-specific genes, CD40LG, a tumor necrosis factor superfamily member, was confirmed in native human cerebellar tissue, arguing for the bona fide nature of the human PSC-derived cerebellar neurons. Clearly, routine application of single-cell transcriptomics to the optimization of human PSC-derived cerebellar differentiation protocols will contribute greatly to progress in the field.
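The metagene projection approach cited above is not reproduced here, but a much simpler correlation-based comparison conveys the idea of matching an in vitro population to a developmental reference. The file names, the prior mapping to mouse orthologues and the choice of Spearman correlation are assumptions for illustration only.

```python
import pandas as pd

# Genes x samples (hypothetical, already mapped to mouse gene symbols)
hpsc = pd.read_csv("hpsc_purkinje_expression.csv", index_col=0)
# Genes x developmental stages (hypothetical mouse Purkinje cell reference, e.g., p0..p56)
mouse = pd.read_csv("mouse_purkinje_stages.csv", index_col=0)

# Restrict both tables to the shared genes
shared = hpsc.index.intersection(mouse.index)
hpsc, mouse = hpsc.loc[shared], mouse.loc[shared]

# Correlate the mean hPSC-derived profile with each mouse stage; the best match
# gives a rough estimate of in vitro maturation (p21 in the study discussed above)
profile = hpsc.mean(axis=1)
similarity = mouse.apply(lambda stage: profile.corr(stage, method="spearman"))
print(similarity.sort_values(ascending=False))
```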
4. iPSC-derived models for cerebellar ataxias
The iPSC technology together with the cerebellar differentiation protocols offer the opportunity to indirectly generate and to directly study the most affected cells in patients with cerebellar ataxias, the cerebellar neurons. As schematically presented in Figure 5, somatic cells such as skin fibroblasts or white blood cells obtained from patients are reprogrammed into iPSCs, which can be theoretically differentiated into any type of neurons. Ideally, the neuronal differentiation should address the most affected subpopulation in each disease, by following the existing protocols or optimized protocols in the desired direction (using development principles and combining efficient selection methods). Remarkably, for the inherited ataxias, the patient iPSC-derived neurons express the disease mutation in the authentic genetic background and cellular environment, which is not the case in the animal models.
The neuropathological events in hereditary cerebellar ataxias affect both cerebellar and extracerebellar territories. Nevertheless, degeneration and ultimate loss of cerebellar neurons is a neuropathological hallmark in cerebellar ataxias. The affected cerebellar neurons and the genes responsible for several cerebellar ataxias are presented in Table 2. Spinocerebellar ataxias (SCAs) are a family of over 40 currently described late-onset dominant diseases, manifesting clinically in middle age and gradually progressing with neurodegeneration in the cerebellum and other CNS areas [136, 137, 138, 139], while in other genetic ataxias, such as the autosomal recessive Friedreich ataxia (FRDA) and ataxia-telangiectasia (A-T), the disease manifests much earlier and, in addition to the nervous system, extraneural territories are affected [137, 138]. FRDA is considered a multi-systemic condition, including central and peripheral neuropathies, diabetes and cardiomyopathy [140, 141].
|Ataxia type|Affected cerebellar neurons|Affected protein|Human iPSC-derived neurons|References (iPSC-derived models)|
|---|---|---|---|---|
|SCA1|PCs++, DCN++|Ataxin-1|—|[114, 115]|
|SCA2|PCs+++, DCN+++|Ataxin-2|CNS|[116, 117]|
|SCA3|PCs+, DCN+++|Ataxin-3|CNS|[117, 118, 119, 120, 121, 122]|
|SCA6|PCs+++, GCs+, DCN++|α1A & α1ACT|cerebellar|[51, 107]|
|SCA7|PCs+++, DCN+++|Ataxin-7|CNS|[123, 124]|
|SCA12|PCs+++, DCN+++|PPP2R2B|—|—|
|SCA17|PCs+++, DCN+++|TBP|—|—|
|—|—|—|—|—|
|SCA36|PCs+++, DCN+++|—|CNS|—|
|SCA42|PCs+++, DCN+++|Cav3.1|cerebellar|—|
|FRDA|PCs+, DCN+++|Frataxin|PNS, CNS|[128, 129, 130, 131, 132, 133, 134]|
|A-T|PCs+++, GCs+++|ATM|cerebellar|—|
In the cerebellum, SCA1, SCA3 and FRDA involve mainly the DCN, especially the dentate nucleus, but also extracerebellar territories such as Clarke's column, which present with severe neuronal loss. SCA2 predominantly affects the pontine nuclei, while the Purkinje cells and DCN seem to be secondarily affected. SCA31 is relatively restricted to the Purkinje cells. Although Purkinje cells are predominantly involved in SCA6, degeneration is also evident in the dentate nucleus and granule cells. Therefore, patients with SCA6 show more severe ataxia than those with SCA31. Several SCA subtypes have CAG repeat expansions in the coding region of different genes (http://www.scabase.eu/; [143, 144, 145, 146]), resulting in PolyQ elongations in the respective proteins, with the elongation size correlating with the intensity of clinical manifestations. In other SCAs (SCA12, SCA31 and SCA36) and non-SCA monogenic ataxias, such as FRDA, the repeat expansion is intronic, but in these diseases too the cerebellar dysfunction is correlated with the elongation size.
Modeling these human genetic disorders in mice has reproduced the neuropathological aspects to a certain extent and has provided some insights into disease mechanisms. Many disease mechanisms that have been explored in mouse models are expected to be recapitulated in patient iPSC-derived neurons. However, some ataxias could not be modeled in mice using the same mutation as in the patients, suggesting that the human-specific environment is essential for the disease to develop. Additional mechanistic understanding of the network of events produced by the mutation is crucial for the development of effective therapies, as none of the cerebellar ataxias is yet curable, treatable or preventable [143, 145, 147, 148, 149].
For modeling cerebellar ataxias, iPSC-based models present three main advantages. First, most cerebellar ataxias are monogenic diseases. Second, neurons bearing the mutation, which are not directly available from patients, can be generated
However, as presented in Table 2, relatively few studies have succeeded in generating iPSC-based models for cerebellar ataxias. An additional important question for the iPSC-based models is to what extent the mutated gene is expressed in the neurons generated
A handful of studies published to date have addressed iPSC models of PolyQ SCAs (such as SCA1, 2, 3, 6, 7 and 12), non-PolyQ SCAs (such as SCA36 and 42), and other ataxias (such as FRDA and A-T). Most of the iPSC-based models used a generic differentiation towards the neural lineage, as opposed to the generation of specific neuronal subtypes, and very few characterized the neuronal phenotypes. The only reported iPSC-derived models addressing the cerebellar neurons were for SCA6, SCA42 and A-T.
For SCA1 and SCA12, only the generation of patient-derived iPSCs has been reported to date [114, 115, 119, 125]. Several other SCA models have already addressed the neural phenotypes. SCA2 was modeled by Xia et al. and by Chuang et al. using patient iPSC-derived neural progenitors and central neurons. No cerebellar protocol has yet been applied to SCA2, in which both Purkinje cells and DCN neurons are affected. Whereas patient and control fibroblasts showed comparable levels of expression of the disease-causing protein Ataxin-2, its expression was decreased in patient iPSC-derived neural stem cells, which showed shorter survival in culture. Chuang et al. reported that SCA2 neurons exhibited a glutamate-dependent disease phenotype, which was suppressed by anti-glutamate drugs and calcium stabilizer treatment.
One of the first studies generating neurons from patient iPSCs addressed SCA3, also called Machado-Joseph disease (MJD). In this model, neuronal excitation by glutamate promoted an increase in intracellular calcium concentration and proteolysis of Ataxin-3, triggering its aggregation, a hallmark of the disease in patients. This intraneuronal aggregation (which was also found to depend on sodium and potassium channel function, as well as on ionotropic and voltage-gated calcium channel function) was abolished by calpain inhibition, pointing to a key role of this protease in Ataxin-3 cleavage. Furthermore, intracellular aggregations were not observed in patient iPSCs, fibroblasts or iPSC-derived glial cells, providing a clue to the neuron-specific phenotype observed in SCA3 patients. Hansen et al. differentiated the SCA3 patient-derived iPSCs further into hindbrain neurons that expressed
SCA6 is a particularly interesting case: first, because it is one of the three diseases for which patient iPSC-derived cerebellar neurons have been generated to date, and second, because of the bicistronic nature of the affected gene,
For SCA7, in which cerebellar and retinal cells degenerate, Luo et al. reported the generation of iPSCs and neurons from a SCA7 patient, but did not characterize the neuronal or disease phenotype. Ward et al. generated SCA7 patient-derived iPSCs and isogenic lines transduced with either normal or expanded ATXN7. They reported that SCA7 iPSC-derived neural progenitors exhibit altered metabolism and mitochondrial dysfunction.
SCA36 and SCA42 are non-PolyQ autosomal dominant diseases, affecting cerebellar neurons as well as other neurons. Matsuzono et al. generated motor neurons from patient-derived iPSCs and recapitulated an increase in RNA foci-positive cells, which could be markedly suppressed by antisense oligonucleotide treatment. SCA42 is caused by a mutation in
For FRDA, pioneering work revealed that abnormal expansion of GAA repeats led to upregulation of the DNA mismatch repair protein MSH2 in FRDA patient-derived iPSCs. The authors reported that the functional inhibition of
A-T is caused by several mutations in the ATM gene.
5. Strategies for optimizing the neuronal models of cerebellar ataxias
Of particular interest in future research on the cerebellar ataxias is the comparison between affected and unaffected neuronal types, in order to identify the particular characteristics that render specific neuronal populations vulnerable to a genetic insult that is ubiquitously present. One of the most crucial needs is to establish a reliable and consistent disease phenotype in a relevant cell population, and to generate those cell types in relatively large quantities
Differentiation into specific and mature neurons that are the disease targets, such as Purkinje cells for several SCAs, or solely DCN neurons for some ataxias, or both of them for most SCAs (Table 2), will enable the construction of more reliable disease models. However, the suitability of iPSC-derived neurons for modeling late-onset conditions remains controversial, particularly given the immature, fetal-like phenotypes of the neurons generated from these cells.
Remarkably, in contrast to the immature morphology observed for human PSC-derived Purkinje cells, a recent bioinformatics analysis of their gene expression showed that they most closely resemble late juvenile (p21) mouse Purkinje cells, the stage at which most of the cerebellar disease phenotypes in several animal models start to manifest. This finding suggests that these Purkinje cells are among the most mature human PSC-derived central neurons analyzed to date. This approach also underscores the utility of transcriptomic analysis for assessing the maturation of human PSC-derived neurons and validates the use of hPSC-derived neurons for modeling cerebellar ataxias.
Still, it is possible that the disease phenotypes of adult-onset conditions, as most genetic SCAs are, may never be fully recapitulated under 2D cell culture conditions, even with directed protocols and optimized maturation. Generation of 3D cerebellar-like tissues such as cerebellar organoids may allow increased neuronal maturation
Another way to model late-onset diseases is the addition of neural stressors, such as reactive oxygen species, pro-inflammatory factors and toxins, or forced aging, as schematically presented in Figure 5. These approaches have already been used for modeling several SCAs and other neurologic diseases [153, 155, 156, 157]. However, in an ideal situation, these stressors should only exacerbate the disease phenotype, which in a good model should be evident solely from the expression of the mutation in the disease-relevant cells. Another approach is to genetically manipulate the system to force aging, for example by overexpression of progerin in neural progenitors. With this approach, the disease phenotype is expected to manifest
On the other hand, recent evidence from cell and animal models indicates that abnormalities in early Purkinje cell development may contribute to the pathogenesis of the ataxias. Purkinje cell developmental abnormalities are clearly evident in a wide range of ataxic mouse mutants, including models of the degenerative SCAs. The observed Purkinje cell developmental defects commonly include impaired dendritic arborization, resulting in synaptic deficits affecting CF and PF connections and ultimately altering Purkinje cell physiology. Similar impairments in Purkinje cell dendritogenesis and synapse formation have been described in mouse models of SCA5, and in cell and mouse models of SCA14, SCA1, SCA3 and SCA5. Given the increasing evidence for Purkinje cell developmental abnormalities in cerebellar ataxias, it seems likely that iPSC-derived models, which are capable of recapitulating early developmental events
Another limitation in the field of modeling cerebellar ataxias is that most studies implemented the production of iPSCs from only a few patients. On one hand, addressing larger patient cohorts may allow the identification of more accurate phenotypes. On the other hand, for investigating the pathological function of a mutation, the ideal situation is to compare the cells bearing the mutation with control cells of an identical genetic background. The rapid development of CRISPR/Cas9-mediated genome editing is likely to result in significant advances in the field, allowing the correction of disease-causing mutations in iPSCs, which can then be used to create paired isogenic lines and produce better disease models in which far fewer patient-derived cell lines will be necessary. This has already been performed even for the 'difficult to correct' elongations, as in SCA3 and SCA7, and is expected in the near future to constitute the norm for all iPSC-derived disease models.
The establishment of efficient, reproducible cellular models of cerebellar dysfunction and degeneration will be important not only in elucidating the molecular basis of these diseases, but also in the development of effective therapies. Establishment of special cell cultures, such as Purkinje cells from patients with cerebellar ataxia, provides opportunities to screen for drugs that may correct the observed disease phenotypes. These cell cultures can be combined with stressors capable of eliciting phenotypes in late-onset conditions and genotypic modifiers of disease progression and drug response. In addition, these cerebellar cell cultures may be used for toxicity screens, to assess the effects of novel compounds on relevant cell types, or for differentiation screens, to identify compounds capable of enhancing self-renewal, maturation or survival of specific cerebellar cells (Figure 5).
6. Final remarks
Recent technologies for producing iPSCs from patients combined with the differentiation of PSCs into neural cells and the self-organizing 3D neural tissues have provided a new way to experimentally investigate the developmental and disease mechanisms of the human brain. While several challenges have hindered the generation of cerebellar neurons
However, human PSC-based models offer distinct advantages for the study of cerebellar ataxias. Cerebellar neuronal models are likely to provide valuable insights into the selective vulnerability of distinct neuronal subtypes, particularly the Purkinje cells. More directed and/or complex approaches will allow for the generation of accurate, disease-relevant models for the study of the molecular mechanisms underlying cerebellar ataxias, and the development of the long-awaited therapies.
Acknowledgments
This work was supported by Austrian Science Fund (FWF), Project P26886-B19, Austria.
This protocol describes how to differentiate human induced pluripotent stem cells (iPSCs) into Purkinje cells. Human iPSCs are first differentiated into Neph3+ Purkinje progenitors. To promote maturation of the Purkinje progenitors in vitro, a co-culture system with rat or human fetal cerebellar slices is used. Furthermore, Purkinje progenitor cells are injected into the cerebellum of newborn immunodeficient mice to test their differentiation ability in vivo.
It remains a challenge to differentiate human induced pluripotent stem cells (iPSCs) into Purkinje cells. Purkinje neurons are the sole output neurons of the cerebellar cortex and are often affected in spinocerebellar ataxias (SCAs) and other conditions such as ethanol exposure and autoimmune diseases. Obtaining patient-specific Purkinje neurons would offer a valuable tool to model cerebellar diseases in a culture dish, for investigating the disease mechanisms of genetic SCAs, for drug screening, and potentially for regenerative approaches to replace damaged Purkinje cells. Fgf8 and Wnt1 are two key regulators of Purkinje cell development (1, 2); however, simply adding Fgf8 and Wnt1 to the culture is not sufficient to direct the specification of iPSCs towards the Purkinje lineage (3, 4, 5). Here, we present a protocol (6, 7) in which insulin and bFGF are added to the iPSC culture in a time-sensitive manner, to activate a self-sustaining pathway producing endogenous Fgf8 and Wnt1 and to drive the differentiation of iPSCs to Purkinje progenitors/precursors. Furthermore, the progenitors are differentiated into Purkinje neurons by co-culture with cerebellar organotypic slices.
Reagent Setup
iPSC preparation
Differentiation of human iPSCs to Purkinje progenitors
Co-culture of human Purkinje progenitors with cerebellum slices
General preparation:
Slice preparation:
Human fetal cerebellum slice culture
Sorting of Neph3+ cells
Microinjection of human Purkinje progenitors into cerebellum of SCID mice
The study was supported by a grant from Sanofi R&D, and by the National Basic Research Program of China (2011CB965103 and 2012CBA01307), the National Natural Science Foundation of China (81422014,31340075, 31070946, 81141014), and Beijing Natural Science Foundation (5142005).
Figure 1: Instruments and slices.
Figure 2: Rosettes with Neph3 staining at Day 20. Bar = 50 μm.
Figure 3: Purkinje cells in co-culture with rat cerebellum slices.
Differentiation of human induced pluripotent stem cells to mature functional Purkinje neurons, Shuyan Wang, Bin Wang, Na Pan, Linlin Fu, Chaodong Wang, Gongru Song, Jing An, Zhongfeng Liu, Wanwan Zhu, Yunqian Guan, Zhi-Qing David Xu, Piu Chan, Zhiguo Chen, and Y. Alex Zhang, Scientific Reports 5, 18/03/2015, doi:10.1038/srep09232
Shuyan Wang, Bin Wang & Zhiguo Chen, Zhiguo Chen's Lab, Xuanwu Hospital Capital Medical University
Correspondence to: Zhiguo Chen ([email protected])
Source: Protocol Exchange (2015) doi:10.1038/protex.2015.035. Originally published online 18 April 2015.
Oligodendrocytes are supporting glial cells that ensure the metabolism and homeostasis of neurons through specific synaptic axoglial interactions in the central nervous system. These interactions require key myelinating glial trophic signals important for growth and metabolism. Thyroid hormone (TH) is one such trophic signal that regulates oligodendrocyte maturation, myelination and oligodendroglial synaptic dynamics via either genomic or nongenomic pathways. The intracellular and extracellular transport of TH is facilitated by a specific transmembrane transporter known as the monocarboxylate transporter 8 (MCT8). Dysfunction of MCT8 due to mutation, inhibition or downregulation during brain development leads to inherited hypomyelination, which manifests as psychomotor retardation in the X-linked inherited Allan-Herndon-Dudley syndrome (AHDS). In particular, oligodendroglial-specific MCT8 deficiency may restrict intracellular T3 availability, culminating in deficient metabolic communication between the oligodendrocytes and the neurons they ensheath, potentially promoting adult neurodegenerative diseases such as multiple sclerosis (MS). Based on the therapeutic effects exhibited by TH in various preclinical studies, particularly its remyelinating potential, TH has now entered the initial stages of a clinical trial to test its therapeutic efficacy in relapsing-remitting MS patients (NCT02506751). However, TH analogs such as DITPA or Triac may well serve as future therapeutic options to rescue mature oligodendrocytes and/or promote oligodendrocyte precursor cell differentiation in an environment of MCT8 deficiency within the CNS. This review outlines therapeutic strategies to overcome the differentiation blockade of oligodendrocyte precursors and maintain mature axoglial interactions in TH-deprived conditions.
Pluripotent stem cells (PSCs) are a valuable tool for interrogating development, disease modelling, drug discovery and transplantation. Despite the burgeoning capability to fate-restrict human PSCs to specific neural lineages, comparable protocols for mouse PSCs have not advanced to the same degree. Mouse protocols fail to recapitulate neural development and consequently yield highly heterogeneous populations, yet mouse PSCs remain a valuable scientific tool, as differentiation is rapid and cost-effective, and an extensive repertoire of transgenic lines provides an invaluable resource for understanding biology. Here we developed protocols for neural fate restriction of mouse PSCs, using knowledge of embryonic development and recent progress with human equivalents. These methodologies rely upon naïve ground-state PSCs temporarily transitioning through a LIF-responsive stage prior to neural induction and rapid exposure to regional morphogens. Neural subtypes generated included those of the dorsal forebrain, ventral forebrain, ventral midbrain and hindbrain. This rapid specification, without feeder layers or embryoid-body formation, resulted in high proportions of correctly specified progenitors and neurons with robust reproducibility. These neural progenitors/neurons will provide a valuable resource for further understanding development, as well as disorders affecting specific neuronal subpopulations.
Introduction
Murine pluripotent stem cells (mPSCs) are a powerful research tool to study development, establish in vitro disease models, and facilitate advances in transplantation and drug screens targeted at neural repair. Since their initial isolation more than three decades ago1, mouse embryonic stem cells (mESCs) have been widely used for these purposes; however, limitations associated with the variability and heterogeneity of differentiation protocols have hampered progress.
A key limitation inherent in early protocols is the reliance on co-culture with stromal cell lines to promote neural induction2,3. Although co-culture protocols demonstrate moderate neuralization, variable differentiation efficiencies are unavoidable due to batch-to-batch variation in feeder-secreted factors including immunogenic proteins, growth factors and extracellular matrix ligands. While feeder-free neural differentiation alternatives have been developed, they rely on the spontaneous differentiation properties of mESCs in 3-dimensional embryoid bodies (EBs) or 2D cultures, which are inherently variable4,5,6. Specifically, spontaneous differentiation results in contamination with non-neuronal derivatives and/or generates highly heterogeneous cultures. Thus, new protocols to derive specific neural populations under defined culture conditions are warranted.
Studies of fetal CNS development have identified key morphogens involved in the formation of specific brain regions, such as the ventralizing factor sonic hedgehog (SHH) and the caudalizing proteins fibroblast growth factor 8 (FGF8) and Wnt17. Significant advancements in the differentiation of human PSCs into restricted neural lineages have come with the early administration of these factors concurrently during neuralization8,9,10,11, resulting in highly homogeneous neural populations that accurately reflect not only CNS regions but also lineage subtypes.
A final consideration is the pluripotent state of the cells at the commencement of differentiation. Two states of mESCs have been described: (i) serum and LIF (S/L)-dependent ESCs, which reflect a more unstable pluripotent state of the pre-implantation blastocyst's inner cell mass (and are the most widely employed in ESC studies), and (ii) naïve mESCs, also referred to as 'ground-state' stem cells, which represent a stable pre-implantation stage12. Naïve state cells are obtained by culturing mESCs in a defined medium (2i) that contains MEK and GSK3ß inhibitors to maintain this ground state and block differentiation signals13. These naïve mESCs have numerous advantages over S/L-responsive mESCs cultured in batch-variable serum-based medium, including greater morphological homogeneity, enriched expression of pluripotency transcription factors and reduced levels of lineage-specific transcripts14,15,16,17. In this regard, naïve mESCs are likely to improve the efficiency and reproducibility of differentiation.
For the first time, we successfully induced neural differentiation of naïve mESCs and compared the derivatives to S/L-dependent mESC counterparts. Subsequently, we utilized naïve pluripotent stem cells (PSCs), and advances in hPSC differentiation techniques, to establish methods to derive four region-specific (dorsal forebrain, ventral forebrain, ventral midbrain and hindbrain) neural populations. Of these regions, the ventral forebrain was further divided into rostral and caudal subpopulations, inclusive of ganglionic eminence and hypothalamic-like progenitors, respectively. These five novel protocols importantly rely on the early patterning and specification of PSCs using targeted morphogens and small signaling molecules delivered within the first days of neural induction. Extensive cytochemical, cell sorting and transcriptional profiling analyses confirmed the regional specification of the resultant progenitors and neurons.
Experimental Procedures
Maintenance of PSCs
The mouse ESC lines E14TG2a (ATCC, USA) and our generated reporter lines (Lmx1a-eGFP and Pitx3-eGFP)18,19, as well as the fibroblast-derived induced pluripotent stem cell (iPSC) line M2RttA/OKSM, were maintained undifferentiated in either: (i) a basic leukemia inhibitory factor (LIF) medium including serum, or (ii) serum-free 2i medium. Basic LIF medium consisted of Knockout DMEM, 15% fetal bovine serum (FBS, Sigma-Aldrich), 1x penicillin/streptomycin (P/S), 1x glutamax, 1x non-essential amino acids (NEAA), 0.11 mM beta-mercaptoethanol and 2000 IU/ml LIF (Millipore). 2i medium consisted of DMEM/F12, 1x N2 supplement, 1x B27 supplement without vitamin A, 1x P/S, 1x glutamax, 1x NEAA, 0.11 mM beta-mercaptoethanol, 2000 IU/ml LIF, 1 uM MEK inhibitor PD0325901 (Stemgent) and 3 uM GSK3 inhibitor CHIR99021 (Stemgent). All PSCs were maintained on gelatinized 35 mm dishes under feeder-free conditions. Cells were passaged every second day using accutase (STEMCELL Technologies) for ground-state PSCs, and 0.025% trypsin-EDTA for S/L-dependent PSCs. All reagents were purchased from GIBCO unless stated otherwise.
PSC differentiation
Naïve PSCs or S/L-dependent PSCs (subsequently referred to as S/L PSCs) were seeded on 0.1% (v/v) gelatinized 48-well plates at a density of 5.0 × 10³ cells per well and incubated overnight in LIF basic medium prior to differentiation. A combination of serum replacement medium (SRM) and N2 medium, supplemented with 200 nM LDN193189 (Tocris), was used for the early patterning stage under gradient conditions (day 0: 100% SRM; day 1: 75% SRM : 25% N2; day 2: 50% SRM : 50% N2; day 3: 25% SRM : 75% N2). SRM consisted of Knockout DMEM, 1x P/S, 1x glutamax, 1x NEAA, 0.11 mM beta-mercaptoethanol and 15% knockout serum. N2 medium components included DMEM/F12, 1x P/S, 1x glutamax, 1x NEAA, 0.11 mM beta-mercaptoethanol, 1x ITS-A supplement and 1x N2 supplement. This SRM/N2 gradient medium is subsequently referred to as 'patterning medium'. For medial ganglionic eminence differentiation, cells were additionally cultured in a 'ventralizing medium' from day 6–10, consisting of DMEM/F12, 1x P/S, 1x glutamax, 1x NEAA, 0.11 mM beta-mercaptoethanol, 1x ITS-A, 1x N2 supplement and 1x B27 supplement with vitamin A. Finally, N2B27 medium was utilized for the maturation stage. 'Maturation medium' consisted of a 1:1 mixture of DMEM/F12 and Neurobasal medium, 1x P/S, 1x glutamax, 1x NEAA, 0.11 mM beta-mercaptoethanol, 1x ITS-A, 1x N2 supplement and 1x B27 supplement with vitamin A.
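The day 0–3 SRM:N2 gradient stated above can be summarized as a small lookup table; the helper function, total volume and rounding below are illustrative only.

```python
# SRM:N2 fractions for the early patterning gradient (from the schedule above)
SRM_N2_GRADIENT = {0: (1.00, 0.00), 1: (0.75, 0.25), 2: (0.50, 0.50), 3: (0.25, 0.75)}

def patterning_mix(day: int, total_ml: float = 10.0) -> dict:
    """Return the SRM and N2 volumes needed for a given day of patterning."""
    srm_frac, n2_frac = SRM_N2_GRADIENT[day]
    return {"SRM_ml": round(srm_frac * total_ml, 2), "N2_ml": round(n2_frac * total_ml, 2)}

print(patterning_mix(2))  # {'SRM_ml': 5.0, 'N2_ml': 5.0}
```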
Dorsal forebrain differentiation protocol
For the duration of early patterning (day 0–7), culture media was supplemented with LDN193189 (200 nM, Tocris), with FGF2 (20 ng/ml, Peprotech) added to the media from day 2. Note media was changed daily from day 0–7. Cells were switched to ‘maturation medium’ consisting of: N2B27 supplemented with glial cell-derived neurotrophic factor (GDNF, 30 ng/ml, R&D), brain-derived neurotrophic factor (BDNF, 30 ng/ml, R&D), ascorbic acid (AA, 0.2 mM, Sigma-Aldrich) and the γ-secretase/Notch inhibitor DAPT (10 uM, Tocris) from day 7–14, with media replaced every 2 days, Fig. 3A.
Ventral mesodiencephalic differentiation protocol
Ventralization from the default dorsal forebrain phenotype was achieved by modulation of the Shh signaling pathway using Shh recombinant protein (C24II, 200 ng/ml) from day 1–7 of PSC patterning20. This supplementation was employed firstly to demonstrate broad neural ventralization of the PSCs (resulting in a 'ventral mesodiencephalic' population), which was compared to the dorsal forebrain and hindbrain regions. Refinement of this 'ventral mesodiencephalic' protocol, through the timely delivery of factors to modulate ventralization (Shh and Wnt) and caudalization (Wnt and FGF8), resulted in more restricted VF fates, including rostral VF populations reflective of the ganglionic eminences, a more caudal VF population inclusive of 'hypothalamic-like' neurons, and finally a ventral midbrain population. These refined VF and VM protocols are detailed below.
Caudal ventral forebrain differentiation protocol
For caudal ventral forebrain (cVF) specification, inclusive of hypothalamic-like neural progenitors/neurons, the patterning medium was supplemented with LDN193189 (200 nM, Tocris), SHH (C24II) (200 ng/ml, R&D) and the smoothened receptor agonist purmorphamine (PM, 2 uM) (Stemgent) from day 1–7. From day 3–7, FGF2 (20 ng/ml) was added to the media. On day 7–14, cells were transitioned to maturation medium, Fig. 4A.
Rostral Ventral forebrain differentiation protocol
For rostral ventral forebrain (rVF) specification, including ganglionic eminence-like progenitors/neurons, patterning medium was supplemented with LDN193189 (200 nM), and early ventralisation controlled using the Wnt signaling antagonist XAV939 (2 uM, Tocris) from day 0–5. At day 6, medium was supplemented with Shh (200 ng/ml), and PM (2 uM), and changed at day 7 and 9. The medium was switched to maturation medium from day 10–14, with media changes every 2 days, Fig. 5A.
Ventral midbrain differentiation protocol
For ventral midbrain specification, SHH (200 ng/ml), PM (2 uM) and FGF8 (25 ng/ml) were added to the patterning media from day 1–7, further supplemented with CHIR99021 (0.3 uM) from day 2–7, as well as FGF2 (20 ng/ml) on day 3–7. From day 7–14, cells were cultured in maturation medium, Fig. 6A.
Hindbrain differentiation protocol
For hindbrain specification, S/L-dependent mouse PSC were cultured in patterning media supplemented with SHH (200 ng/ml) from day 0, with PM (2 uM) and FGF8 (25 ng/ml) added from day 1. On day 2, CHIR99021 (1.5 uM) was added to the media as well as FGF2 (20 ng/ml) at day 3. Media was changed daily from day 0–5. On day 7, differentiated cells were changed into maturation medium, with medium changes every second day until day 14, Fig. 7A.
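For convenience, the supplement windows of the five regional protocols above can be collected into a single schedule. Concentrations and day ranges are taken from the protocol descriptions (end days are approximate where the text only states the switch to maturation medium), the LDN193189-containing patterning medium described in the general differentiation section is the assumed baseline, and the data structure and helper function are illustrative only; they do not replace the full media compositions.

```python
# (factor, concentration, first day added, last day of patterning) per protocol
SCHEDULES = {
    "dorsal_forebrain": [("LDN193189", "200 nM", 0, 7), ("FGF2", "20 ng/ml", 2, 7)],
    "caudal_ventral_forebrain": [("LDN193189", "200 nM", 1, 7), ("SHH C24II", "200 ng/ml", 1, 7),
                                 ("purmorphamine", "2 uM", 1, 7), ("FGF2", "20 ng/ml", 3, 7)],
    "rostral_ventral_forebrain": [("LDN193189", "200 nM", 0, 5), ("XAV939", "2 uM", 0, 5),
                                  ("SHH C24II", "200 ng/ml", 6, 9), ("purmorphamine", "2 uM", 6, 9)],
    "ventral_midbrain": [("SHH C24II", "200 ng/ml", 1, 7), ("purmorphamine", "2 uM", 1, 7),
                         ("FGF8", "25 ng/ml", 1, 7), ("CHIR99021", "0.3 uM", 2, 7),
                         ("FGF2", "20 ng/ml", 3, 7)],
    "hindbrain": [("SHH C24II", "200 ng/ml", 0, 7), ("purmorphamine", "2 uM", 1, 7),
                  ("FGF8", "25 ng/ml", 1, 7), ("CHIR99021", "1.5 uM", 2, 7),
                  ("FGF2", "20 ng/ml", 3, 7)],
}

def supplements_on(protocol: str, day: int) -> list:
    """List (factor, concentration) pairs present in the medium on a given day."""
    return [(name, conc) for name, conc, start, end in SCHEDULES[protocol] if start <= day <= end]

print(supplements_on("ventral_midbrain", 4))
```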
Immunocytochemistry and Quantification
Day 0 (undifferentiated PSCs) and day 7, 11 or 14 differentiated cells were fixed and immunostained using previously described methods21. See Supplementary Table 1 for primary antibodies. Secondary antibodies, generated in donkey and conjugated to Alexa Fluor 488, 555 and 649, were used at 1:200 (Jackson Immunoresearch). 4′,6-diamidino-2-phenylindole (DAPI, Sigma-Aldrich) nuclear counterstain (1:2000) was used to visualize cells in culture. All images were captured using a fluorescence microscope (Zeiss Axio Observer Z1).
Quantification of TH+/FoxA2+ and TH+/Pitx3+ (GFP) immunoreactive cells within caudal ventral forebrain and ventral midbrain cultures at day 14, established from naïve mESCs and iPSCs, was performed to distinguish between the dopaminergic neurons generated under these two culture conditions. Using a 20x objective, ten fields of view from three technical replicates, performed on at least three biological replicates, were counted. Quantification was performed using Zen Blue software (Zeiss).
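A minimal sketch of how the per-field counts described above could be aggregated into a labelling percentage (mean ± SEM across biological replicates). Counting itself was performed in Zen Blue; the CSV layout and column names below are assumptions.

```python
import pandas as pd

# Hypothetical export: one row per field of view
# columns: replicate, field, double_pos (e.g., TH+/FoxA2+ cells), dapi (total nuclei)
counts = pd.read_csv("th_foxa2_counts.csv")

counts["pct"] = 100 * counts["double_pos"] / counts["dapi"]
per_rep = counts.groupby("replicate")["pct"].mean()   # average the ten fields per replicate

mean = per_rep.mean()
sem = per_rep.std(ddof=1) / (len(per_rep) ** 0.5)
print(f"TH+/FoxA2+ cells: {mean:.1f} +/- {sem:.1f}% (n = {len(per_rep)} biological replicates)")
```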
Flow cytometry
Intracellular staining was performed using the BD transcription factor staining kit (BD) according to the manufacturer's instructions, with modifications. In brief, cells (1 × 10⁶/tube) were incubated with BD Horizon fixable viability dye 450 (1:1000) diluted in PBS (4 °C, 25 min). Cells were rinsed in PBS, resuspended in fixation buffer (4 °C, 45 min), and washed with ice-cold permeabilization/washing buffer. Fixed cells were incubated with primary antibodies (see Supplementary Table 1), diluted in permeabilization/washing buffer, overnight at 4 °C. The following day, cells were washed in permeabilization/washing buffer and incubated in secondary antibody (anti-mouse APC, 1:500, Santa Cruz) for 45 minutes. After 3x washes in permeabilization buffer, cells were resuspended in 500 ul ice-cold flow running buffer consisting of 1% bovine serum albumin (BSA) (Sigma-Aldrich) and 0.5 mM EDTA (Sigma-Aldrich) diluted in PBS. Flow cytometry analysis was performed using a Beckman-Coulter CyAn analyzer (Beckman-Coulter), Summit 4.3 software (Beckman-Coulter) and Flowlogic software (Inivai Technologies). Single cells were gated based on forward-side scatter profiles and dead cells excluded using violet Horizon viability dye (data not shown). Undifferentiated ESCs were used as a negative control to set gates.
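Acquisition and gating were performed in Summit and Flowlogic; as a simplified stand-in, the sketch below assumes live single-cell events have been exported to CSV and places the positivity gate at the 99th percentile of the undifferentiated-ESC negative control. File names, column names and the percentile cut-off are assumptions.

```python
import pandas as pd

control = pd.read_csv("undiff_esc_oct4_apc.csv")  # negative control: undifferentiated ESCs
sample = pd.read_csv("day7_oct4_apc.csv")         # differentiated day 7 culture

# Gate set so that ~1% of the negative control falls above it
gate = control["APC"].quantile(0.99)
pct_positive = 100 * (sample["APC"] > gate).mean()
print(f"Oct4-APC positive: {pct_positive:.1f}% (gate = {gate:.0f})")
```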
Quantitative real-time PCR
Total RNA was extracted at day 0, 7 and 14 using Trizol (Ambion). RNA was converted to cDNA and subsequently analyzed by quantitative real-time PCR (qPCR), using previously described methods22, to assess the expression of seven pluripotency-related genes (Nanog, Sox2, Oct4, Rex1, Bmp7, Wnt7a and Id1) as well as the temporal expression of numerous regionally specified neural genes (Pax6, Emx2, Nkx2.1, Gsx2, Otp, Nhlh2, TH, En1, Nurr1, Lmx1a, Zic1 and Hoxa1). More detailed analysis, to further confirm GE differentiation, was performed at days 0, 10 and 14 and included the following genes: Nkx2.1, Gsx2, Lhx2, Olig2, Dlx2 and Gad67. See Supplementary Table 2 for primer sequences.
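The referenced qPCR analysis method is not reproduced here; assuming a standard 2^-ΔΔCt quantification against a housekeeping gene (Gapdh is used below purely as a placeholder), relative expression could be computed as follows.

```python
import pandas as pd

ct = pd.read_csv("qpcr_ct_values.csv")  # hypothetical layout: columns sample, gene, ct

def fold_change(df, gene, target_sample, reference_sample, housekeeping="Gapdh"):
    """Relative expression of `gene` in target vs reference sample by 2^-ddCt."""
    def dct(sample):
        sub = df[df["sample"] == sample].set_index("gene")["ct"]
        return sub[gene] - sub[housekeeping]
    ddct = dct(target_sample) - dct(reference_sample)
    return 2 ** (-ddct)

print(fold_change(ct, "Pax6", target_sample="day7", reference_sample="day0"))
```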
Student t-tests and One-way ANOVAs were performed where appropriate to identify differences within the data, with significance set at p < 0.05.
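The comparisons named above (Student's t-test, one-way ANOVA, significance at p < 0.05) map directly onto SciPy calls; the replicate values below are hypothetical.

```python
from scipy import stats

naive = [77.2, 81.5, 74.9]      # e.g., % Nestin+ cells, naive-LIF differentiations (hypothetical)
serum_lif = [45.3, 62.1, 28.7]  # e.g., % Nestin+ cells, S/L-dependent differentiations (hypothetical)

t, p = stats.ttest_ind(naive, serum_lif)
print(f"t-test: t = {t:.2f}, p = {p:.3f}, significant = {p < 0.05}")

# One-way ANOVA across three (hypothetical) regional conditions
f, p = stats.f_oneway([55, 58, 52], [11, 9, 14], [3, 5, 2])
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
```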
Results
Improved neural differentiation from naïve mESCs
S/L-dependent mESCs (cultured in serum and LIF medium1) and naïve mESCs (cultured in 2i medium13) were first examined by immunocytochemistry to confirm robust expression of the cardinal pluripotency transcription factors Oct4 and Sox2 (Fig. 1A,B). Consistent with this observation, quantitative PCR revealed similarly high levels of expression of the pluripotency genes Oct4, Sox2, Nanog and Rex1 in both S/L-dependent and naïve mESC cultures (Fig. 1C). Further, flow cytometry quantification could not statistically distinguish the two mESC populations, with ≥98% of cells immunoreactive for Oct4 (Fig. 1G,G'). While naïve ground-state and S/L-dependent PSCs displayed similarities in their expression of pluripotency genes, recent studies have recognized transcriptional differences between these two pluripotent cell populations15. In agreement, we show significantly elevated expression of Wnt7a and Id1, as well as downregulation of Bmp7, in our S/L-dependent mESCs compared to naïve mESCs, Fig. 1D.
S/L-dependent and naïve mESCs were directed towards a neural fate by antagonism of Smad signaling, similar to that widely employed for monolayer-based differentiation cultures of human PSC, Figure 1E,E’23. This minimalist default differentiation system produced a heterogeneous population of cell types from S/L-dependent mESCs following 7 days of differentiation, including Nestin immunoreactive cells indicative of NPCs (45 ± 19.3%, Fig. 1I”,J,P), some of which possessed a dorsal forebrain identity as shown by expression of the dorsal marker Pax6 (11.7 ± 6.5%, Fig. 1K”,L,Q), or a ventral identity determined by expression of the floor-plate marker FoxA2 (11 ± 3.6%, Fig. 1M”,N,R).
Differentiation of naïve mESCs under the same conditions was significantly compromised by excessive cell death and limited NPC fate acquisition (Fig. 1E’,F’). We speculated these complications arose from directing naïve pluripotent cells into a neural fate immediately. To circumvent this, 2i cultured naïve mESCs were subjected to a 24-hour incubation in LIF-primed media prior to Smad inhibitor exposure (Fig. 1E”) to mimic the graduated transition that occurs during embryonic development from the pre-implantation (naïve pluripotency) to post-implantation stage, and then to germ layer restriction.
Indeed, differentiated naïve mESCs that incorporated a LIF-primed transition showed high viability (>86%, Fig. 1F"), efficiently neuralized (77 ± 4.6% Nestin expression, Fig. 1I"',J) and generated a homogeneous population of NPCs (Fig. 1T). Although the increase in Nestin expression in naïve-LIF-differentiated cultures was not significant compared to differentiation of S/L-dependent cultures (Fig. 1J), cultures patterned from a naïve state were drastically less variable (Fig. 1P,T) and contained extremely few OCT4-positive cells (0.3 ± 0.3% vs 13 ± 4.7% Oct4+, Fig. 1G",G"',H,O,S). Furthermore, naïve-LIF-differentiated cultures acquired a more reproducible and restricted phenotype, with Pax6 expressed in 55 ± 9.8% of day 7 cultures (Fig. 1K"',L,U), and an absence of off-target floor-plate progenitors marked by FoxA2 (Fig. 1M'",N,V). These results demonstrate for the first time a feeder-free, monolayer, directed differentiation platform for the robust production of NPCs from naïve mESCs.
It is important to note that neuralization via Smad inhibition was achieved by blocking only one of the two major Smad families. Specifically, the BMP inhibitor LDN193189 was used to block Smads 1/5/8, while the TGF-β pathway, which regulates Smad 2/3 signaling and is typically co-inhibited in human neural differentiations, was not modulated. This was due to our observation that Smad 2/3 inhibition (using SB431542) prevented naïve mESCs from exiting their pluripotent state, with cells showing maintained Oct4 expression (data not shown) – a finding that supports previous studies describing the necessity of Smad 2/3 suppression for the maintenance of ground-state identity24.
Bi-directional regional neural specification
In light of the improved reproducibility and homogeneity of neural induction from naïve, compared to S/L-dependent, mESC cultures, we next assessed the capacity of ground-state cells to be redirected along the developmental dorso-ventral and rostro-caudal axes using a range of small molecules and morphogens to mimic in vivo embryonic signals and obtain dorsal forebrain, ventral mesodiencephalic and hindbrain cultures (Fig. 2). Naïve mESCs differentiated under basal Smad inhibition conditions (LDN193189 from day 0–7) resulted in the 'default' acquisition of a dorsal forebrain identity, with progenitors expressing the forebrain-midbrain marker Otx2 and the dorsal transcription factor Pax6, as well as absence of the ventral marker FoxA2 and the hindbrain marker Zic1 (Fig. 2D–K). Ventralization from this dorsal phenotype was achieved with the addition of Shh (day 1–7), causing a predicted loss of Pax6 expression, upregulation of FoxA2, and maintenance of Otx2 identity (Fig. 2L–O). In the absence of caudalising cues, the resultant population was a heterogeneous pool of both ventral diencephalic and ventral mesencephalic progenitors, here termed 'ventral mesodiencephalic', Fig. 2L–O.
The addition of a high concentration of the GSK3ß inhibitor and potent caudalizing small molecule CHIR99021 (day 2–7), as well as FGF8, was sufficient to push culture identity beyond the isthmic organizer, resulting in the generation of hindbrain NPCs, as determined by upregulation of Zic1 and suppression of Otx2 expression (Fig. 2P–S). Each of these regionally patterned populations (dorsal, ventral and caudalized) was further modulated, matured and characterized in subsequent figures (Figs 3–8).
Naïve PSC differentiated to dorsal forebrain neurons
Confirming the dorsal forebrain progenitor identity seen by immunocytochemistry (Otx2+, Pax6+, Nestin+, Figs 2H–I and 3C,D), we observed elevated transcript levels of Pax6 and the cortical lineage marker Emx225 by day 7 (Fig. 3B). These progenitors were driven to a post-mitotic fate by the removal of NPC morphogens and growth factors (LDN193189 and FGF2), in conjunction with the addition of the Notch pathway small-molecule inhibitor DAPT26 and pro-neuronal survival trophins including BDNF, GDNF and ascorbic acid (Fig. 3A). Following 4 days of maturation (day 11), the emergence of the neuronal cytoskeletal protein TUJ1, together with the intermediate cortical progenitor marker Tbr2, was seen (Fig. 3E). These results mirror cortical development, where transient Tbr2 expression follows Pax6 expression before radial glial cells commit to specific laminar layers27. With extended culturing in maturation conditions, increased numbers of Tuj1+ post-mitotic neurons were observed, co-expressing the mature cortical laminar layer markers Tbr1 (Fig. 3F) and Ctip2 (Fig. 3G). Importantly, we demonstrated the ability to robustly reproduce these findings, also differentiating naïve ground-state iPSCs to a dorsal forebrain fate (Supplementary Figure 1).
Naïve PSC differentiated to ventral forebrain neurons
While modulation of Shh signaling was shown to alter the default dorsal forebrain identity of PSC, resulting in acquisition of a ventral neural tube fate (Fig. 2), only minimal assessment of the directed differentiation and the resultant progenitors/neurons had been performed. Here we demonstrate that ventralized cultures maintained the robust downregulation of Oct4 and upregulation of the NPC identity marker Nestin (Fig. 4B,C,F). As anticipated, ventral signals induced the loss of dorsal (Pax6) identity (Fig. 4D,F,G) and a significant upregulation of the ventral neural tube marker FoxA2 (67.3 ± 2.2%, Fig. 4E,F). The ventral forebrain is the site of hypothalamus formation, and indeed the hypothalamic transcripts Nhlh2 and Otp were significantly upregulated under these conditions28,29, suggesting the adoption of a diencephalic fate (Fig. 4G). Upon maturation of ventral forebrain NPCs, TUJ1+ post-mitotic neurons arose throughout the cultures with significant GABA expression (Fig. 4I), further supporting a hypothalamic-like identity. While numerous FoxA2+ and TH+ cells were observed by day 14, these populations showed minimal overlap, reflective of VF hypothalamic dopaminergic neurons (as opposed to the FoxA2+/TH+ dopamine neurons that reside in the adjacent VM) (Fig. 4J and Supplementary Figure 2)30.
Interestingly, these ventral forebrain cultures did not express the transcription factors Nkx2.1 or Olig2 (data not shown), in vivo markers of the adjacent developing ganglionic eminence (GE). Early developmental studies have demonstrated that inhibition of Wnt/beta-catenin signaling is important for the upregulation of key GE transcription factors such as Dlx2, Mash1 and Gsx2, and for the suppression of pallial markers such as Emx2 31. We therefore theorized that the addition of a Wnt-inhibiting small molecule (XAV939) might induce the generation of a GE identity. Indeed, in response to XAV, efficient generation of Nestin+ and Otx2+ forebrain NPCs was observed (Supplementary Figure 3) in conjunction with broad expression of Nkx2.1 and Olig2 (Fig. 5B, Supplementary Figure 4), suggestive of a GE phenotype.
Continued maturation promoted widespread co-expression of TUJ1 and GABA (Fig. 5C,D and Supplementary Figure 4), which may represent lateral ganglionic eminence (LGE)-derived striatal neurons or MGE-derived interneurons. The former population can be identified by co-expression of DARPP32+/CTIP2+ (which was not present within these cultures, Fig. 5L), and the latter, MGE neurons, was confirmed by co-expression of GABA+/NKX2.1+ and NKX2.1+/OLIG2+ (Fig. 5D–E, Supplementary Figure 4). Validating these findings, transcript assessment showed significant upregulation of a raft of MGE-related NPC genes including Nkx2.1, Gsx2, Lhx6, Olig2 and Dlx2 (Fig. 5F–J) (Corbin et al., 2000), and upregulation with maturation of Gad67, an enzyme important in GABA synthesis for GE neurons (Fig. 5K). Of interest, in the development of this rVF protocol we observed that earlier ventralisation of the cultures (through the administration of Shh + PM from day 3–7) resulted in cells co-expressing DARPP32+, CTIP2+ and GAD67+ (Fig. 5M) and comparatively low Nkx2.1 transcript levels relative to the "GE/MGE-like" populations (Fig. 5F, grey line), suggestive of a possible "LGE-like" fate (Fig. 5M). Taken together, these data indicate the generation of GE-like NPCs and neurons from ground state mPSC.
Naïve PSC differentiated to ventral midbrain neurons
During embryonic development, the isthmic organizer at the midbrain-hindbrain boundary secretes FGF8 and Wnts to instruct the formation of the mesencephalon32 (Fig. 2B,C), including important populations of dopaminergic neurons. To generate ventral midbrain (VM) NPCs, our caudal ventral forebrain protocol was supplemented with FGF8 (25 ng/ml) and the Wnt agonist CHIR9902 (0.3 uM). This maintained cultures rich in NPCs (81 ± 4.3% Nestin+, Fig. 6C) that expressed appropriately high levels of the regional identity proteins FoxA2 (70 ± 3.5%, Fig. 6E,F,H) and the forebrain-midbrain marker Otx2 (Fig. 6H,I). Further, these cultures upregulated the essential VM dopaminergic transcripts Lmx1a, Nurr1 and En1 (Fig. 6G)32. To further confirm a VM identity of these NPCs we utilized a mESC reporter line for Lmx1a (Lmx1a-eGFP), a transcription factor expressed in the roof plate and throughout the developing floor plate. In combination with Otx2 and FoxA2, the mesodiencephalic floor plate can be distinguished from roof plate NPCs and ventral hindbrain, and was seen in differentiating cultures derived from naïve mESCs (Fig. 6I).
Ongoing maturation of the cultures to day 14 resulted in the presence of many neurons expressing tyrosine hydroxylase (TH, the rate-limiting enzyme in dopamine synthesis) (Fig. 6J,K). To differentiate between dopaminergic neurons of the caudal ventral forebrain and the ventral midbrain, we quantified TH+/FoxA2+ neurons (a population restricted to VM dopaminergic neurons). Under VM differentiation conditions, the majority (>75%) of TH+ neurons co-expressed FoxA2 (Fig. 6J,L and Supplementary Figure 5). VM dopamine neurons were characterized further using both the Lmx1a-eGFP and Pitx3-eGFP knock-in reporter lines that mark bona fide VM DA neurons33. In terminally differentiated cultures, neurons co-expressing TH, Lmx1a and FoxA2 were observed (Fig. 6J), with the majority of TH+ neurons (69 ± 10.4%) co-expressing Pitx3-GFP (Fig. 6K,M). By comparison, ventral forebrain differentiation of the Pitx3-eGFP mESC reporter line yielded significantly fewer TH+/FoxA2+ (28 ± 4%, Fig. 6L) and Pitx3-GFP+/TH+ co-expressing neurons (22 ± 6%, Fig. 6N), both likely off-target populations within the VF differentiations; overall, these results highlight the capacity to direct naïve mPSC into ventral diencephalic or mesencephalic dopaminergic populations.
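The VM-versus-VF distinction above rests on straightforward co-expression arithmetic, i.e. the fraction of TH+ neurons that also score positive for FoxA2 or Pitx3-GFP. The snippet below is not the authors' image-analysis pipeline; it is a minimal, hedged illustration, using hypothetical counts, of how such a percentage is derived and interpreted.

```python
# Minimal illustration of co-expression scoring; the counts are hypothetical
# example numbers, not data from the study.

def coexpression_percentage(double_positive, reference_positive):
    """Percentage of reference-marker-positive cells that also express a second marker."""
    if reference_positive == 0:
        return 0.0
    return 100.0 * double_positive / reference_positive

th_positive = 200      # hypothetical TH+ neurons counted in a field
th_and_foxa2 = 155     # hypothetical TH+ neurons that are also FoxA2+

print(f"TH+/FoxA2+ = {coexpression_percentage(th_and_foxa2, th_positive):.1f}% of TH+ neurons")
# Values above ~75% are consistent with the VM condition reported in the text,
# whereas the VF condition gave markedly lower co-expression (~28%).
```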
Naïve PSC differentiated to ventral hindbrain neurons
Upon successful generation of VM progenitors and neurons, we finally questioned whether stronger activation of the canonical Wnt pathway would be sufficient to drive NPCs beyond the isthmus and induce a more caudal, hindbrain phenotype. Indeed, a 5-fold higher CHIR9902 concentration (1.5 uM), in conjunction with Shh signaling, resulted in high yields of neural progenitors (79.3 ± 6.8% Nestin+) that lacked rostral (Otx2, Fig. 7Iiii) and dorsal (Pax6, Fig. 7D,F) identity markers and showed appropriate ventralization (54.7 ± 11.5% FoxA2+, Fig. 7E,F,I) at day 7. More specifically, day 7 cultures showed upregulated transcript levels of two key hindbrain genes: Hoxa1, crucial for the early patterning of the rhombencephalon34, and Zic1, a key regulator of progenitor proliferation during early cerebellar development35 (Fig. 7G,H). Following maturation, immunocytochemical analysis at day 14 revealed the presence of numerous 5-HT+ serotonergic neurons, a population enriched throughout the hindbrain36 (Fig. 7J, Supplementary Figure 6), as well as hindbrain Islet1+ motor neurons (Fig. 7K, Supplementary Figure 6)37. Taken together, these observations strongly indicate ventral hindbrain NPC generation from naïve PSC under these ventro-caudalizing conditions.
In support of the capacity for these five protocols to generate neural progenitors and neurons with relatively restricted regional identity, we compared transcript levels for genes known to be expressed within these areas during embryonic development, and confirmed their absence (or appropriately low levels) in the other fate-specified populations. Given the 'default' generation of dorsal forebrain progenitors/neurons upon Smad-induced neuralisation of PSC, gene expression within ventral and caudalised populations was compared to this DF population. As such, we demonstrated that the dorsal forebrain gene Pax6 was significantly downregulated in all four other neural populations (Fig. 8A). Ganglionic eminence genes, Nkx2.1 and Gsx2, were strongly expressed only in rostral ventral forebrain cultures (Fig. 8B,C). Within the caudal ventral forebrain, inclusive of hypothalamic progenitors, Nhlh2 was notably increased, and significantly elevated compared to the more caudal VM and HB populations (Fig. 8D). Nhlh2 was, however, also upregulated within the adjacent rostral VF (GE-like) cells, expression that may be explained by its known presence within subependymal neural progenitors. Reflective of in vivo neural development and the localization of dopaminergic subpopulations, Lmx1a and TH gene expression were elevated in caudal ventral forebrain (cVF) and VM progenitors and lowly expressed in the more rostral (DF, rVF) and caudal (hindbrain) cultures (Fig. 8E,F). Finally, the rhombencephalic genes Hoxa1 and Zic1 were upregulated only in HB cultures (Fig. 8G,H).
Significance Statement
Mouse pluripotent stem cells (PSC) provide valuable tools in research, as their differentiation is rapid and cost effective, and an extensive repertoire of transgenic lines contributes to our understanding of biology. However, protocols for the directed differentiation of these cells have remained inferior to their human counterparts. Here we demonstrate enhanced efficiency of neural differentiation by commencing from a naïve pluripotent ground state, compared to conventionally employed serum-dependent PSCs. Subsequently we establish novel protocols, involving early morphogen patterning of monolayer, feeder-free naïve PSC, resulting in the efficient generation of region-specific (dorsal forebrain, ventral forebrain, ventral midbrain and hindbrain) progenitors and neurons. These resultant populations will be valuable in understanding neural development, as well as disorders affecting specific neuronal populations.
Discussion
Previous studies have described the generation of region-specific neural subtypes from Serum/LIF-dependent mESCs2,3,5,38. While these protocols have provided means to study neural development and transplantation therapies, several limitations are associated with them, namely variability between differentiations and heterogeneity of the derived cultures. Underlying these variables is the reliance on stromal cell co-culture, which introduces a variable milieu of patterning signals, the use of spontaneous embryoid bodies for uncontrolled default differentiation, and/or failures to replicate developmental sequences appropriately.
By comparison, neural differentiation strategies for human PSC have advanced significantly, with numerous studies generating largely homogeneous cultures of specific regional neural subtypes under rapid patterning methodologies8,9,10,11. While these human protocols are invaluable to the field, rodent counterparts nevertheless remain important biological tools that, in comparison, possess malleable genomes and undergo rapid embryonic development. These traits facilitate cost- and time-effective investigations into mammalian development as well as intra-species cell transplantation studies.
We report here the development of a feeder-free directed neural differentiation system for mESCs and iPSCs along dorso-ventral and rostro-caudal axes that aligns mPSC differentiation closely with human PSC counterparts. By mimicking in vitro the rapid temporal signaling gradients of rodent development an array of neural lineages from the dorsal forebrain, ventral forebrain, ventral midbrain and hindbrain were formed within 14 days, Fig. 8K.
Furthermore, we describe for the first time the use of naïve mPSC as a starting material for neural differentiation, resulting in differentiated cultures of improved homogeneity and the elimination of contaminating pluripotent cells (Fig. 1). Interestingly, the employment of naïve mPSC required precise recapitulation of developmental sequences, involving the transition of naïve cells (reflective of pre-implantation) temporarily to a more primed (post-implantation) state, by exposure to LIF, prior to the initiation of differentiation. The use of naïve mPSC likely underlies the robust fate restriction reported here, as recent studies evaluating naïve and serum-dependent mESC have demonstrated that the former are intrinsically homogeneous at transcriptomic and epigenomic levels, while the latter express a range of lineage-specific genes likely to interfere with differentiation14,17.
Additional to the use of naïve stem cells, early patterning of mPSC reflective of embryonic development was imperative for restricted neural fate specification. Smad inhibition at the onset of differentiation forced neural specification towards the default dorsal forebrain lineage, reflected by the generation of Pax6+, Tbr2+, Emx2+ NPCs, and mirroring human PSC protocols. Following initial patterning and a week of maturation in neurotrophic factors and Notch inhibitors, to induce a transition to post-mitotic neurons, dorsal forebrain progenitors differentiated into mature neurons that expressed the anticipated cortical layer markers, such as Tbr1 and Ctip2. Previous mPSC protocols have reported the use of small molecule antagonists of the Sonic Hedgehog pathway to promote cortical specification from serum/LIF-dependent mESCs5,39. Our study found Shh inhibition was not required, as evidenced by limited expression of the ventral floor-plate marker FoxA2 within the cultures and enrichment of dorsal cortical markers. Initiating differentiation from naïve mESC, as opposed to S/L-dependent mESCs, may underlie the improved patterning outcomes, despite a reduction in administered extrinsic signals.
Conversely, Shh agonists (purmorphamine and recombinant Shh) were required from day 1 of differentiation to ventralize NPCs, resulting in the acquisition of a hypothalamic identity, as seen by expression of Nhlh2 and Otp 40. Of note, ventralized NPCs did not express ganglionic eminence markers such as Nkx2.1 and Olig2, suggesting that the Shh and purmorphamine concentrations used maximally activated the Shh pathway to produce the most ventral structures of the developing rostral neural tube. Following maturation, ventral forebrain NPCs developed into GABAergic neurons, a population present in the adult hypothalamus, in addition to FoxA2−/TH+ dopaminergic neurons40,41. Interestingly, by adding early Wnt inhibition, to reduce the extent of floorplate ventralization of the progenitors, we were able to direct cells to adopt a GE identity. These GE-like progenitors were positive for both Nkx2.1 and Olig2 and continued to develop into GABAergic neurons, a neuronal population that arises in the GE during ventral telencephalic development.
Differentiation to the ventral midbrain was also achieved from naïve mPSCs with the addition of caudalization cues supplied by FGF8 and the GSK3β inhibitor CHIR9902. This sequence of immediate neuralization, ventralization and rostrocaudal positioning closely mirrors hPSC protocols for VM differentiation8,10,21. Post-mitotic neurons were obtained rapidly, within 14 days of differentiation, and confirmed to be bona fide ventral midbrain dopamine neurons based on Lmx1a and Pitx3 expression (using our knock-in reporter lines) as well as FoxA2, in addition to gene transcripts for En1, Nurr1 and Lmx1a.
Finally, ventral hindbrain specification was attained by increased caudalisation of NPCs, using amplified canonical Wnt activation. Resultant NPCs showed evidence of hindbrain identity based upon the presence of Zic1 and lack of Otx2 immunoreactivity in conjunction with motor (Islet1) and serotonergic (5-HT) neuronal markers36,37.
Demonstrative of the efficiency of these new protocols was the relative lack of off-target cells within each population. Genes known from in vivo neural development to be expressed within select progenitor/neuronal populations were largely restricted within our cultures, such that the dorsal identity gene Pax6 was only expressed in dorsal forebrain cultures, while the rhombencephalic genes, Hoxa1 and Zic1, were only expressed in hindbrain cultures. While the combination of restricted transcriptional expression and protein expression (and often protein co-localisation) enabled confirmation of the presence of restricted neuronal populations in some instances, we recognize that these differentiation protocols still contain variability, and variability that is higher than in comparable human PSC protocols. At one extreme, >75% of naïve mESC differentiated along a VM lineage adopted a VM DA fate (quantitatively confirmed by TH+/Pitx3+ colocalisation), yet VF cultures were decidedly more heterogeneous (including GABA+, TH+/FoxA2− and TH+/FoxA2+ neurons), proving challenging for definitive confirmation of neuronal identity relative to adjacent populations. We speculate that this is, in part, a likely consequence of the rapid differentiation of mouse PSCs compared to human counterparts, which allows less time to orchestrate regional patterning and terminal signals. As such, more extensive transcriptional, histochemical, biochemical and functional/electrophysiological profiling studies will be required in order to confirm the true efficiency of each of these described protocols.
In summary, using naïve mPSC, we describe the robust regional specification of PSC through the early manipulation of rostro-caudal and dorso-ventral morphogen gradients. These protocols, and the resultant neuronal populations, may provide important tools for understanding mammalian neural development, aid in our understanding of brain-related disorders and assist in the development of new targeted therapies, including cell replacement therapy.
Ethics declarations
Competing Interests
The authors declare that they have no competing interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Evans, M. J. & Kaufman, M. H. Establishment in culture of pluripotential cells from mouse embryos. Nature 292, 154–156 (1981).
2. Kawasaki, H. et al. Induction of midbrain dopaminergic neurons from ES cells by stromal cell-derived inducing activity. Neuron 28, 31–40 (2000).
3. Barberi, T. et al. Neural subtype specification of fertilization and nuclear transfer embryonic stem cells and application in parkinsonian mice. Nature biotechnology 21, 1200–1207 (2003).
4. Ying, Q. L. & Smith, A. G. Defined conditions for neural commitment and differentiation. Methods Enzymol 365, 327–341 (2003).
5. Gaspard, N. et al. Generation of cortical neurons from mouse embryonic stem cells. Nat Protoc 4, 1454–1463, https://doi.org/10.1038/nprot.2009.157 (2009).
6. McCreedy, D. A. et al. A new method for generating high purity motoneurons from mouse embryonic stem cells. Biotechnol Bioeng 111, 2041–2055, https://doi.org/10.1002/bit.25260 (2014).
7. Gaspard, N. & Vanderhaeghen, P. Mechanisms of neural specification from embryonic stem cells. Curr Opin Neurobiol 20, 37–43, https://doi.org/10.1016/j.conb.2009.12.001 (2010).
8. Kriks, S. et al. Dopamine neurons derived from human ES cells efficiently engraft in animal models of Parkinson’s disease. Nature 480, 547–551, https://doi.org/10.1038/nature10648 (2011).
9. Shi, Y., Kirwan, P. & Livesey, F. J. Directed differentiation of human pluripotent stem cells to cerebral cortex neurons and neural networks. Nat Protoc 7, 1836–1846, https://doi.org/10.1038/nprot.2012.116 (2012).
10. Kirkeby, A. et al. Generation of regionally specified neural progenitors and functional neurons from human embryonic stem cells under defined conditions. Cell reports 1, 703–714, https://doi.org/10.1016/j.celrep.2012.04.009 (2012).
11. Qu, Q. et al. High-efficiency motor neuron differentiation from human pluripotent stem cells and the function of Islet-1. Nat Commun 5, 3449, https://doi.org/10.1038/ncomms4449 (2014).
12. Weinberger, L., Ayyash, M., Novershtern, N. & Hanna, J. H. Dynamic stem cell states: naive to primed pluripotency in rodents and humans. Nat Rev Mol Cell Biol 17, 155–169, https://doi.org/10.1038/nrm.2015.28 (2016).
13. Ying, Q. L. et al. The ground state of embryonic stem cell self-renewal. Nature 453, 519–523, https://doi.org/10.1038/nature06968 (2008).
14. Guo, G. et al. Serum-Based Culture Conditions Provoke Gene Expression Variability in Mouse Embryonic Stem Cells as Revealed by Single-Cell Analysis. Cell Rep 14, 956–965, https://doi.org/10.1016/j.celrep.2015.12.089 (2016).
15. Marks, H. et al. The transcriptional and epigenomic foundations of ground state pluripotency. Cell 149, 590–604, https://doi.org/10.1016/j.cell.2012.03.026 (2012).
16. Taleahmad, S. et al. Proteome Analysis of Ground State Pluripotency. Sci Rep 5, 17985, https://doi.org/10.1038/srep17985 (2015).
17. Kolodziejczyk, A. A. et al. Single Cell RNA-Sequencing of Pluripotent States Unlocks Modular Transcriptional Variation. Cell Stem Cell 17, 471–485, https://doi.org/10.1016/j.stem.2015.09.011 (2015).
18. Nefzger, C. M. et al. Lmx1a allows context-specific isolation of progenitors of GABAergic or dopaminergic neurons during neural differentiation of embryonic stem cells. Stem Cells 30, 1349–1361, https://doi.org/10.1002/stem.1105 (2012).
19. Watmuff, B., Pouton, C. W. & Haynes, J. M. In vitro maturation of dopaminergic neurons derived from mouse embryonic stem cells: implications for transplantation. PLoS One 7, e31999, https://doi.org/10.1371/journal.pone.0031999 (2012).
20. Ye, W., Shimamura, K., Rubenstein, J. L., Hynes, M. A. & Rosenthal, A. FGF and Shh signals control dopaminergic and serotonergic cell fate in the anterior neural plate. Cell 93, 755–766 (1998).
21. Niclis, J. C. et al. Efficiently Specified Ventral Midbrain Dopamine Neurons from Human Pluripotent Stem Cells Under Xeno-Free Conditions Restore Motor Deficits in Parkinsonian Rodents. Stem cells translational medicine 6, 937–948, https://doi.org/10.5966/sctm.2016-0073 (2017).
22. Blakely, B. D. et al. Wnt5a regulates midbrain dopaminergic axon growth and guidance. PLoS One 6, e18373 (2011).
23. Chambers, S. M. et al. Highly efficient neural conversion of human ES and iPS cells by dual inhibition of SMAD signaling. Nature biotechnology 27, 275–280, https://doi.org/10.1038/nbt.1529 (2009).
24. Hassani, S. N., Pakzad, M., Asgari, B., Taei, A. & Baharvand, H. Suppression of transforming growth factor beta signaling promotes ground state pluripotency from single blastomeres. Hum Reprod 29, 1739–1748, https://doi.org/10.1093/humrep/deu134 (2014).
25. Molyneaux, B. J., Arlotta, P., Menezes, J. R. & Macklis, J. D. Neuronal subtype specification in the cerebral cortex. Nat Rev Neurosci 8, 427–437, https://doi.org/10.1038/nrn2151 (2007).
26. Borghese, L. et al. Inhibition of notch signaling in human embryonic stem cell-derived neural stem cells delays G1/S phase transition and accelerates neuronal differentiation in vitro and in vivo. Stem Cells 28, 955–964, https://doi.org/10.1002/stem.408 (2010).
27. Englund, C. et al. Pax6, Tbr2, and Tbr1 are expressed sequentially by radial glia, intermediate progenitor cells, and postmitotic neurons in developing neocortex. J Neurosci 25, 247–251, https://doi.org/10.1523/JNEUROSCI.2899-04.2005 (2005).
28. Wang, W. & Lufkin, T. The murine Otp homeobox gene plays an essential role in the specification of neuronal cell lineages in the developing hypothalamus. Dev Biol 227, 432–449, https://doi.org/10.1006/dbio.2000.9902 (2000).
29. Vella, K. R., Burnside, A. S., Brennan, K. M. & Good, D. J. Expression of the hypothalamic transcription factor Nhlh2 is dependent on energy availability. J Neuroendocrinol 19, 499–510, https://doi.org/10.1111/j.1365-2826.2007.01556.x (2007).
30. Pristera, A. et al. Transcription factors FOXA1 and FOXA2 maintain dopaminergic neuronal properties and control feeding behavior in adult mice. Proceedings of the National Academy of Sciences of the United States of America 112, E4929–4938, https://doi.org/10.1073/pnas.1503911112 (2015).
31. Backman, M. et al. Effects of canonical Wnt signaling on dorso-ventral specification of the mouse telencephalon. Dev Biol 279, 155–168, https://doi.org/10.1016/j.ydbio.2004.12.010 (2005).
32. Burbach, J. P., Smits, S. & Smidt, M. P. Transcription factors in the development of midbrain dopamine neurons. Annals of the New York Academy of Sciences 991, 61–68 (2003).
33. Smidt, M. P. et al. Early developmental failure of substantia nigra dopamine neurons in mice lacking the homeodomain gene Pitx3. Development 131, 1145–1155, https://doi.org/10.1242/dev.01022 (2004).
34. Barrow, J. R., Stadler, H. S. & Capecchi, M. R. Roles of Hoxa1 and Hoxa2 in patterning the early hindbrain of the mouse. Development 127, 933–944 (2000).
35. Aruga, J. et al. Mouse Zic1 is involved in cerebellar development. J Neurosci 18, 284–293 (1998).
36. Pattyn, A. et al. Ascl1/Mash1 is required for the development of central serotonergic neurons. Nat Neurosci 7, 589–595, https://doi.org/10.1038/nn1247 (2004).
37. Pfaff, S. L., Mendelsohn, M., Stewart, C. L., Edlund, T. & Jessell, T. M. Requirement for LIM homeobox gene Isl1 in motor neuron generation reveals a motor neuron-dependent step in interneuron differentiation. Cell 84, 309–320 (1996).
38. Jing, Y. et al. In vitro differentiation of mouse embryonic stem cells into neurons of the dorsal forebrain. Cell Mol Neurobiol 31, 715–727, https://doi.org/10.1007/s10571-011-9669-2 (2011).
39. Sadegh, C. & Macklis, J. D. Established monolayer differentiation of mouse embryonic stem cells generates heterogeneous neocortical-like neurons stalled at a stage equivalent to midcorticogenesis. J Comp Neurol 522, 2691–2706, https://doi.org/10.1002/cne.23576 (2014).
40. Blackshaw, S. et al. Molecular pathways controlling development of thalamus and hypothalamus: from neural specification to circuit formation. J Neurosci 30, 14925–14930, https://doi.org/10.1523/JNEUROSCI.4499-10.2010 (2010).
41. DeMaria, J. E., Lerant, A. A. & Freeman, M. E. Prolactin activates all three populations of hypothalamic neuroendocrine dopaminergic neurons in ovariectomized rats. Brain Res 837, 236–241 (1999).
Acknowledgements
This research was supported by funding from the National Health and Medical Research Council, Australia. The Florey Institute of Neuroscience and Mental Health acknowledges the support from the Victorian Government’s Operational Infrastructure Support Grant. CLP was supported by a Senior Medical Research Fellowship provided by the Viertel Charitable Foundation, Australia. The authors thank Ms Mong Tien for her technical assistance.
Electronic supplementary material
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Neural stem cells are the self-renewing and oligopotent cell population that generates the constituent cell types of the nervous system. Cultured neural stem cells would offer researchers accessible opportunities to answer fundamental questions in both neurodevelopment and cell biology. Current strategies for maintaining neural stem/progenitor cells in vitro largely rely on neurosphere cultures (Reynolds and Weiss, 1992) and/or genetic immortalization (Frederiksen et al., 1988; Sah et al., 1997). These approaches raise concerns about cellular heterogeneity and potential cell transformation. Our lab has recently reported the establishment of adherent mouse Neural Stem (NS) cell lines that undergo symmetrical self-renewal without genetic immortalization (Conti et al., 2005; Pollard et al., 2006). Here, I apply this approach to human and rat foetal tissue and describe the derivation and characterization of human and rat NS cell lines. I established human foetal NS cell lines from elective termination tissue. Human NS cells are propagated as stable cell lines in the presence of both epidermal growth factor (EGF) and fibroblast growth factor 2 (FGF2), under which conditions they stably express neural precursor markers and exhibit negligible differentiation into neurons or glia. Human NS cells are induced to produce astrocytes, oligodendrocytes, and mature neurons upon exposure to appropriate differentiating conditions. Human NS cells are clonogenic stem cells. They are capable of generating clonal and tripotent cell lines from single deposited cells, demonstrating that they represent self-renewing in vitro human neural stem cell populations. More importantly, human NS cells retain a diploid karyotype and constant neurogenic capacity for more than 100 generations, and their long-term stability does not require leukemia inhibitory factor (LIF). Together with the demonstrations that human NS cells can be genetically modified and are accessible to multi-well time-lapse videomicroscopy, these cells create the potential for high-content genetic and chemical screens. In addition to human foetal tissue, adherent NS cells can also be derived from rat foetal brain and spinal cord. However, under standard expansion conditions supplemented with EGF and FGF2 (Conti et al. 2005), rat NS cells spontaneously become dormant after approximately 2 months of expansion. Dormant rat cells exhibit a stellate morphology and express the astroglial marker GFAP, but they still retain neural precursor markers such as Nestin and Sox2. I found that Bone Morphogenetic Protein (BMP) signals are responsible for generating quiescence of rat NS cells, and that FGF2 signaling inhibits BMP-induced astrocyte differentiation and therefore maintains stem cell potency. Applying NS cell conditioned medium or the BMP antagonist Noggin could overcome cell quiescence, and by these means the long-term propagation of rat foetal NS cells can be maintained. In addition to foetal NS cells, Noggin also promotes the proliferation of adult rat subventricular zone (SVZ) neural precursors. These observations imply that the neurogenic but quiescent rat NS cells generated by BMP and FGF2 signals may reflect some characteristics of in vivo adult neural stem cells. Lastly, I undertake a preliminary investigation of intracerebral transplantation using established NS cell lines. Mouse NS cells labelled with green fluorescent protein (GFP) were injected into the cortex, striatum and hippocampus of both adult and neonatal mouse brain. I find NS cells can survive for at least 6 weeks after transplantation, although their migration appears limited. In adult brain, mouse NS cells differentiate into both astrocytes and morphological neurons expressing interneuron markers including Calretinin and Somatostatin. However, injected cells largely generate astrocytes in neonatal brain. These observations demonstrate that NS cells can be used as donor cells for transplantation studies. Future studies are required to evaluate how human and rat NS cells will behave after transplantation. It would also be informative to investigate whether cultured NS cells may contribute to functional repair in disease models. | http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.669308
Growing evidence indicates that retinoic acid, an oxidized derivative of vitamin A, is required for cells to commit to a neuronal phenotype. Nuclear retinoic acid receptors (RARs) and enzymes with the capacity to synthesize retinoic acid persist in terminally-differentiated neural tissues such as adult eye and brain, and relatively high levels of retinoic acid have been measured in adult neural retina. Thus, it is likely that retinoic acid is functional in mature neuronal cells, perhaps influencing them to maintain their terminally-differentiated state. Recent results obtained from studies of double null mutant mice lacking RAR-beta2/RAR-gamma2 indicate a lack of photoreceptor differentiation, histologic defects in the retinal pigment epithelium, retinal dysplasia, and degeneration of the adult neural retina (Grondona et al. 1996). These results support the idea that RARs, presumably via interactions with vitamin A derivatives such as retinoic acid, stimulate neurogenesis and are involved in maintaining the integrity and function of adult retinal cells. It is of scientific and clinical interest to characterize the molecular basis for the involvement of vitamin A derivatives in processes affecting the maturation and maintenance of sensory neurons. The principal investigator proposes to investigate the nature of this involvement by (1) using an immunohistochemical approach to determine whether vitamin A status influences the differentiation and/or maturation of olfactory neurons (adult olfactory epithelium retains the ability to generate neurons throughout adult life), and (2) using a subtractive hybridization approach to characterize the types and identities of genes that are regulated by vitamin A status in a continuously-differentiating (olfactory epithelium) and two terminally-differentiated (eye and brain) adult neural tissues. Comparison of affected genes in eye, brain, and olfactory tissue will indicate whether vitamin A status regulates specific types of genes in regenerative and non-regenerative neural tissues. The results of the proposed studies will contribute to our fundamental understanding of processes that control normal neural development and function and will provide a basis for determining the mechanisms that underlie pathologies associated with neurodegenerative disease. | https://grantome.com/grant/NIH/S06-GM008092-26-22
Multiple births are associated with Assisted Reproductive Technology (ART) to a large extent.
In the last decade, the cases of women who have given birth to twins or even triplets have increased, which has led infertility specialists to establish a series of guidelines to prevent this, since multiple pregnancies entail more risks than singleton pregnancies.
For couples who have been trying to conceive for a long time, the desire to have a baby is so strong that they sometimes take comfort in the fact that the chances of getting pregnant increase if more than a single embryo is transferred. The problem is that most of them do not take into account the risk of multiple births this entails.
Being pregnant with more than one embryo can lead to severe complications for the health of both the mother and the babies. For these reasons, it is important to carefully evaluate the pros and cons before deciding how many embryos to transfer.
The main cons of multiple pregnancies are due to the risks it conveys for both the mother and the babies, including:
Preterm birth: when delivery occurs before 37 weeks of pregnancy.
Preeclampsia: a type of high blood pressure that occurs during pregnancy. It can cause kidney problems, leading to the loss of protein through urine.
Gestational diabetes: a type of diabetes that appears for the first time when the woman gets pregnant. It typically appears after the first trimester.
C-section birth: a surgical incision is performed on the abdomen and the uterus to deliver the babies.
Postpartum bleeding: diagnosed when blood loss exceeds 500 ml after a vaginal birth, or 1,000 ml after a C-section.
Miscarriage or vanishing twin syndrome: although most twin pregnancies develop without problems, the risk of miscarriage is higher than in singleton pregnancies.
Intrauterine growth restriction: the fetus is unable to grow as much as it needs due to a deficiency of nutrients and/or oxygen. It is usually due to complications in the placenta.
Low birth weight: diagnosed when the birth weight is below 2.5 kg.
Perinatal mortality: perinatal death occurs when the fetus dies at week 28 of pregnancy or later, or within the first seven days after being born.
Multiple embryo transfer should therefore be avoided, with the focus placed on single embryo transfer.
The "success" of an assisted reproductive treatment does not depend only on achieving pregnancy. The ultimate aim is to deliver a healthy baby and to avoid preterm delivery. | https://www.drpankajtalwar.com/twins-after-ivf-or-iui/
Doctors in Saudi Arabia were surprised to find that a one-year-old girl had turned out to be pregnant. They say it is the first such incident in the history of modern medicine and are now discussing whether removing the fetus from the baby girl would be considered murder.
The mother of the pregnant baby originally carried two embryos during her pregnancy. One of the embryos began to develop in the uterus of the other child. Although doctors describe the incident as unique, similar examples can be found in history (listed below).
The anomalous phenomenon is known as fetus in fetu. Such incidents are extremely rare: an embryo inside an embryo may appear once in 500,000 pregnancies. The phenomenon always occurs at an early stage of pregnancy. As a rule, the fetuses die in mother’s womb.
It may also happen that a child with a fetus inside survives the entire pregnancy. In this case the embryo continues to live inside its owner’s body like a trapped parasite. A fetus in fetu can be considered alive, but only in the sense that its component tissues have not yet died or been eliminated. Thus, the life of a fetus in fetu is inherently limited to that of an invasive tumor. In principle, its cells must have some degree of normal metabolic activity to have remained viable. However, without the gestational conditions attainable (so far) only in utero with the amnion and placenta, a fetus in fetu can develop into, at best, an especially well-differentiated teratoma; or, at worst, a high-grade metastatic teratocarcinoma. In terms of physical maturation, its organs have a working blood supply from the host, but all cases of fetus in fetu present critical defects, such as no functional brain, heart, lungs, gastrointestinal tract, or urinary tract.
Other instances of this in the media include:
- In June 1999, the case of Sanju Bhagat, a man from Nagpur, India, attracted attention for the length of time (36 years) he had carried his parasitic “twin” inside his body, and for the size of the growth. As there was no placenta, the growth had connected directly to his blood supply.
- In March 2006, Doctors in Pakistan removed two fetuses from inside a two month old baby girl.
- In November 2006, a Chilean boy in Santiago was diagnosed with fetus in fetu shortly before birth.
- In August 2007, a two-month-old baby from Baguio in the Philippines named Eljie Millapes was diagnosed with fetus in fetu. Her parents had been alarmed by the abnormal growth of her stomach, and doctors later discovered that she was suffering from fetus in fetu.
- In January 2008, a two-month-old baby in Medan, Indonesia named Afiah Syafina was diagnosed with a tumor in her stomach. After an operation on January 19, 2008, the results were startling: the suspected tumor was a five-month-old fetus.
- In May 2008, a two-inch embryo was removed from the belly of a 9-year-old girl at Larissa General Hospital in Athens after she was diagnosed with a tumor on the right side of her belly. The embryo was a fetus with a head, hair and eyes, but no brain or umbilical cord.
- In June 2008, a baby in Hejian city, China, was found to have an extra penis on his back, which doctors diagnosed as Fetus in fetu.
A baby's heart may stop in the womb for several reasons: long-term (chronic) health conditions in the mother (diabetes, epilepsy, or high blood pressure); problems with the placenta that prevent the fetus from getting nourishment (such as placental detachment); sudden severe blood loss (hemorrhage) in the mother or fetus; or heart stoppage (cardiac arrest) in the mother or fetus.
How long can a baby stay in the womb with no heartbeat?
No Fetal Heartbeat After Seven Weeks Gestation
If you are past seven weeks pregnant, seeing no heartbeat may be a sign of miscarriage. But there are many exceptions to the “heartbeat by seven weeks” rule.
What happens if a fetus heart stops beating?
One complication that can arise is cardiac arrest in the baby during birth. While uncommon, when cardiac arrest happens in newborns, it can have devastating results, up to and including death. Cardiac arrest happens when the heart stops beating for some reason.
Can a baby still grow without a heartbeat?
This is called an anembryonic pregnancy, which is also known as a blighted ovum. Or it may be that your baby started to grow, but then stopped growing and they have no heartbeat. Occasionally it happens beyond the first few weeks, perhaps at eight weeks or 10 weeks, or even further on.
What are the risks of carrying a dead fetus?
Women who retain the dead embryo/fetus can experience severe blood loss or develop an infection of the womb. These are rare complications. Gastro-intestinal side effects such as nausea and diarrhoea, cramping or abdominal pain and fever have been reported with misoprostol.
Can an ultrasound be wrong about no heartbeat?
Miscarriages are predicted by doctors when a woman’s embryo or gestational sac seems too small, and when an ultrasound shows no fetal heartbeat. (In the cases included in the study, doctors had detected a gestational sac in the uterus, ruling out the risk of an ectopic pregnancy.)
What happens if baby has no heartbeat at 12 week scan?
If you are past 12 weeks and your baby’s heartbeat could not be detected using a fetal Doppler, your provider will likely recommend a fetal ultrasound (also known as a sonogram). This test will tell you whether or not there is cause for concern through the use of imaging.
How long after baby stops growing Do you miscarry?
If it is an incomplete miscarriage (where some but not all pregnancy tissue has passed) it will often happen within days, but for a missed miscarriage (where the fetus or embryo has stopped growing but no tissue has passed) it might take as long as three to four weeks.
Do you bleed if baby has no heartbeat?
In fact, a woman may not experience any symptoms and only learn of the loss only when a doctor cannot detect a heartbeat during a routine ultrasound. Bleeding during pregnancy loss occurs when the uterus empties. In some cases, the fetus dies but the womb does not empty, and a woman will experience no bleeding.
Can you miscarry without seeing blood?
Miscarriages are relatively common and it is possible to have a miscarriage without bleeding or cramping. A missed miscarriage is also known as a “silent miscarriage”. It is called “missed” because the body has not yet recognized that the woman is no longer pregnant.
Are there warning signs of stillbirth?
What to know about stillbirth. Stillbirth is the death of a baby before or during delivery. Warning signs may include bleeding or spotting. When the baby is in the womb, doctors use an ultrasound to determine if the heart is beating.
What does the hospital do with a stillborn baby?
Planning a Stillborn Baby Funeral
Some couples let the hospital deal with a stillborn baby’s remains; many medical centers even offer funeral ceremonies by in-house chaplains. | https://health9.org/pregnant/frequent-question-why-would-a-babys-heart-stop-in-the-womb.html |
Now we turn our attention to prenatal development which is divided into three periods: the germinal period, the embryonic period, and the fetal period. Here is an overview of some of the changes that take place during each period.
The germinal period (about 14 days in length) lasts from conception to implantation of the zygote (fertilized egg) in the lining of the uterus. During this time, the organism begins cell division and growth. After the fourth doubling, differentiation of the cells begins to occur as well. It’s estimated that about 60 percent of natural conceptions fail to implant in the uterus. The rate is higher for in vitro conceptions.
This period begins once the organism is implanted in the uterine wall. It lasts from the third through the eighth week after conception. During this period, cells continue to differentiate and at 22 days after conception the neural tube forms which will become the brain and spinal column. Growth during prenatal development occurs in two major directions: from head to tail (cephalocaudal development) and from the midline outward (proximodistal development). This means that those structures nearest the head develop before those nearest the feet and those structures nearest the torso develop before those away from the center of the body (such as hands and fingers). The head develops in the fourth week and the precursor to the heart begins to pulse. In the early stages of the embryonic period, gills and a tail are apparent. But by the end of this stage, they disappear and the organism takes on a more human appearance. About 20 percent of organisms fail during the embryonic period, usually due to gross chromosomal abnormalities. As in the case of the germinal period, often the mother does not yet know that she is pregnant. It is during this stage that the major structures of the body are taking form making the embryonic period the time when the organism is most vulnerable to the greatest amount of damage if exposed to harmful substances. (We will look at this in the section on teratology below.) Potential mothers are not often aware of the risks they introduce to the developing child during this time. The embryo is approximately 1 inch in length and weighs about 4 grams at the end of this period. The embryo can move and respond to touch at this time.
From the ninth week until birth, the organism is referred to as a fetus. During this stage, the major structures are continuing to develop. By the 12th week, the fetus has all its body parts including external genitalia. In the following weeks, the fetus will develop hair, nails, teeth and the excretory and digestive systems will continue to develop. At the end of the 12th week, the fetus is about 3 inches long and weighs about 28 grams.
During the 4th–6th months, the eyes become more sensitive to light and hearing develops. The respiratory system continues to develop. Reflexes such as sucking, swallowing and hiccupping develop during the 5th month. Cycles of sleep and wakefulness are present at that time as well. The first chance of survival outside the womb, known as the age of viability, is reached at about 22 to 26 weeks (Moore & Persaud, 1998). Many practitioners hesitate to attempt resuscitation before 24 weeks. The majority of the neurons in the brain have developed by 24 weeks, although they are still rudimentary, and the glial or nurse cells that support neurons continue to grow. At 24 weeks the fetus can feel pain (Royal College of Obstetricians and Gynecologists, 1997).
Between the 7th and 9th months the fetus is primarily preparing for birth. It exercises its muscles and its lungs begin to expand and contract. It is developing fat layers under the skin. The fetus gains about 5 pounds and 7 inches during this last trimester of pregnancy, which includes a layer of fat gained during the 8th month. This layer of fat serves as insulation and helps the baby regulate body temperature after birth. | https://courses.lumenlearning.com/lifespandevelopment2/chapter/prenatal-development/
1
Pregnancy and Childbirth
3
Why has the fertility rate changed over time?
●Better healthcare and nutrition. Fewer children die because of childhood diseases.
●Don’t need children to help work on family farms/businesses.
●People with more resources tend to have fewer children.
4
Infant and Maternal Mortality
Infant Mortality: # of babies that die at birth for every 1000 births
●4.3 in Quebec in 2011
Maternal Mortality: # of mothers that die during childbirth for every 100,000 deliveries
●7.8 in Canada in 2009-10
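For readers who want the arithmetic behind these two definitions, the short sketch below shows how such rates are computed. The death and birth totals used are made-up example numbers chosen only so that the results match the figures quoted on this slide; they are not real Quebec or Canada statistics.

```python
# Illustrative only: how per-1,000 and per-100,000 rates are computed.
# The counts below are hypothetical examples, not actual data.

def infant_mortality_rate(infant_deaths, live_births):
    """Infant deaths per 1,000 live births."""
    return 1000.0 * infant_deaths / live_births

def maternal_mortality_rate(maternal_deaths, deliveries):
    """Maternal deaths per 100,000 deliveries."""
    return 100000.0 * maternal_deaths / deliveries

print(f"{infant_mortality_rate(43, 10_000):.1f} per 1,000 live births")        # 4.3, matching the Quebec 2011 figure
print(f"{maternal_mortality_rate(78, 1_000_000):.1f} per 100,000 deliveries")  # 7.8, matching the Canada 2009-10 figure
```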
5
Zygote, Embryo or Fetus???
Timeline: Name at this stage of development
When fertilization occurs: Zygote
Weeks 0-8: Embryo
Weeks 9-40+: Fetus
6
Embryo: weeks 0-8 First trimester begins with zygote, then embryo, and finally fetus (week 9). Brain, heart, limbs, eyes and spinal column begin to form.
7
Amniotic sac: surrounds the developing embryo and contains amniotic fluid, which protects the embryo from shock.
Placenta: an organ that grows attached to the wall of the uterus; attached to the baby by the umbilical cord.
Umbilical cord: a flexible cord containing blood vessels that carry nutrients from mother to baby, and waste from the baby to the mother.
8
Fetus: weeks 9-40+ Pregnancy is divided into three “trimesters”, each being about 13 weeks long. The baby is called a fetus from the middle of the 1st trimester through the end of the 3rd.
9
People who assist with Childbirth
★ Obstetrician-Gynecologist (doctor)
★ Family Practitioner (doctor)
★ Nurse-midwife
★ Doula **
★ Family members/friends **
10
Methods of Childbirth
The three main methods:
❏ Natural childbirth - vaginal delivery, no pain medication
❏ Medicated birth options - vaginal delivery, but using medical pain control methods
❏ C-section
11
Methods of Childbirth (cont’d)
Other methods:
●Water birth
●Hypnobirthing
●Use of reflexology, meditation, chiropractic care or acupuncture to control pain or prepare for birth
13
Choice of Locations for Delivery
A. Hospital
B. Birthing Center
C. Home
14
Locations for Delivery: A. Hospital
Reasons for choosing a hospital:
❏ Care given by specially trained doctors and nurses
❏ Access to emergency medical interventions, including surgery
❏ Access to pain relief: IV, injections, gas, epidural, etc.
15
Hospital Birthing Room
16
Things to consider:
❏ Little privacy or sense of intimacy; many medical people coming and going
❏ May be pressured to have medical procedures performed
❏ Must follow hospital rules around eating and drinking, the number of people able to be present, wearing hospital gowns, etc.
17
Locations for Delivery: B. Birthing Center
❏ Cared for by a certified nurse-midwife
❏ Close to a hospital; can be transferred if necessary
❏ Comfortable, home-like environment; friends and family welcome
❏ Can eat and drink, wear your own clothes
❏ Usually an option for water birth
18
Room in a Birthing Center
19
Things to consider:
❏ Must move to the hospital if complications arise
❏ Limited access to pain relief; no medications or epidurals available
❏ Often only available in larger towns/cities
20
Locations for Delivery: C. Home Birth
❏ Chosen because of the comfort of being at home and in charge of the birthing process
❏ Very low percentage (less than 2%) of births in Canada
❏ Usually attended by a nurse-midwife
21
Set up for a birth at home
22
Things to consider:
❏ Need to be close to a hospital in case of complications
❏ No pain medication/epidural available
❏ Risk of infant mortality is 2-3X higher for home births vs. hospital or birthing center
We launch the call for applications to the UNESCO King Hamad Bin Isa Al-Khalifa Prize!
Article published on 04-11-2020
The UNESCO King Hamad Bin Isa Al-Khalifa Prize for the Use of ICT in Education, established in 2005 with the support of the Kingdom of Bahrain, recognizes innovative approaches in leveraging new technologies to expand educational and lifelong learning opportunities for all, and rewards individuals and organizations that are implementing outstanding projects and promoting the creative use of technologies to enhance learning, teaching and overall educational performance in the digital age.
The theme of the 2020 edition is the use of Artificial Intelligence (AI) to enhance the continuity and quality of learning. The Prize will award scalable AI-powered solutions or technology innovations that have proven effective in improving learning outcomes of marginalized groups while ensuring ethical and equitable use of these technologies in education. Special attention will be given to projects that provide access to education in remote areas or aim to improve the availability and affordability of connectivity for education and learning.
Who can apply?
Individuals, institutions, non-governmental organizations or other entities.
Eligibility criteria
The project should be ongoing for at least 1 year;
The project and its organization should not be affiliated to UNESCO or receive any funding from UNESCO;
The technology solutions used by the project should be designed completely for public good or for charitable purposes, meaning that they should not be the free parts of commercial applications or application packages that only offer limited functions free of charge while requesting users to pay for advanced functions.
Full details about the application process are available here.
Interested candidates must contact the National Commissions for UNESCO of their respective countries.
Partners for Possibility joins Education Innovation Summit 2019
Partners for Possibility (PfP) has been confirmed as a strategic partner for the upcoming Education Innovation Summit 2019. PfP is the flagship programme of Symphonia for South Africa (SSA), a national non-profit organisation, based in Bellville, Cape Town, with a bold and audacious vision: Quality education for all children in South Africa by 2025.
Founded in 2010, the PfP initiative is focused on enhancing the quality of education, improving the school environment, and encouraging engagement between parents and teachers. By placing the school at the centre of the community, the PfP programme believes that radical transformation can be achieved in the education sector.
The 4th Annual Education Innovation Summit will take place on 29 May 2019 at the Hilton Hotel, Johannesburg. Hosted by IT News Africa, the summit will focus on exploring emerging educational technology trends and aim to provide solutions to technological challenges within education. Under the theme “Re-shaping education for a tech-driven future”, the interactive summit will facilitate in-depth discussions and case studies from local and international experts, policy makers, academics, service providers and EduTech specialists, all sharing their expertise and experiences with emerging technologies in education.
Key topics to be discussed:
Key topics at this year’s summit will include:
• Rethinking education and skills development in light of the 4th industrial revolution.
• Personalized learning strategies and data-driven student support systems.
• Opportunities and Challenges of AI in Education.
• Understanding the benefits of using IoT in education.
• Cyber security threats to education data systems.
• Addressing the risk of job losses through the use of robotics, AI and automation.
• Disruptive technologies in education: Legal and ethical considerations.
• Improving digital literacy within educational environments. | https://www.itnewsafrica.com/2019/05/partners-for-possibility-joins-education-innovation-summit-2019/ |
Date: Thursday, 11 November 2021; 15:00pm SAST
Webinar Description:
Online assessment has proven to be one of the most important and complex challenges faced by university educators during the pandemic. Designing online assessment requires knowledge of assessment models and design, consideration of ethical issues, factoring in student access issues, and navigating the affordances and constraints of several assessment tools. Educators who were familiar with the assessment practices in standard contact teaching and learning often faced a rapid and steep learning curve.
We will ask our panelists to tell us about some of their online assessment experiences including:
- Assessment principles that informed their choices
- Examples of online assessment designs for specific contexts and disciplines
- How they chose and used appropriate tools.
- Success factors for online assessment
There will be opportunities for questions and other contributions by participants in this event.
To join part 1 of this conversation please sign up via Zoom
Friday, 12 November 2021: Online Assessment Conversations – Part 2 (Examples, Proctoring: ethical debate and experiences). Save the date – more information and a separate sign-up form coming soon.
Dr. Nicola Palitt coordinates the efforts of the Educational Technology Unit in the Centre for Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University and offers professional development opportunities for academics to use technologies effectively in their roles as educators and researchers. Nicola supervises postgraduate students and co-teaches on formal courses in Higher Education. She enjoys meeting EdTech practitioners and researchers from across the globe.
Frida Ngari is the Co-founder and CEO of Firefly Edtech Solutions.
She joined the education sector full-time in 2015 and has worked in professional colleges and universities in Kenya and Rwanda. Since 2018, Frida has driven Firefly’s mission and product strategy, creating and promoting online assessment design and practice.
She is a fellow of the Alibaba e-Founders Fellowship and a recipient of the 2017 Carnegie Mellon Educational Technology Scholarship Award in South Africa. Frida is a member of e/merge Africa, which promotes the facilitation of digital teaching and learning in Africa.
Prinavin Govender is a Lecturer at the Durban University of Technology in the Information Technology (IT) Department as a Computer Science Academic. He lectures students at first and second year level in Internet programming and IT Skills.
Neil Kramm is based at the Educational Technology Unit in the Centre for Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University. Neil is passionate about academic development and infusing technology into teaching and learning. He enjoys working with innovative technologies and implementing solutions that enhance teaching and learning. He is an active researcher on issues relating to educational technology, in particular, how technology is incorporated into teaching and learning. | https://emergeafrica.net/11-november-2021-online-assessment-conversations-part-1-principles-examples-tools/ |
Share your thoughts via email or on the Web Form below.
• Advancing research and education in public policy and ethical considerations to help ensure that new technologies are conceived and implemented in support of the greater good.
The SHASS leadership and faculty are engaged in the mission of the College of Computing. We very much welcome your ideas about the work of the college — in particular, ideas about how MIT's humanities, arts, and social science fields can help inform the design of computing and AI tools; benefit from such tools in research and teaching; and help advance public policy and ethical safeguards for new computing and AI tools. Contributions can be anonymous, and will be shared only with your permission.
Via Email: Send your ideas to Dean Melissa Nobles at: [email protected].
Via Web Form: Share your ideas on the form below. | https://shass.mit.edu/Alumni-Ideas-Ethics-of-Computing-and-AI |
The Department of Home Affairs (DHA) has formalised a series of regulatory principles aimed at helping businesses and governments alike to secure supply chains for critical technologies such as Artificial Intelligence (AI) and quantum computing.
The final, non-binding Critical Technology Supply Chain Principles were outlined in a report released on Monday, following a period of industry consultation and co-design. This follows an earlier release of draft principles in October 2020.
“Critical technologies” are defined by the DHA as “current or emerging technologies with the capacity to significantly enhance or pose a risk to our national interest”, and cover both digital and non-digital (for example, synthetic biology) technologies.
The 10 newly released principles are intended to help businesses eliminate unknown risks when developing critical technologies and make better decisions about key suppliers, thereby, it is hoped, strengthening business resilience.
Acknowledging Australia’s role as a “world leader” in advanced manufacturing, the DHA affirmed that Australian industries are “keen to invest in emerging technologies”.
The report recognises, however, that “overseas markets supply many of our technological requirements and Australia imports many technologies and components that we are not best placed to produce locally”.
“To facilitate increased investment and resilience, we need to ensure enduring access to a diverse, secure and trustworthy supply of critical technologies.”
The DHA’s principles align to guidance from the Australian Signals Directorate’s Australian Cyber Security Centre and are grouped under three pillars: security-by-design, transparency, and autonomy and integrity.
Agreed principles under the secure-by-design pillar include that organisations understand what needs to be protected, as well as the risks posed to their supply chains, and ensure that appropriate measures are taken to build security considerations into in-house processes.
The DHA explained that when providers practice security-by-design in service development, “customers do not need to have expert knowledge” and “are not unfairly transferred risk that they are not best placed to manage”.
Under the transparency pillar, meanwhile, agreed principles include understanding suppliers’ security safeguards, and communicating minimum transparency requirements to suppliers, in line with existing international benchmarks.
Finally, under the autonomy pillar, agreed principles include understanding the influence of foreign state actors (if any) on suppliers, building strategic partnerships with key suppliers, as well as considering the ethical conduct of suppliers, in line with international legal standards.
To encourage businesses to adopt these principles, Home Affairs, in its report, underscored greater uptake of new critical technologies, rising consumer confidence, and improved supplier relationships as key benefits from doing so.
As a first step, the DHA encouraged businesses to implement these principles in-house, as well as with their direct suppliers, while maintaining an “expectation that those suppliers are doing the same.”
“The Australian Government will lead by example and use the principles in its own decision-making practices,” Home Affairs Minister Karen Andrews said.
For her, secure supply chains are part of Australia’s “long-term access” to secure cutting-edge critical technologies.
“Adhering to these Principles will help businesses of all sizes ensure their decisions about critical technology supply chains align with Australian values,” Andrews added.
The principles are intended to complement other reforms around “systems of national significance”, including the Security Legislation Amendment (Critical Infrastructure) Bill 2020, which has just been brought before the Senate.
The bill proposes reforms to the Security of Critical Infrastructure (SOCI) Act 2018, mandating that critical infrastructure providers (across 11 designated sectors) enhance security and resilience through a host of risk mitigation, due diligence, and governance obligations.
Elsewhere, the United Kingdom and New Zealand have similarly released supply chain security principles, providing high-level advice for market participants; the United States, meanwhile, has had a supply chain security strategy in place since 2012 but has yet to provide business-specific advice. | https://fst.net.au/government-news/govt-releases-guidelines-to-secure-critical-tech-supply-chains/ |
forbes.com As those in data science know, datasets are necessary to build a machine learning project. The dataset is used to train the machine learning model and is an integral part of creating an efficient and accurate system. If your dataset is noise-free (noisy data is meaningless or corrupt) and standard, your system will be more reliable. But the most critical part is identifying datasets that are relevant to your
equaltimes.org The term Artificial Intelligence (AI) is a somewhat catch-all term that refers to the different possibilities offered by recent technological developments. From machine learning to natural language processing, news organisations can use AI to automate a huge number of tasks that make up the chain of journalistic production, including detecting, extracting and verifying data, producing stories and graphics, publishing (with sorting, selection and prioritisation filters) and automatically tagging articles.
May 2021
iotforall.com AI/ML helps developers create better and lower-cost IoT end nodes that will benefit an ecosystem where their products exist. The benefits of AI/ML are far deeper than simply that of better decision-making in the end node; some optimizations come about bringing valuable benefits to all involved, including the consumer, the developer, and the operator. Read more
forbes.com Artificial Intelligence (AI) is becoming an integral part of the tech world. It is revolutionizing science, healthcare and our daily lives more than we would have imagined. From speech recognition and chatbots to self-driving cars, deep AI is playing a pivotal role. Although AI is the future of the world, training AI algorithms still depends on powerful computers, which consume significant energy and emit considerable carbon emissions. Read more
May 2021
globenewswire.com Artificial intelligence in retail can be defined as the process of gathering and analyzing massive amounts of data that help companies to create personalized shopping experiences for customers via recommendation engines, highly structured web shops, intelligent in-store bots and chatbots. Read more
May 2021
weforum.org While consensus starts to form around the impact that AI will have on humankind, civil society, the public and the private sector alike are increasing their requests for accountability and trust-building. Ethical considerations such as AI bias (by race, gender, or other criteria), and algorithmic transparency (clarity on the rules and methods by which machines make decisions) have already negatively impacted society through the technologies we use daily. Read
eleconomista.com.mx Global shoppers are expected to spend 20% more online in 2021 than last year, according to an Adobe report published on Tuesday, which will likely add pressure to supply chains as they struggle to keep up with a surge in demand. Read more
elmundo.es In citizens' daily lives, artificial intelligence has demonstrated its progressive adoption above all through its voice-based applications. The technology consists of machines imitating human cognitive functions, and few activities represent this better at the moment than a virtual assistant. Read more
Apr 2021
nytimes.com Sometimes it feels that people talk about artificial intelligence as if it’s a magic potion. Yes. The original sin of the A.I. pioneers was that they called it artificial intelligence. When we hear the term, we imagine a computer that can do anything people can do. That wasn’t the case in the 1950s, and it’s not true now. Read more
news.mit.edu In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward. Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action — steer right, steer left, or continue straight — to avoid hitting a pedestrian
industryweek.com Strategic collaboration looks to AI and machine learning to turbocharge the effectiveness of the design process. Like many progressive manufacturers, Rolls-Royce works with large amounts of expensive data, and the use of AI and advanced data analytics have been at the heart of its business for more than 20 years. As part of its IntelligentEngine vision, this collaboration aims to connect AI and engineering even closer to derive business value. | https://linkgua.com/category/inteligencia-artificial/ |
Self-Isolation Diary: A Day in the Life Of Hudson’s Bay’s Tyler Franch
"This has been a great opportunity for me to slow down and dedicate time to the little projects I've always wanted to do around the house."
We’re officially in Month 3 of self-isolation with an uncertain road ahead, but for some much-needed inspiration, FASHION is reaching out to some of our favourite Canadians to get a peek into how they’re living their lives in lockdown (remember: #StayHomeSaveLives). Each week, keep an eye out for new self-isolation diaries from actors, designers, influencers and artists who are riding this uncertain time out with us.
Tyler Franch, Fashion Director, Hudson’s Bay
I live with my partner, Elie, who has been integral in helping maintain my sanity! I’m a fairly social person so not being able to be with friends and family has been difficult. The occasional happy hour Zoom call is nice, though.
I make sure to organize my day the same way I would have pre-COVID. I need time in the morning for a coffee and smoothie while I catch up on the news and emails. In between Zoom meetings and strategizing for the Fall season, I make sure to take a proper lunch break outside of my workspace. It’s essential to not be online 24/7 so I also try to mark the end of a workday by going for a walk or getting down to the water for a cycle.
Being in the business of fashion, so much of what we do relies on using our senses. From touching and feeling fabrics to a specific scent and the feeling you get in a showroom—removing that tactile experience has proven challenging (but not impossible!). As the Fashion Director at Hudson’s Bay, this time has given me the chance to focus on the future. Connecting with our brands and designers at this time has allowed me to have some of the most productive conversations around planning for an optimistic Fall season.
In terms of physical exercise, doing yoga 3-4 times a week has been helpful. And I’m lucky to live close to a park so I make sure to leave the house once a day for a walk and some fresh air. I downloaded the Picture This app to help identify trees and that’s been oddly fun.
This has been a great opportunity for me to slow down and dedicate time to the little projects I’ve always wanted to do around the house, like giving my patio a facelift and organizing my closet. The extra time in the morning and evening has allowed me to add some more steps to my skincare routine and spend time exploring new beauty brands and tools that have given me some amazing results.
I finished Ozark in about five days and it was a delicious binge. I haven’t seen some classic 2000s shows like Six Feet Under and Breaking Bad so I’m working my way through those now. Thankfully, we also have new episodes of Drag Race every Friday (which I watch on Zoom with all my friends) that give us the much-needed dose of eleganza we’re missing right now. In terms of books, I’m reading Sleeveless, a book of essays by Natasha Stagg and just started a book club with friends. Our first book is History of Violence by Edouard Louis.
I think humans are incredibly resilient. Creating new habits and routines and making sure to maintain a running list of things that I’m grateful for has been essential in ensuring I maintain a positive mindset that allows me to see the silver lining in our new reality.
Change isn’t coming, change is already here; from changes that pave the way for us to be more conscious consumers to having intentional and thought-provoking conversations at work about ways to continually innovate our business. It’s uplifting and incredibly optimistic to hear all the hope and conversation around change. I’m confident no matter who you are or what you do that we will come out of this better, whatever that means to you. | https://fashionmagazine.com/wellness/tyler-franch-self-isolation-diary/ |
Steve Salter is best known as the Fashion Features Editor of i-D, but it was his menswear blog, Style Salvage, that first put him on the fashion map. Part of the 2007 blogging wave, Salter segued into mainstream fashion media via a spate of digital roles. He started at Dazed as Digital Sales Executive, later moving to i-D as Online Editor. Before taking over the Fashion Features role from Anders Christian Madsen in 2017, he also acted as Social Media Editor and Digital Editor. Steve was the last person employed by i-D founders Terry and Tricia Jones before they sold to Vice in 2012.
Writing with one eye on fashion and the other on the world, Steve manages to make heavyweight topics like cultural appropriation as accessible and engaging as lighter notes on meme fashion and microbags. His degree in Law and Sociology might seem irrelevant now, but his love of words and debating remains. Plus, it was academic boredom that first pushed him towards journalism: his first published pieces were music reviews for local zines and independent publishers in his university town of Warwick.
What attracted you to i-D?
I grew up in a small seaside town – Margate – which is popular now, but when I was there, it was the arse-end of nowhere. Reading about London and the creative scene in magazines like i-D was my escape. I knew I had to be there. Even when I accepted the Law degree, it was all part of the plan of getting that bit closer to London. I moved as soon as I graduated, worked in a few marketing jobs but soon transitioned into fashion. Being offered a role at i-D was everything, it felt like home then and still feels like home today.
Do you think it’s true that to be a fashion journalist in the UK, you have to live in London?
Location shouldn’t be a barrier today, so we’re trying to break that a little bit. Historically so much of British fashion has centred around London – as it has done with other capital cities around the world – but in this hyperconnected world, why should it? Everything is under a microscope in London but we all have to look beyond our immediate surroundings and discover the unknown because that’s where we find the best inspiration. It’s a case of us all having to look beyond our surroundings. Of course it helps when everyone is in one city because you meet people and it’s easy because everyone knows each other, It’s such a community and that’s beautiful, but if you’re coming from the outside then it’s hard to break through that. The internet has helped somewhat, but there’s still a way to go and that’s on the people who commission features.
How has the menswear scene changed since you first came to London?
When I started covering men’s fashion on the blog, it was really around the birth of London menswear as we know it now, but it was still very much a tag-on to the women’s shows in those early days. At that time, Fashion Week was around the grounds of the Natural History Museum in South Kensington, and Fashion East had a house just opposite that. It was an old Georgian townhouse, and they gave designers a room each to do an installation. Those still rank as some of my favourite fashion events. It’s where I first encountered Gosha Rubchinskiy, Nasir Mazhar, Meadham Kirchhoff, and so many more. I was so fortunate that I was one of the few journalists covering it at the time. Well, I wasn’t really a journalist, I was a blogger. It felt like we were at the start of something and, looking back now, we were. I love covering the established designers and big fashion houses today, but I can’t imagine a time when I’m not excited by the energy of emerging talents; it’s infectious. i-D has always been about new ideas, new talent, and new energy so that’s why we’re a good fit.
You just touched on it a little bit, but I’d like to talk more about your transition from ‘blogger’ to ‘journalist’ – what is the difference in your mind? And when did you start calling yourself a journalist?
I don’t know whether there is one to be honest. It’s just a title change. My Instagram bio says: a fan, not a critic. That’s something that Terry Jones actually said at i-D and it’s one of the reasons I feel so at home there. As a blogger, I only covered what I loved and as a journalist, I only cover what interests me because there’s enough negativity in this world already. If it’s not for me, I either ignore and move on, or if it’s there’s something there, then I commission it out. It’s about being honest, but it’s coming from that mentality of only really writing about stuff that you really like. That’s what I did with my blog and it’s what I try and stick to now.
It sounds so obvious but a piece always reads better if it’s written by someone who is knowledgeable about and excited by the subject. If that’s not me, then I’ll just commission it out.
It seems like we’re having a renaissance of the freelance writer, but so many of my friends that are freelancers joke about being terrible at pitching. Being able to pitch a feature is such a good skill. Someone might say they want to do a feature on a designer, but if they just googled it, they would see that we’d just covered them. So what I want to know is, what’s the hook? Why are they the best person to write it and why do people need to read about it?
I think your career trajectory is really interesting. How has your early digital focus shaped the way you approach journalism?
I would say that it’s been shaped by digital and my various roles across the business have helped hone a unique perspective but it’s always been powered by real-life interactions. Throughout my time at i-D, I’ve always gone where I felt I was most useful to the business. When I joined, there were two editors – Sean [Baker] and Sarah [Raphael] but i-D was a very simple Tumblr site back then. Terry and Tricia brought me on to help relaunch it but then they sold it to Vice. In those early days, we would only post articles on social media occasionally, the digital strategy was in its infancy. Once we joined Vice, we needed to grow our audience, and social media was the way to do that so that’s what I moved into and it was exciting because I, along with our former Editor-in-Chief Holly Shackleton, was shaping the tone and voice that enticed new readers to the site.
However, after a couple of years focussed on digital growth, I had a bit of a mental breakdown, just because of the relentlessness of it all. It’s easy to get too obsessed with the numbers side, because you’re tracking it in real-time and it’s addictive. During my time as Social Media Editor and then Digital Editor, we saw figure growth with the help of Vice and we truly became a global voice.
Aside from the obvious points – more daily content, a more immediate interaction with readers etc. – how do you think social media has changed fashion journalism? Has it changed the way people write?
Definitely. I feel guilty for it because when I watch shows now, there are times in which I think about it in terms of: what’s going to be the headline and what’s going to be the sell? You’re kind of writing the piece as you look at it because that’s the reality of fashion’s pace today. It extends beyond fashion week coverage too. Social media has essentially homogenised how content is presented, and there’s been little opportunity to escape clickbait culture, but we’re all increasingly moving away from that world. How many features have you been drawn to because of a really good title or tweet, only to discover that there’s not much else to it? It’s all smoke and mirrors and it can perform well in the short-term, but people are beginning to wake up to just how problematic it can be in the long-term.
That leads nicely into my next question, actually. How relevant do you think good quality fashion writing is to online audiences?
There’s been so much dumbing-down across the media landscape, just to reach a wider audience but I always think it’s our role to educate. Going back to your earlier question about personal milestones, I loved it when tutors at LCF or CSM told me that the blog was on their reading list. I want everyone that arrives on i-D to take away something new and ideally, involve them in a conversation. It’s about immersive storytelling across platforms because our work extends beyond published features alone. It’s about making the most of our access too, we as fashion journalists have to take audiences on a journey inside our fashion world. Beyond being presented what’s shown on the catwalk, our audiences, are interested in discovering the human stories behind fashion week. By this, I mean the designers, the models, the behind-the-scenes creatives, the assistants and everyone beyond too. They want to hear about the fascinating, the underrepresented, the challenging, the amazing, the funny, and the weird. We should inform, critique, challenge and entertain, involving our audience in each.
Have you noticed any shifts since the beginning of your career in what you want to write about and what people want to read about?
I guess putting fashion through a wider cultural lens. When events happen like the Met Gala, that doesn’t really interest me all that much personally – what celebrities wear to events – but that is something readers always like to see and know about. I just try and think, why would you come to i-D specifically for that story? I think that’s also what’s happened with the digital explosion: publications have lost their tone of voice and their perspective a little bit because everyone is covering very similar topics and talents. And it filters down to print as well – who magazines have on their covers, how things are angled, the language used in headlines – everyone looks at what everyone else is doing. That’s the biggest shift I’ve seen, but I hope that’s going to change. Especially now publications like Vogue are considering paywalls. They will no doubt need to hone in on who their reader is and what they want or it won’t work.
What, in your view, is the i-D tone of voice? What is the filter that you put on a story to make it i-D?
I guess it’s that fans not critics mantra. Informed informality. We’ve always joked it’s like the slightly older sibling, maybe, who is either encouraging their younger sibling into a new scene – you might like this, you might like that – and giving them the encouragement and platform-sharing to feel like they can be a part of it too. i-D shouldn’t be exclusionary, it should be celebratory.
You mentioned putting fashion through a wider cultural lens – I think that’s something i-D do really well. It’s never been shy about discussing gender, sexuality, race, politics…
One thing that does annoy me actually is how fashion journalism in the wider sphere isn’t seen as real journalism. Someone said that fashion journalism is closer to fashion than it is to journalism, and I think that’s true, but that doesn’t mean that it can’t evolve and become something more than that. We’re seeing the likes of Vanessa Friedman at the New York Times and Matthew Schneier at New York Magazine who are doing some really interesting pieces and exposés. I don’t think i-D will necessarily do that, but we can push for positive change.
Yeah, I think what i-D does well is intellectual fashion coverage without intellectualising it.
Exactly. Whether designers are being political in their work intentionally or not, there is always an element of that, because we can’t separate ourselves from our environment. It goes back to what i-D is, it’s about identity.
How do you integrate politics in a meaningful way without losing engagement or alienating people who don’t want fashion to be overtly political, and also without it coming across as tokenistic? Especially with digital pieces, where the headline has to be so succinct and the piece itself is more snappy.
It can never feel forced. It’s really difficult because, with social media, someone might come to an online piece with no prior understanding of what i-D is and they might have never read another article on there. I think that’s the difficulty of commissioning out opinion pieces too, because it’s obviously the opinion of the writer, but it’s also the opinion of the publication. It’s a balancing act. Within the piece, you have to make sure you’re linking to a number of other pieces that are part of that wider argument. It doesn’t all have to agree, but it needs to give a wider perspective of the issue.
You must spend the majority of your time thinking, talking or writing about fashion. What do you think are the most pressing conversations to have in fashion right now?
Sustainability. We have to make fashion a cleaner industry. So many designers are beginning to think about it now and fashion is getting better at responding to criticisms, like Extinction Rebellion’s, but there’s so much work that needs to be done. It’s a wider conversation that the industry needs to have in terms of the post-production too and how we cover stuff. We send people all over the world to see the shows, to cover launches, to profile talent. Is it all necessary?
Yeah. We often talk about fashion’s impact on the environment, but one thing that gets glossed over is the fact that journalists and models fly a lot.
Exactly. I hope the industry realises that and changes accordingly, especially since far-flung destinations – like Marrakech for the Dior cruise show and Prada showing men’s in Shanghai – seem to be becoming the norm. People who don’t really get fashion always ask me, ‘so how long is the show?’ or ‘what are you actually doing there?’ You know, I was in Marrakech for two or three days, but the show itself was about 20 minutes long. Of course, it’s a wider experience that builds the narrative but it’s such an expense of resources and there are many ways to tell stories.
I guess the other side of keeping fashion clean is, how do you maintain journalistic integrity when you’re being given so much free stuff by brands? And maybe that feeds into the ‘fan not a critic’ thing as well – a sort of criticism by exclusion?
It’s really tricky. My point of view is that it’s okay to go on the press trips if it’s transparent. The gifts thing is a whole other thing. I often get emails asking for my shoe size or something and it’s like, what are you sending me? It feels weird. I’ve got a very particular style and, this sounds ungrateful, but I don’t want random things sent to me and I’ve been sent some really weird things.
What’s the weirdest thing you’ve been sent?
Just really colourful things and crazy hype hybrids of things that just aren’t me. I end up giving them to people in the office who can appreciate them.
It all feeds into the discussion about sustainability and how much unnecessary stuff we produce and consume. It’s not just the press and influencers being sent gifts, it’s the acceleration of the fashion calendar and the high turnover of designers too. If we’re talking about sustainability more broadly or holistically, how do you think we can make fashion more emotionally sustainable for the people within it?
It’s difficult, because so many designers in recent seasons have pushed back on the sheer volume of fashion. That’s probably one of the biggest changes I’ve experienced in the industry – it’s gone from two seasons to four seasons to everything else in between, collaborations and stuff. I look at people like Virgil [Abloh], Kim [Jones] to an extent as well, and I think, how are you doing it? Obviously they’re part of a bigger team, but they do so much. Some people seem to thrive on it, but it doesn’t seem sustainable to me and an increasing number of designers are finding a way forward that works for them. I don’t personally thrive on the relentless pace. I need time to breathe and comprehend a little bit. Hopefully there’ll be a slight pushback on that, I think there has to be.
Yeah, especially when mental health has become such a big topic within fashion, and you’ve obviously had your own struggles with that.
Yeah, completely. I think the biggest thing I’ve had to navigate is being a writer and dealing with any kind of depressive or anxious episodes. Fashion isn’t really an industry that’s great for it, because things can happen at any time and it demands a lot. I always liken mental health to the weather really, and I’m aware of certain triggers and things now, but it could happen whenever. I’ve missed deadlines because of mental health struggles, but it’s something that i-D has been great at understanding and I think the wider industry is waking up to it now too. The more we talk about these issues, the better it is.
Social media and mental health seem to be closely linked. Especially this idea that, as a writer, you have to have an online persona – a ‘personal brand’ if we’re being that synthetic about it. How would you advise young journalists dealing with that?
I used to be a lot more active across social media during my blogging days but I’ve moved away from the infinite scroll because it can easily become too much. My advice to any young journalist is just to really think about what you’re posting and why you’re posting. I’ve always seen social media as an extension of what you’re writing and how you’re presenting yourself to the world but just do what’s right for you.
One final thing. I found an old copy of i-D where you said that your advice to your 16-year-old self would be: don’t do what you think you should do, do what you want to do. Would you stand by that advice now?
Totally. That goes back to the fact that I had no idea what I wanted to do when I was 16 – or 18 or 21 for that matter – and I might still not know exactly now. It’s always transient, it’s always changing. It sounds really cliche, but do what you want to do, not what your parents want you to do or what other people expect of you. It’s your life. I’m realising as I get older that we put so much focus on what our first feature in print is going to be or something, but it doesn’t matter. Things are forgotten pretty quickly. | https://bellawebb.com/2019/07/02/1granary-interview-steve-salter/ |
Trinity’s Fashion Powerhouse
Sinéad Burke spoke to students on accessibility, Instagram fame and her Mam's objections to her more avant-garde outfits.
By Aoife Murray
Brimming With Eclecticism, Fashion Trail Champions Local Creativity
Celebrators of independent fashion retailers were treated to a rich fashion trail, entitled Get Up, around Dublin's creative haunts on Thursday.
By Amelia O’Mahony Brady
Trinity Trend: Dad Trainers
How can something once so utterly repellent now be the most fashionable pair of shoes one can own?
By Aoife Murray
‘McQueen’ Dazzles and Devastates in Equal Measures
After its release earlier this year, 'McQueen', a documentary about the rebel king of fashion, is being screened in the Pavilion Theatre today.
By Eleanor Scott
Trinity Trend: Tote Bags
The humble tote bag is having its moment in Trinity.
By Niamh Kennedy
All That Glitters is Gold, at Fashion Soc Design Showcase
An eclectic mix of students, graduates and local fashion brands gathered in Regent House last night to display a variety of style offerings.
By Aoife Murray
Speaking With: Shoe Lab
We speak to shoe designer and Shoe Lab owner Neil McCarthy about the closure of this beloved Dublin shop.
By Aoife Murray
Style Without the Price Tag, at Dublin Fashion Festival
Featuring clothing from high-street outlets and local boutiques alike, last weekend's fashion extravaganza was both accessible and authentic.
By Aoife Murray
Creative Culture Celebrated by Ace & Tate
Eyewear brand Ace & Tate launched its Exchequer St store with an ode to Dublin's cutting-edge cultural scene.
By Eleanor Scott
Dublin’s Fashion Festival Champions Young Designers
On September 12th, Ireland's leading fashion influencers will gather in the Mansion House to kick off the 2018 festival. | https://universitytimes.ie/category/fashion/page/8/ |
ELMNTL is a full-service marketing and communications agency serving innovative brands, both global and local. We tell stories, touch audiences and drive business growth.
ELMNTL employs a goal-oriented strategic process focused on clearly identifying measurable outcomes and building a roadmap to achieve those goals using any channel necessary. Constant collaboration between our organization and our clients is critical to the success of every engagement.
Our practice can be broken down into 3 Primary Areas:
- Creative: Branding, Design, Writing, Strategy
- Communications: PR, Influencers, Experiential
- Marketing: Social, SEO, Advertising, Email
We pride ourselves on the diversity of our team members. Hailing from all over the world, our team truly has a global perspective on message clarity for an international audience and storytelling from a local point of view. Our staff are located in New York, Dallas, São Paulo and Manila.
Our sectors include:
- Hospitality
- Tourism & Travel
- Fashion
- Leisure
- Healthcare
- Food & Beverage
- Construction
- Real Estate
- Finance
- Technology
Our clients include: | https://wpengine.com/partners/agencies/elmntl/ |
Last month Thomas Kolster, author of the critically acclaimed book, Goodvertising, compèred a packed SB’16 Copenhagen preview event featuring insights on Behavioural Economics from Krukow, the launch of a new sustainable packaging solution from Carlsberg and the announcement of a new circular-economy business direction for IKEA.
Following the event, I sat down with Kolster to get his views on his key focus areas for the year ahead, explore the changing role of ‘advertising for good,’ and discuss the sustainability landscape in Scandinavia, where Sustainable Brands’ new Northern European conference will take place later this year.
What do you think are the key themes in sustainability for this coming year?
I think breakthrough innovation is a term on everybody’s lips at the moment - in particular the merging of physical, digital and biological spheres. You only need to look at the success of companies like FitBit to see how that’s going to grow over the coming months.
Exploring new business models is another core topic. If you look at brands like BMW and Philips, they’re moving their focus away from products like cars and lightbulbs and towards services like mobility and illumination. Today’s consumers are far more independent - they’re able to print their own products and generate their own electricity – I think that is going to be both an opportunity and a challenge for businesses to adapt to.
Storytelling is one of the enduring themes that continue into this year. We’ve reached a point in time where the ability to communicate purpose is instrumental to the success of any new venture. In the past we’ve seen many great sustainability initiatives struggle because they weren’t communicated in a passionate and engaging way. Storytelling remains the fabric that facilitates and inspires change.
What are the opportunities and challenges particular to the Scandinavian region?
There are some amazing things happening in Scandinavia at the moment. In Denmark, particularly, the fashion industry is doing pioneering work around sustainability. We have the Copenhagen Fashion Summit happening in May, and the Danish Fashion Forum and Danish Textile Union have been working hard to try and improve transparency in the industry and tackle social and environmental problems.
However, I think one of the core challenges for the Scandinavian region is that culturally we’re quite a modest people; we don’t tend to brag about our successes. It’s something we call in Denmark ‘The Law of Jante’ and it’s all about being humble, not making a fuss. Along with the language barrier between us and non-Danish-speaking nations, it can make talking about sustainability achievements difficult. A lot of times I go into Scandinavian businesses and see a treasure chest of good stories that haven’t been told.
What role will advertising play on the journey ahead?
No matter where we are - Denmark, Scandinavia or anywhere else in the world - we have an opportunity to learn from each other. The solutions are out there, we just need to scale them, and advertising can bring those stories to a wider audience.
Historically, advertising has also had the ability to shape trends, and I think that’s a responsibility we need to take far more seriously if we want to create widespread change. We can’t leave it up to consumers - as Henry Ford put it: 'If I had asked people what they wanted, they would have said faster horses.'
Encouragingly we’re already seeing some brands stepping up to that task. Take Chipotle for example, with their recent “Friend or Faux” campaign. I’m working on the next edition of Goodvertising, looking at the rise of these challenger brands, these mega-brands that go up against the narrative. I think the work they’re doing right now is planting the seeds for the brands of the future. | https://sustainablebrands.com/read/stakeholder-trends-and-insights/kolster-brands-of-the-future-will-lead-by-challenging-sustainability-narrative |
By the 1980s, the suit as a women’s power uniform was in full swing, fronted today by German Chancellor Angela Merkel and former US Secretary of State Hillary Clinton. For a long time it remained a largely Anglo-Saxon sartorial trend, but it is now making its way to Asia.
Follow These: 5 Fashion Influencers Making Waves
We take a look at five of the most influential bloggers, who have brought more attention to brands and runways than any publication has in recent times.
Model to Watch: Hailey Baldwin
The 19-year-old daughter of Stephen Baldwin, and niece of actor Alec Baldwin, may be the next rising star of modeling.
British Vogue gets its own documentary on the BBC
BBC Two has commissioned an observational documentary about British fashion magazine Vogue as it nears its centenary year. | https://www.luxuo.com/tag/vogue |
The Harlem Renaissance • Began in Harlem, NY, in the 1920s • An increase in black pride led many black intellectuals to write works portraying the lives of the African American working class. • These writers used European literary styles to express their ideas.
Jazz, Blues, and Theatre • Jazz and blues were new forms of music that were a product of the Harlem Renaissance. • During and after WWI, musicians from New Orleans and Mississippi brought their talents to the large cities of the North. • There they found a receptive audience of both whites and blacks.
Key Figures of the Renaissance • Langston Hughes (author) wrote memorable plays, poems, and short stories about the black experience in the US. • Zora Neale Hurston (writer) wrote the first major stories about African American women.
Racial Conflicts • African Americans were denied almost all opportunities for advancement. • Eager for a solution, many joined Marcus Garvey’s BACK TO AFRICA MOVEMENT. • Though few left the US, the movement inspired unity among blacks. • Riots occurred in major Northern cities as mobs of whites invaded black neighborhoods and killed black residents out of anger that they were taking low-paying jobs. | https://www.slideserve.com/oliana/african-american-culture |
“The Lost Generation” is the nickname stamped on the 1920s generation of young American writers, intellectuals, and artists. As they came of age in the “Roaring Twenties”, following the first World War, also known as the Great War, they were raised in a time of absolute uncertainty.
With the post-war period marking such a drastic turning point in their personal belief systems, their trust in government, and their artistic standing in the United States, their lack of purpose stemmed from feeling “lost” after coming of age amid so much pointless death, and their art derived from losing faith in traditional foundational values such as courage and patriotism. Some of the most famous members of the Lost Generation included Gertrude Stein, Ernest Hemingway, F. Scott Fitzgerald, Ezra Pound, E.E. Cummings, and T. S. Eliot.
The First World War
World War I shook America, still a young country, to its core. Seeing the carnage left behind by countries that deemed themselves civilized, after a war that still didn’t seem truly won, left a deep impression on the youth of that time. Many young people nevertheless participated in the war in support of their country, and as a result, millions lost their lives.
Who Were the Lost Generation Writers?
The work of many of the Lost Generation has been studied for decades. For example, F. Scott Fitzgerald, Ernest Hemingway, Gertrude Stein, T.S. Eliot, John Steinbeck, and John Dos Passos were part of the group of American writers whose works carried the weight of the conflicts that had so clearly moved them. The Great Gatsby and For Whom The Bell Tolls are some of the most well-known works in American literature of the 20th century. These writers wove cultural decadence and moral corruption into their stories, putting plainly what they were witnessing during their lifetimes.
The name “the Lost Generation” was actually coined by Gertrude Stein, but was made popular by Ernest Hemingway, who used the line, “You are all a lost generation,” as an epigraph in his 1926 novel, The Sun Also Rises.
Jazz Music and Harlem
In the ’20s, the economy took an upswing and saw unprecedented growth, and some would say that consumer culture bloomed during this jazz age. Americans started to buy cars, appliances, and whatever money could buy. Technological advances multiplied as well, from the telephone to the phonograph, the latter making it possible to replay recorded music. Musicians, especially jazz players, and music lovers alike benefited from these inventions.
The Harlem Renaissance was the boom of art, music, poetry, and literature in the Black communities that resided in New York City’s neighborhood of Harlem. Names like Zora Neale Hurston, author of Their Eyes Were Watching God, Langston Hughes, and Countee Cullen were some of the most famous associated with the Harlem Renaissance. Jazz music, loved by all, was heavily played by Black musicians and often performed at jazz clubs, a favorite scene; the Cotton Club was one of the most happening spots for nightlife. | https://rare.us/rare-news/history/lost-generation/ |
- During the audio tour of the exhibition, The Renaissance: Black Arts of the Twenties, narrator Robert Hall presents the evolution and achievements of black creative expression beginning in Harlem and spreading across the United States during the 1920s. Literary, visual, performance, and cinematic achievements are profiled, including brief biographical histories and achievements of Marcus Garvey, James Weldon Johnson, Jessie Fauset, A. Philip Randolph, Claude McKay, Nella Larson, Carl Van Vechten, Countee Cullen, Alain Locke, Harry T. Burleigh, Paul Robeson, Roland Hayes, Lois Mailou Jones, Jules Bledsoe, Fletcher Henderson, Bessie Smith, and Mamie Smith.
- Self-guided audio tour narration. Part of The Renaissance: Black Arts of the Twenties Audiovisual Records. AV001362: master. Undated.
- Date
- circa 1985
- Extent
- 1 Sound recording (open reel, 1/4 inch)
- 1 Sound recording (audio cassette)
- Type
- Archival materials
- Sound recordings
- Narration
- Occupation
- Artists
- Dramatists
- Topic
- African Americans
- African American women
- Harlem Renaissance
- African American authors
- African American women authors
- Authors
- African American poets
- Poets
- African American artists
- Sculpture
- Painting
- African Americans in the performing arts
- Musical theater
- African American musicians
- Musicians
- Spirituals (Songs)
- Jazz
- Blues (Music)
- Museum exhibits
- Place
- Harlem (New York, N.Y.)
- Anacostia (Washington, D.C.)
- Washington (D.C.)
- United States
- Culture
- African American
- Citation
- The Renaissance: Black Arts of the South Exhibit Tape, Exhibition Records AV03-024, Anacostia Community Museum Archives, Smithsonian Institution.
- Identifier
- ACMA.03-024, Item ACMA AV002682
- Local Numbers
- ACMA AV001362
- General
- Title transcribed from physical asset. | https://anacostia.si.edu/collection/archives/object/sova-acma-03-024-ref503 |
1959, The Year That Changed Jazz
The year 1959 marked a monumental year in American music history. Many American jazz artists made recordings that influenced society profoundly and left a lasting impact that still is present to t...
A Concert Review of Dimensions in Jazz Directed by Wade Judy
Jazz Concert Assignment I decided to go to the “Dimensions in Jazz” concert which was directed by Wade Judy, Dr. Eric Bush, and Marko Marcinko. Before the performance started, there was a wide ran...
A Musical Performance By Students At Jazz Station In Eugene
Concert Report w/ Selfie On Friday night the 19th I was fortunate enough to attend a concert at which multiple jazz combos made up of UO music students performed at the Jazz Station here in Eugene. This...
Biography and Career Of Billie Holiday
Billie Holiday’s Path to become a Legendary Jazz Singer Billie Holiday was one of the most famous jazz singers of the 20th century. Billie Holiday’s innovative phrasing about her life experiences...
Charles Franklin 6th Hour Jazz Band Thelonious Monk Thelonious Monk was born on October 10, 1917 in Rocky Mount, North Carolina. When he was just four, his parents, Barbara and Thelonious, Sr., moved ...
Critical Analysis Of Thelonius Monk
To say songwriter Thelonius Monk was a talented jazz musician is an understatement to say the very least. The main focus of this essay is a critical analysis of one of the top jazz musicians ever, The...
Development Of Music During The Harlem Renaissance
The cultural shift that the United States experienced during the Harlem Renaissance affected the lives of everyday citizens. One factor that affected this cultural shift was the new, lively music you ...
Human bodies are geographical markers as they can span physical borders through their affection.
Textual, mnemonic, and physical gaps leave room in which identity is found through body and environment in Michael Ondaatje’s The English Patient and Toni Morrison’s Jazz. Ondaatje’s characters ...
Improvisation and other Jazz-like Techniques in Jack Kerouac’s Writing
Bop jazz divorced itself from its mainstream predecessor when musicians like Charlie Parker, Dizzy Gillespie, and Thelonious Monk began to emphasize fast tempo and improvisation over the predictable m...
Jimi Hendrix and Jazz-Rock Fusion
For rock legend Jimi Hendrix to have spoken these prophetic words may come as a surprise to some. He was at the height of his career; he was wanted by women worldwide; he was living a rags-to-riches f... | https://www.protechengineering-uk.com/essays/jazz-117 |
Journey to jazz in 1940s Harlem and D.C.
Use your imagination to experience the Harlem Renaissance as you visit the Cotton Club and Small's Paradise in Harlem and the Bohemian Caverns and the Lincoln Colonnade on U Street in D.C. Enjoy music in a nightclub setting as you listen to John Coltrane "walk the bar" with his saxophone, Ella Fitzgerald "scat," Billy Eckstine "croon and make the ladies swoon," plus Louis "Satchmo" Armstrong, Duke Ellington, "Lady Day" Billie Holiday and other jazz artists.
On May 20 at 7 p.m., enjoy a discussion of the jazz artists during the Harlem Renaissance. Share memories of your favorite jazz artist during the Harlem Renaissance or come to learn about these artists and their influence in America and in Europe. Also read the book Jazz A-B-Z by Wynton Marsalis.
Photos of the artists will be displayed with brief biographical information. Music by the artists will also be played.
Light refreshments will be served. | https://www.dclibrary.org/node/40892 |
Black History Month is an annual celebration of African Americans' achievements and a time for recognizing their central role in U.S. history. As we highlight Black Americans' contributions, we're going to reflect on some performances and events at Omaha Performing Arts. As we look back on influential artists who have shaped our community, this reflection is a brief list of creatives and their impact on theater, dance, blues/jazz and photography.
Theater
African American music, dance, arts, fashion, literature and theater took center stage in Harlem, New York, during the 1920s and 1930s. This era was known as the Harlem Renaissance and impacted Black culture worldwide. African American writers and artists began taking control of Black representation in music, theater, literature and visual arts. Some notable writers during the era include Zora Neale Hurston and Langston Hughes. Hughes's 1935 play "Mulatto" won wide acclaim, becoming a Broadway hit. Other artists who used their work to celebrate Black culture include actor Paul Robeson, jazz musician Duke Ellington, and dancer and singer Josephine Baker, to name a few.
The Harlem Renaissance laid the groundwork for playwrights and authors like August Wilson. Wilson's "The Pittsburgh Cycle" is known as one of his greatest achievements. Also called the "Century Cycle," the piece is a series of ten plays that chronicle 20th century Black American life. The 1983 play "Fences" won a Pulitzer Prize and Tony® Award. Wilson's plays are performed across small and large stages across the U.S.
Actors from Omaha, Raydell Cordell III, Kathy Tyree, Tyrone Beasley, and his father, John Beasley, have all participated in a Wilson production. The four actors gathered during a panel discussion at O-pa in October 2020 to reflect on the play and movie adaptation of "Fences." Seeing the four actors discuss Wilson's work and their involvement in his productions is a great testament to the legacy and impact of Black artists. You can watch the discussion on O-pa's Facebook or YouTube page.
PHILADANCO
Dance
PHILADANCO is an extraordinary dance group celebrated for its innovation and creativity. Founded in 1970 by Joan Myers Brown, PHILADANCO was established to give Black ballet dancers performing opportunities that Brown said she did not have as a Black ballerina during her early career. PHILADANCO uses the language of dance to explore Black history, social justice and civil rights. Brown has received many accolades for her accomplishments, including the 2012 National Medal of the Arts and the distinguished 2019 Bessie Award for Lifetime Achievement in Dance, for her choreographic influence on Black dance in America. Her contributions have motivated and inspired generations of students, dancers, choreographers, directors, historians, and educators in the allied performing arts. PHILADANCO will perform at the Orpheum Theater on April 29 at 7:30 PM.
Rudy Smith
Photography
Rudy Smith was born in Philadelphia and, shortly after his birth, his mother moved the family to Omaha. In the 1960s, at 18 years old, he protested against the Omaha World-Herald's employment practices and became the first Black employee in the newsroom. Smith worked for the Omaha World-Herald for 45 years. He was an award-winning photographer and chronicled the Black experience, from education, family, sports and music to injustices and more. Throughout Smith's photography career, he worked as both the objective observer and the committed activist. During a time of Civil Rights turmoil and reform in America, Smith photographed historical subjects such as protests, marches, and riots. Smith passed away in December 2019. His daughter Q. Smith carries the torch as a Broadway actress, becoming the first Black lead in Mary Poppins, per an Omaha Magazine interview. In December 2020, Voices AMPLIFIED!, a series on arts and social justice, hosted a panel discussion and exhibit of Rudy Smith's work. You can watch the panel discussion on Facebook or YouTube.
Blues/Jazz
Jazz originated in African American communities in the early 20th century, gradually emerging as a blend of ragtime and blues. The evolution of the genre was led by a series of great musicians like Duke Ellington, Louis Armstrong, Charlie Parker and Miles Davis. Dozens of Black jazz and blues artists have performed on our stages at O-pa, but the late Rudy Smith captured the performance of one artist in particular: B.B. King.
B.B. King, a singer and guitarist, began his music career at 22 years old. He hitchhiked to Memphis, TN, and became one of the best-known blues performers. His biggest hit single, "The Thrill is Gone," was released in 1969. According to Biography.com, King became the first bluesman to tour the Soviet Union in 1979 and the first bluesman to enter the pop mainstream. He received 18 Grammy Awards throughout his career, as well as the Presidential Medal of Freedom and the Kennedy Center Honors. King performed at the Orpheum in 1995. He died in 2015. His contributions to music earned him the title "King of the Blues."
This list is only a glimpse of the many contributions Black Americans have made to our society. If you're looking for ways to acknowledge and celebrate Black culture throughout the year, join us for Voices AMPLIFIED! It's a series on arts and social justice to amplify voices in diverse cultures. During the 2020/2021 season, Voices AMPLIFIED! has partnered with performing artists who reflect on Black history and racial equity to amplify Black voices, Black stories and encourage dialogue in the community. We have events coming up this week in February, March, April and a finale in June! Don't miss them. You can also watch previous recordings from panel discussions. Visit: Voices AMPLIFIED! | https://o-pa.org/about-us/blog/opa-blog/2021/02/04/celebrating-black-history-month |
-- 100+ page biographies of famous Americans. -- Great for reports and curriculum tie-ins.
In this authoritative volume, race and ethnicity are themselves considered as central organizing principles in why, how, where and by whom crimes are committed and enforced. The contributors argue that dimensions of race and ethnicity condition the very laws that make certain behaviors criminal, the perception of crime and those who are criminalized, the determination of who becomes a victim of crime under which circumstances, the responses to laws and crime that make some more likely to be defined as criminal, and the ways that individuals and communities are positioned and empowered to respond to crime. Contributors: Eric Baumer, Lydia Bean, Robert D. Crutchfield, Stacy De Coster, Kevin Drakulich, Jeffrey Fagan, John Hagan, Karen Heimer, Jan Holland, Diana Karafin, Lauren J. Krivo, Charis E. Kubrin, Gary LaFree, Toya Z. Like, Ramiro Martinez, Jr., Ross L. Matsueda, Jody Miller, Amie L. Nielsen, Robert O'Brien, Ruth D. Peterson, Alex R. Piquero, Doris Marie Provine, Nancy Rodriguez, Wenona Rymond-Richmond, Robert J. Sampson, Carla Shedd, Elizabeth Trejos-Castillo, Avelardo Valdez, Alexander T. Vazsonyi, Maria B. Velez, Geoff K. Ward, Valerie West, Vernetta Young, Marjorie S. Zatz.
Danny Jabo takes command of the USS Pittsburgh, and soon finds himself operating in some of the world's most dangerous waters. It's a journey across the globe, and back in time, as Danny must familiarize himself with diesel-powered submarines and the heroic World War II captains who commanded them.
THE CAPE COLONY, 19TH CENTURY SOUTH AFRICA. Jamie Fyvie and Iain McColl sail for Cape Town to wage war on the ELOS, a dark force which has taken control of the armed forces in the Cape Colony. The boys must crush a plot to wipe out the Xhosa clans of the Eastern Cape, the survivors to be sold into slavery in America. The voyage sees them battle with mutineers to save their own lives and two young ladies who will become part of their future in the Cape. An ELOS executioner pursues Jamie and Iain, hell-bent on murdering them for the clues to a treasure buried by the Knights Templar five hundred years in the past. The Xhosa fight a bloody war against the British Army on the frontier, dying to win back their lands and freedom against a general who takes no prisoners.
This book describes a woman's rise from living hand-to-mouth to becoming a pilot for United Airlines. She dreamed, worked very hard and never gave up.
One of the most challenging of lighting problems is explored, explained, and demonstrated in the striking nighttime photos that fill these dramatic pages.
Harlem has captivated the imagination of writers, artists, intellectuals, and politicians around the world since the early decades of this century. Rhapsodies in Black: Art of the Harlem Renaissance examines the cultural reawakening of Harlem in the 1920s and 1930s as a key moment in twentieth-century art history, one that transcended regional and racial boundaries. Published to coincide with the exhibition that opens in England and travels to the United States, this catalog reflects the Harlem Renaissance's impressive range of art forms—literature, music, dance, theater, painting, sculpture, photography, film, and graphic design. The participants included not only artists based in New York, but also those from other parts of the United States, the Caribbean, and Europe. Richard J. Powell and David A. Bailey present selected works that focus on six themes: Representing "The New Negro;" Another Modernism; Blues, Jazz, and the Performative Paradigm; The Cult of the Primitive; Africa: Inheritance and Seizure; and Jacob Lawrence's Toussaint L'Ouverture series. The visual arts from 1919 to 1938 included in the book suggest the extraordinary vibrancy of the time when Harlem was a metaphor for modernity. In spite of the importance of the Harlem Renaissance to early twentieth-century American culture and to the artistic climate of "Jazz Age" Paris and Weimar Berlin, few art exhibitions have been devoted exclusively to the subject. Rhapsodies in Black will be welcomed for its unique presentation of this creative time. | https://tedxchulalongkornu.com/538-thomas-nast-political-cartoonist |
This book offers a comprehensive and detailed guide to accomplishing and perfecting a photorealistic look in digital content across visual effects, architectural and product visualization, and games. Emmy award-winning VFX supervisor Eran Dinur offers readers a deeper understanding of the complex interplay of light, surfaces, atmospherics, and optical effects, and then discusses techniques to achieve this complexity in the digital realm, covering both 3D and 2D methodologies.
In addition, the book features artwork, case studies, and interviews with leading artists in the fields of VFX, visualization, and games.
Exploring color, integration, light and surface behaviour, atmospherics, shading, texturing, physically-based rendering, procedural modelling, compositing, matte painting, lens/camera effects, and much more, Dinur offers a compelling, elegant guide to achieving photorealism in digital media and creating imagery that is seamless from real footage. Its broad perspective makes this detailed guide suitable for VFX, visualization and game artists and students, as well as directors, architects, designers, and anyone who strives to achieve convincing, believable visuals in digital media. | https://www.speedyhen.com/Product/Eran-Brainstorm-Digital-USA-Dinur/The-Complete-Guide-to-Photorealism-for-Visual-Effects-Vis/26676526 |
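Among the topics listed above, compositing in particular reduces to a simple per-pixel rule. The sketch below is not taken from the book; it is a minimal illustration of the standard premultiplied-alpha "over" operator, with NumPy arrays assumed for the foreground, background, and alpha channels.

```python
import numpy as np

def composite_over(fg_rgb, fg_alpha, bg_rgb, bg_alpha=None):
    """Premultiplied-alpha 'over' operator: place fg on top of bg.

    fg_rgb, bg_rgb : float arrays of shape (H, W, 3), premultiplied by alpha
    fg_alpha       : float array of shape (H, W), values in [0, 1]
    """
    if bg_alpha is None:
        bg_alpha = np.ones(fg_alpha.shape)
    a = fg_alpha[..., None]
    out_rgb = fg_rgb + bg_rgb * (1.0 - a)                 # background shows through where fg is transparent
    out_alpha = fg_alpha + bg_alpha * (1.0 - fg_alpha)    # combined coverage
    return out_rgb, out_alpha
```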
22 questions linked to/from How to mockup a logo in a realistic environment?

- How to achieve this 3D "Card" effect (35 votes, 4 answers, 36k views): How do I create this type of effect with an image? I'm wondering how to get the thick edges and the tilted look. Is there an online tool that will convert a flat image to look like this? Source
- How can I create realistic business card mockups? (10 votes, 1 answer, 12k views): I'm trying to produce the effect in the below image: I'd preferably like to do this with Photoshop or Illustrator. How can this be accomplished?
- How do designers render their work onto an iPhone/Android? (7 votes, 3 answers, 3k views): Before you down vote, bare with me as I'm a developer! I've always been curious how designers render their work onto a phone to show their customers how it may look on the phone, for example: (Source:...
- Creating 3D book cover mockups from scratch (5 votes, 3 answers, 5k views): You very often see 3D mockups of books that look realistic enough to pass for a photo of the book at a glance. They're a great way to make a flat front page come to life, and it's no wonder they're ...
- How to create 3D presentational mockup templates to showcase logo design to a client? [duplicate] (3 votes, 2 answers, 4k views): I would like to know how to create a 3D presentational mockup template that utilizes smart objects to change dynamically for example logo design. I would like to know also tricks on how to add shadows ...
- Creating mockup template using inkscape for logo design presentation (2 votes, 1 answer, 4k views): How do I create mockup templates for logo design using Inkscape? I don't have Photoshop where mockup template can be created very easily. I have created a logo and I want to use this mock up as a ...
- How to do a prototype print of a business card (2 votes, 2 answers, 3k views): I'm a newbie in the freelancer world. I started creating logos and business cards and I show it to my customers. Well, I always face a problem how to present my work in a professional and a nice way. ...
- How do I make a mockup PSD file from a pair of 3D renders? (2 votes, 1 answer, 27 views): I'm not sure if this falls more into the domain of graphic design or 3DCG so I figured I'll just ask the experts at image manipulations first. Basically, I made a 3D scene in Blender of a paper bag so ...
- Creating 3d mock-up texture (1 vote, 1 answer, 368 views): I downloaded a 3d poster mockup from this website The zip file contains 001.psd which allows you to put your poster in it and then creates a sort of wrinkled, 3d-ish texture to your artwork. I ...
- How to create perspective like this in Photoshop? (1 vote, 2 answers, 7k views): I really want to start creating professional mockups for my branding projects and for my portfolio but don't know how to create perspective like this.
- How to place text in perspective on product mockup (1 vote, 1 answer, 2k views): Hi i want to write a letter W on the bag, as you can see it has a kind of perspective to it, so the letter should follow this perspective. I am using Adobe Illustrator.
- How can you achieve realistic closeup perspective in Photoshop using 3D Postcard feature? (1 vote, 1 answer, 405 views): I'm trying to achieve this look in Photoshop 3D, but I am struggling to get the perspective to look realistic. I create 3D Postcards from screenshots of software I am trying to showcase. I have ...
- How to place UI gif in iPhone perspective mock for use on sites like Dribbble [closed] (1 vote, 1 answer, 3k views): I've seen many tutorials about how to convert a .mov file to a gif in Photoshop (Convert Video Frames to Layers, etc.), but none about how to place that gif in an iPhone perspective mock to be shown ...
- How to create 3d text or photo based logo [duplicate] (0 votes, 1 answer, 6k views): I am completely new to Photoshop and now I am able to create simple logo based on images available or creating simple 3D text. I am fascinated by these logo design & I am not able to find any ...
- How do I create a mock-up photorealistic book and graphics and what free tool could be used? [closed] (0 votes, 4 answers, 2k views): Can you create illustrations (see below) with indesign? I would like to create this content and offer PDF brochures to visitors and make it look much better. However, I am not sure if this is ... | https://graphicdesign.stackexchange.com/questions/linked/113783?sort=votes
Shaded relief, or hill-shading, used in Cartographic relief depiction, shows the shape of the terrain in a realistic fashion by showing how the three-dimensional surface would be illuminated from a point light source.
Contents
Design Guidelines
There are a number of factors worth taking into consideration before designing a shaded relief. Two such ideas are scale and generalization. At a small scale, it will be necessary to exaggerate the relief to obtain the desired effects. One must also consider alternative forms of displaying their shaded relief, such as from an aerial perspective or in a plan oblique relief. These both provide the viewer with a more realistic angle and point of view of the depicted area.
A hillshade map is calculated from an elevation dataset using an algorithm that determines the amount of shadow to apply to each raster cell, depending on the elevation value of that cell and the location of the light source, usually the northwest corner of the map. Slope and aspect values are also taken into account. A useful property of a hillshade map is that when you place it under semi-transparent layers, you get a sense of where those layers' features sit in relation to the topography of an area.
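A minimal sketch of that per-cell hillshade calculation is given below, using NumPy on a DEM array; the cell size, azimuth, and altitude defaults are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth=315.0, altitude=45.0):
    """Analytical hillshade: one shading value per cell from slope, aspect and sun position.

    dem      : 2-D array of elevations
    cellsize : ground distance between cells (same units as the elevations)
    azimuth  : sun direction in degrees clockwise from north (315 = northwest)
    altitude : sun angle above the horizon in degrees
    """
    zenith = np.radians(90.0 - altitude)
    az = np.radians(360.0 - azimuth + 90.0)      # convert compass azimuth to math convention

    dz_dy, dz_dx = np.gradient(dem, cellsize)    # elevation change per ground unit
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)

    shaded = (np.cos(zenith) * np.cos(slope) +
              np.sin(zenith) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)             # 0 = fully shaded, 1 = fully lit
```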
In most shaded relief maps, the light source is in the Northwest. Many people have pointed out that this is unrealistic for maps of the northern hemisphere, because the sun doesn't shine from that direction, and they have proposed using southern lighting. However, the normal convention is followed to avoid multistable perception illusions (i.e. crater/hill confusion).
History
Traditionally shaded reliefs were drawn with charcoal, airbrush and other artist's media. Swiss cartographer Eduard Imhof is widely regarded as the master of manual hill-shading technique and theory. Imhof's contributions included a multi-color approach to shading, with purples in valleys and yellows on peaks. Shaded relief today is almost exclusively computer-generated using digital elevation models (DEM), with a resulting different look and feel. This model is used to show elevation with the use of a grid system of many small rectangular shapes that give the form of a three-dimensional virtual landscape. The light-dark pattern is used as a 3D illusion as west-facing slopes are brighter to show the luminosity of the sun and east-facing slopes are shaded to show the lack of sun. The use of illumination and shadow to produce an appearance of three-dimensional space on a flat-surfaced map closely parallels the painting technique known as chiaroscuro. The DEM may be converted to shaded relief using software such as Adobe Photoshop or ArcMap's Spatial Analyst extension.
Resolution Bumping
Resolution Bumping is a hybrid technique, developed by Tom Patterson, a cartographer for the U.S. National Park Service, for improving the appearance of shaded relief on small-scale maps. The analytical hillshading algorithms in GIS tend to produce images that emphasize ruggedness (areas with a high frequency of changing illumination) rather than prominence (the relative height of locations). Thus, an area of rolling hills may stand out more than a very high mountain that rises relatively smoothly (e.g., a shield volcano).
To counteract this, resolution bumping blends a high-resolution DEM with a lower-resolution DEM (either in an image editing program or in raster GIS using a weighted average). When the weights of the two inputs are properly balanced, the resultant shaded relief shows both small details and larger trends. According to Patterson, "Resolution bumping in effect 'bumps' or etches a suggestion of topographical detail onto generalized topographic surfaces."
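A minimal sketch of the weighted-average form of this idea is shown below; it assumes two NumPy DEM arrays already resampled onto the same grid, and the 0.3/0.7 weights are illustrative, not Patterson's published values.

```python
import numpy as np

def resolution_bump(dem_high, dem_low, w_high=0.3, w_low=0.7):
    """Blend a detailed DEM with a generalized one before hillshading.

    Both arrays must share the same shape and cell size; the low-resolution DEM
    is assumed to have been upsampled to match the high-resolution grid. The
    weights control how strongly small details are 'etched' onto the
    generalized surface.
    """
    return w_high * dem_high + w_low * dem_low

# The blended surface is then shaded like any other DEM, for example:
# shaded = hillshade(resolution_bump(dem_high, dem_low))
```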
Illuminated Relief
Shaded relief will normally have the background color of the terrain on the slopes facing the light source. This is easier to draw by hand and is normally what GIS software will produce when shaded relief is generated. Greater realism, however, can be produced if the slopes facing the light source are actually illuminated, or in other words, are given a brighter color than the background color of the terrain layer. This greater realism enhances the sense of the three-dimensional nature of the terrain (which may also eliminate the need for hypsometric tinting). It will also allow for the terrain to be drawn with lighter shadows, which can be useful for map design concerns. The illumination can be portrayed in a complementary color to the terrain background color, which gives the map a more aesthetically pleasing, artistic look. While illuminated shaded relief was difficult to produce by hand and with early computer tools, it is now relatively easy to produce by editing images produced by a GIS with Photoshop or similar software.
Sky Model Shading
While a normal hillshade effect is produced by using a single light source that is placed an infinite distance away from the map, sky models use multiple light sources to mimic a real-life sky surrounding the map. In more technical terms, Kenneth Field of ESRI describes them as follows, "[Sky models] build multiple hillshades with varying azimuth, zenith and intensity of light source and combine them in a weighted output to create hillshades under different lighting conditions with some dramatic effects". This type of shading helps the map reader to determine relative heights of objects. There are multiple sky models; some give the effect of a clear sky and others that of a cloudy sky. The International Commission on Illumination (CIE) standardizes these models.
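A sketch of that weighted multi-hillshade combination follows; the azimuth, altitude, and weight triples are made-up stand-ins rather than values from a CIE-standardized sky model, and the shading function is passed in as a parameter so the sketch stays self-contained.

```python
import numpy as np

def sky_model_shade(dem, lights, shade_fn):
    """Combine several hillshades computed under different lights.

    lights   : list of (azimuth_deg, altitude_deg, weight) triples describing a
               simplified sky model (illustrative values, not a CIE standard)
    shade_fn : a function (dem, azimuth, altitude) -> 2-D hillshade array,
               e.g. a wrapper around the hillshade() sketch above
    """
    total_weight = sum(w for _, _, w in lights)
    combined = np.zeros_like(dem, dtype=float)
    for azimuth, altitude, weight in lights:
        combined += weight * shade_fn(dem, azimuth, altitude)
    return combined / total_weight

# Example: one bright 'sun' plus two dim fill lights approximating diffuse sky light.
# shaded = sky_model_shade(dem, [(315, 45, 0.6), (255, 60, 0.2), (15, 60, 0.2)],
#                          lambda d, az, alt: hillshade(d, azimuth=az, altitude=alt))
```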
References
1. http://www.reliefshading.com/design/
2. Peterson, Gretchen N. GIS Cartography: A Guide to Effective Map Design. CRC Press, 2015.
3. E. Imhof, Cartographic Relief Presentation, Walter de Gruyter, 1982, reissued by ESRI Press, 2007, ISBN 978-1-58948-026-1, pp. 178-185.
4. Shaded Relief Maps, http://reynolds.asu.edu/azvt/shaded_info.htm
5. Patterson, Tom, "Resolution bumping GTOPO30 in Photoshop: How to Make High-Mountains More Legible," accessed 24 September 2012.
6. Tom Patterson, "See the light: How to make illuminated shaded relief in Photoshop 6.0," http://www.shadedrelief.com/illumination/ (accessed 30 October 2017). | http://wiki.gis.com/wiki/index.php/Shaded_Relief
In this tutorial we are going to make a caution street sign using basic shapes. We will use the Mesh tool and some blend modes to give realistic shading and 3D effects.
1 Create a basic shape
Go to File>New and open a new RGB document. Choose the Rounded Rectangle tool from the Rectangle tool list. Click once on the artboard and, in the dialog that appears, enter 500 pt for the Width and Height, and set the Corner Radius to 20 pt. Click OK. Since this is a typical caution sign, it needs to be yellowish-orange. Open the Swatches panel (Window>Swatches) and choose the preset yellow colour.
2 Copy and paste the shape
Using the Selection tool (V) pick the shape and position the cursor outside one of the corners until it changes to a curved
double arrow. Hold down the Shift key and rotate until it snaps into position. Now we have our basic sign shape and
colour. Press Command-C (PC: Ctrl-C) to copy this shape to the clipboard and then choose Edit>Paste in Front to place
the object directly in front of the original.
3 Add stroke and lock the selection
Go to the Control panel, set the Fill for the shape to None, Stroke to black, and Stroke Weight to 5 pt. In the Toolbox,
double-click on the Scale tool (S) to open the Scale dialog box. In the Uniform section, set the Scale to 90% and click
OK. Choose both the objects with the Selection tool and go to Object>Lock>Selection.
4 Add gradient mesh
Go to Edit>Paste in Front. Then open the Swatches panel, and choose a neutral 50% grey. We will use this layer to add
shading to the graphic. Move on to the Toolbox, choose the Mesh tool (U), and click inside the shape. Now choose white
colour from the Swatches panel. When you apply it the white colour fades out from the anchor point to the base grey
colour of the shape.
5 Adding more anchor points
Click in a different area of the shape to add a new anchor point. Now click directly on an existing gridline so that the
gridlines share anchor points. Pick a darker grey or black from the Swatches panel to change the colour in that part of the
mesh. You can add more points and adjust the shading around those points. You can also change the colour settings of the
existing points, if needed.
6 Blend the objects
Open the Transparency panel (Window>Transparency). Choose Select>All, then click on Normal to open the blend mode
drop-down menu, and choose Overlay.
7 Adding mounting holes
Remove the selection. In the Control panel set the Fill colour to black. Choose the Ellipse tool (L), hold the Shift key,
and drag out a small circle on your sign. Go to Edit>Copy then Edit>Paste. Choose the Selection tool and position the
holes on the sign. Open the Align panel (Window>Align), select both circles, and click the Horizontal Align Centre icon.
Now add some text or graphics to the sign. Make sure the Fill colour is black and position them in the frame.
8 Making the sign post
Start with the Rectangle tool. Draw a tall, narrow rectangle. Fill this shape with green colour. Open the Colour panel
(Window>Colour) and enter the values manually.
9 Apply the mesh
Select the objects and go to Object>Create Gradient Mesh. In the dialog box, set the number of Rows to 1 and Columns
to 3. Leave the appearance to Flat and click OK.
10 Shading
Using the Direct Selection tool (A), locate the third grid line from the left and click-and-drag around the entire length of
the line to select the top and bottom anchor points of the line, as indicated by a solid blue colour. Go to the Colour panel
and set these anchor points to R: 146, G: 255, and B: 146. Select the grid line to the immediate left using the Direct
Selection tool, and set the colour of these anchor points to a darker green using the Swatches panel. Navigate
Object>Arrange>Send to Back and then use the Selection tool to position the post behind the sign.
11 Add background
With the Rectangle tool draw a square shape and fill it with a solid blue colour. Go to Object>Create Gradient Mesh. Set
the number of Rows and Columns to 2 each. Set the highlight Appearance setting to Centre and Highlight to 75%. Click
OK. Position this background graphic over the sign graphic then navigate to Object>Arrange>Send to Back.
12 Rotate the sign
To rotate the finished sign, go to Object>Unlock All. Then Shift-click on the target icons to include all the layers that make up the sign graphic. Now select Object>Group, then go to Effect>3D>Rotate. In the 3D Rotate Options dialog box, set the Perspective to 75°. You can rotate the object in 3D by clicking on the cube and repositioning it
manually, or you can input the settings as shown below. Click OK. This puts the object at a more realistic angle. | http://www.maaillustrations.com/blog/illustrator-tutorials/creating-a-sign-post-using-the-mesh-tool-and-3d-effects/ |
Flat shading is lighting technique used in 3D computer graphics. It shades each polygon of an object based on the angle between the polygon's surface normal and the direction of the light source, their respective colors and the intensity of the light source. It is usually used for high speed rendering where more advanced shading techniques are too computationally expensive.
The disadvantage of flat shading is that it gives low-polygon models a facetted look. Sometimes this look can be advantageous though, such as in modelling boxy objects. Artists sometimes use flat shading to look at the polygons of a solid model they are creating. More advanced and realistic lighting and shading techniques include Gouraud shading and Phong shading.
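A minimal NumPy sketch of the per-polygon calculation described above is given below; the array shapes and the simple Lambert (cosine) lighting term are illustrative assumptions, not code from any particular engine.

```python
import numpy as np

def flat_shade(triangles, light_dir, base_color, light_color=(1.0, 1.0, 1.0)):
    """Return one color per triangle, computed from the face normal only.

    triangles : array of shape (N, 3, 3) -- N triangles, 3 vertices, xyz coordinates
    light_dir : direction *toward* the light source, length-3 vector
    """
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)

    # Face normal from the cross product of two triangle edges.
    e1 = triangles[:, 1] - triangles[:, 0]
    e2 = triangles[:, 2] - triangles[:, 0]
    normals = np.cross(e1, e2)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    # Lambert term: cosine of the angle between the normal and the light direction.
    intensity = np.clip(normals @ light_dir, 0.0, 1.0)

    # One flat color for the whole polygon -- hence the faceted look of low-poly models.
    return intensity[:, None] * np.asarray(base_color) * np.asarray(light_color)
```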
See also: computer graphics
The OpenGL® Programming Guide, Fifth Edition, provides definitive and comprehensive information on OpenGL and the OpenGL Utility Library. The previous edition covered OpenGL through Version 1.4. This fifth edition of the best-selling "red book" describes the latest features of OpenGL Versions 1.5 and 2.0, including the introduction of the OpenGL Shading Language.
You will find clear explanations of OpenGL functionality and many basic computer graphics techniques, such as building and rendering 3D models; interactively viewing objects from different perspective points; and using shading, lighting, and texturing effects for greater realism. In addition, this book provides in-depth coverage of advanced techniques, including texture mapping, antialiasing, fog and atmospheric effects, NURBS, image processing, and more. The text also explores other key topics such as enhancing performance, OpenGL extensions, and cross-platform techniques.
This fifth edition has been extensively updated to include the newest features of OpenGL Versions 1.5 and 2.0, including:
Most importantly, this edition discusses the OpenGL Shading Language (GLSL) and explains the mechanics of using this new language to create complex graphics effects and boost the computational power of OpenGL.
Dave Shreiner, a leading OpenGL consultant, was a longtime member of the core OpenGL team at SGI. He authored the first commercial OpenGL training course, and has been developing computer graphics applications for more than two decades.
The OpenGL graphics system is a software interface to graphics hardware. (The GL stands for Graphics Library.) It allows you to create interactive programs that produce color images of moving three-dimensional objects. With OpenGL, you can control computer-graphics technology to produce realistic pictures or ones that depart from reality in imaginative ways. This guide explains how to program with the OpenGL graphics system to deliver the visual effect you want.
This guide has 15 chapters. The first five chapters present basic information that you need to understand to be able to draw a properly colored and lit three-dimensional object on the screen.
The remaining chapters explain how to optimize or add sophisticated features to your three-dimensional scene. You might choose not to take advantage of many of these features until you’re more comfortable with OpenGL. Particularly advanced topics are noted in the text where they occur.
In addition, there are several appendices that you will likely find useful:
Finally, an extensive Glossary defines the key terms used in this guide.
The fifth edition of the OpenGL Programming Guide includes new and updated material, covering both OpenGL Versions 1.5 and 2.0:
This guide assumes only that you know how to program in the C language and that you have some background in mathematics (geometry, trigonometry, linear algebra, calculus, and differential geometry). Even if you have little or no experience with computer graphics technology, you should be able to follow most of the discussions in this book. Of course, computer graphics is a huge subject, so you may want to enrich your learning experience with supplemental reading:
Another great place for all sorts of general information is the Official OpenGL Web Site. This Web site contains software, documentation, FAQs, and news. It is always a good place to start any search for answers to your OpenGL questions:
http://www.opengl.org/
Once you begin programming with OpenGL, you might want to obtain the OpenGL Reference Manual by the OpenGL Architecture Review Board (also published by Addison-Wesley), which is designed as a companion volume to this guide. The Reference Manual provides a technical view of how OpenGL operates on data that describes a geometric object or an image to produce an image on the screen. It also contains full descriptions of each set of related OpenGL commands—the parameters used by the commands, the default values for those parameters, and what the commands accomplish. Many OpenGL implementations have this same material online, in the form of manual pages or other help documents, which are probably more up-to-date. Ther...
| https://book.pdfchm.net/opengl-r-programming-guide-the-official-guide-to-learning-opengl-r-version-2-5th-edition/9780321335739/
Oct 25 2009, by Paul Andrew. Which one is better for manipulating text, Photoshop or Illustrator? There is no clear and defined answer.
Adobe Illustrator Shortcuts
If you liked my previous article on Photoshop shortcuts, you’ll probably find this post useful. Here are 26 Illustrator shortcuts that can help you to speed up productivity. I use most of them (in fact, I can’t work without them). Most of shortcuts listed in this article aren’t documented in the software, so keep reading and you’re sure to find at least one new trick to put up your sleeve. Enjoy! Note: this article is written in Mac Illustrator CS3 format.
How To Draw A Glossy 2.0 Loading Bar Vector In Illustrator « A Love For Design » Web Design & Graphic Design Tutorials
In this tutorial we’re going to show you how to draw a glossy, slick-looking loading bar in Illustrator. The benefit of doing a project like this in Illustrator is that it uses vectorization. By vectorizing this drawing we can resize it to whatever width and height we want without the image losing quality (becoming pixelated). You can also transfer the vector from Illustrator over to Photoshop for easy editing. Creative Commons – Some Rights Reserved
Beautiful Fluffy Clouds in Photoshop - Christmas Tutorial
A few days ago I was watching a movie and I saw the Dreamworks logo. It's very well known and incredibly beautiful; that little kid sitting on the moon fishing. But what caught my attention was the clouds on the logo. They were so dramatic, and I thought that it would be a nice idea try to replicate the effect in Adobe Photoshop CS4. The first thing I did was search on the web for references, and to see if there was already a tutorial showing how to accomplish the effect.
22 Very Useful Adobe Illustrator Tutorials
30 Flat Circular Vector Icons (351 shares). It can be difficult to find exactly the right type of free icons online – but this icon set is sure to fill the gap. This freebie pack includes 30 flat vector icons created with Adobe Illustrator. Each flat icon is inside a circular background with a drop shadow effect. This is a very popular technique…
80 Best-Of Adobe Illustrator Tutorials, Brushes, .EPSs and Resources - Noupe Design Blog
Jan 18 2009. For months, we have been bookmarking interesting, useful and creative Adobe Illustrator tutorials and resources, so you can rest assured that you will have the necessary tools to get the job done. Thanks to this phenomenally vast collection of vector packs, brushes and patterns, you can now add dirt, rust, floral effects, swirls, mold and oil stains to your artwork and give it any look you want.
Drawing Day 2: Shading Techniques « LearningNerd
Today I read about some basic shading techniques, so here’s what I learned, all wrapped up in a three-minute video. Enjoy! I learned about these shading techniques from a bunch of sites, but these were the most helpful: Pen and Ink Lessons – Line and Value – explanations and examples of several shading techniques; Basic Pen Strokes for Ink Drawing – a quick overview of several shading techniques; Cross-hatching – examples of cross-hatching; Stipple Portraits – examples of stippling; Basic Contour Hatching – an introduction to using curved hatching lines to create the illusion of depth.
Adobe Illustrator Tutorials, Illustrator CS4 Tutorials, Vector Graphics Software Programs, Articles, Vector Images, Illustrator Brushes, Symbols, Web Graphics, Plug-ins, Plugins, Filters, CS3 - Web Site Resources, Website Tips, Websitetips.com
Adobe Illustrator Tutorials, Articles, Books, Software The Adobe Illustrator section provides annotated links to helpful, top quality, reliable Illustrator tutorials, vector graphics tutorials, tips, and more for Illustrator CS4, CS3, and more. Illustrator tutorials are for all levels, from newbies to advanced Illustrator users and cover how to use features within Illustrator (such as the Text Tool, Live Trace, the Pen Tool, Brushes, Mesh Tool), and how to create various effects with Illustrator, such as swirls, curls, swooshes, ribbons, and how to create vector graphic icons or navigation buttons and tabs, how to integrate Photoshop CS4 and Illustrator CS4 using Vector Smart Objects, and much more. You'll also find Adobe Illustrator brushes, symbols, vector images, Illustrator product information and help, discussion lists, online forums on Adobe Illustrator,
18 Insanely Addictive Font Games
Typography helps you engage your audience and establish a distinct, unique personality on your website. Knowing how to use fonts to build character in your design is a powerful skill, and exploring the history and use of typefaces, as well as typographic theory, can help. But it doesn't have to be boring. This selection of online and mobile font games will help test and expand both your knowledge and identification skills.
Adobe Illustrator Tutorials – Best Of - Smashing Magazine
Over the years Adobe Illustrator has become the standard application when it comes to illustration design. Artists, illustrators and graphic designers use Illustrator to create vector-based graphics which — contrary to raster-based editors such as Adobe Photoshop — can be easily rescaled without loss of quality. For example, Illustrator is often used to quickly transform hand-drawn sketches on a sheet of paper into lively and colorful digital images. However, mastering Adobe Illustrator isn’t easy, and creating professional illustrations requires both time and patience.
30 Fresh & Useful Adobe Illustrator Tutorials & Neat Tips - Noupe Design Blog
Apr 01 2009. Adobe Illustrator is a powerful tool for illustrating various elements one can use for web pages and print design. However, it’s important to know what to do in order to use its tools effectively and achieve certain effects. Step-by-step tutorials can provide a lot of help, which is why we spent a lot of time searching for fresh, new, high-quality tutorials; the result was 30 remarkable Illustrator tutorials and tips. Let’s take a look at some of the best and freshest Adobe Illustrator tutorials we’ve found on the Web so far. Gradient Strokes
| http://www.pearltrees.com/u/804877-excellent-illustrator
In painting, drawing, etc., to create a three-dimensional image on a flat surface through the use of color, shading, etc.
The definition of an image is a representation of something or someone or a photograph or an idea you're picturing in your head or the way you or others think of you.
plaster-of-paris
A form of calcium sulfate derived from gypsum. It is mixed with water to make casts and molds.
plaster bandage
bandage consisting of a firm covering (often made of plaster of Paris) that immobilizes broken bones while they heal
Find another word for plaster-cast. On this page you can discover 5 synonyms, antonyms, idiomatic expressions, and related words for plaster-cast, like: cast, model, image, plaster-of-paris and plaster bandage. | https://thesaurus.yourdictionary.com/plaster-cast
This paper generalizes Crow's procedure for computing shadow volumes: computing the shadow volumes caused by the end points of a linear source results in an easy determination of the regions of penumbrae and umbrae on a face prior to the shading calculation.
A virtual light field approach to global illumination
- Mathematics, Proceedings Computer Graphics International, 2004.
- 2004
An algorithm that provides real-time walkthrough for globally illuminated scenes that contain mixtures of ideal diffuse and specular surfaces is described, offering a global illumination solution for real-time walkthrough even on a single processor.
A Rendering Pipeline for Street Lighting Simulation
- Computer Science
- 1994
This paper presents an application in the field of illumination and traffic engineering with the definition of its rendering pipeline and introduces an extension of the radiosity method to incorporate physical surface light sources with intensity distribution curves and a set of image transformations for the correct display of the results.
Illumination and Reflection Maps: Simulated Objects in Simulated and Real Environments
- Physics
- 1984
Blinn and Newell introduced reflection maps for computer simulated mirror highlights. This paper extends their method to cover a wider class of reflectance models. Panoramic images of real, painted…
Parameterized Ray-tracing
- Computer ScienceSIGGRAPH '89
- 1989
A new technique is introduced to speed up the generation of successive ray traced images when the geometry of the scene remains constant and only the light source intensities and the surface properties need to be adjusted.
Efficient object-based hierarchical radiosity methods
- Computer Science, Ausgezeichnete Informatikdissertationen
- 2000
It is shown how the clustering technique can be improved without loss in image quality by applying the same data structure for both the visibility computations and the efficient radiosity simulation.
A Light Hierarchy for Fast Rendering of Scenes with Many Lights
- Computer Science, Comput. Graph. Forum
- 1998
A new data structure in the form of a light hierarchy for efficiently ray‐tracing scenes with many light sources, where an octree is constructed with the point light sources in a scene by means of a virtual light source.
Hierarchical view-dependent structures for interactive scene manipulation
- Computer Science, SIGGRAPH
- 1996
This paper presents a system that efficiently detects and recomputes the exact portion of the image that has changed after an arbitrary manipulation of a scene viewed from a fixed camera.
Vertex shading of the three-dimensional model based on ray-tracing algorithm
- Computer Science, SPIE/COS Photonics Asia
- 2016
A novel ray tracing algorithm is presented to color and render the vertices of the 3D model directly, improving rendering efficiency; the rendering time is independent of the screen resolution.
Interactive reflections on curved objects
- Computer Science, SIGGRAPH
- 1998
This paper presents a novel method for interactive computation of reflections on curved objects, in which the 3-D problem can be reduced to a 2-D one that is solved more accurately and efficiently.
References
(Showing 1-10 of 21 references)
Illumination for computer generated images
- Mathematics
- 1973
A new model for the shading of computer-generated images of objects in general and of polygonally described free-form curved surfaces in particular is described, which takes into consideration the physical properties of the materials of which the surfaces are made.
Casting curved shadows on curved surfaces
- Computer Science, SIGGRAPH '78
- 1978
A simple algorithm is described which utilizes Z-buffer visible surface computation to display shadows cast by objects modelled of smooth surface patches, and is contrasted with a less costly method for casting the shadows of the environment on a single ground plane.
Illumination for computer generated pictures
- Computer Science, Commun. ACM
- 1975
Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.
Transparency for computer synthesized images
- Physics, SIGGRAPH '79
- 1979
If a few assumptions are made about the geometry of each object and about the conditions under which they are viewed, a much simpler algorithm can be used to approximate the refractive effect.
3-D Visual simulation
- Physics
- 1971
This paper describes a visual simulation technique by which fully computer-generated perspective views of three-dimensional objects may be produced. The method is based on a relatively simple…
Models of light reflection for computer synthesized pictures
- Physics, SIGGRAPH '77
- 1977
A more accurate function for the generation of highlights, based on experimental measurements of how light reflects from real surfaces, is presented; it differs from previous models in that the intensity of the highlight changes with the direction of the light source.
Some techniques for shading machine renderings of solids
- Art, AFIPS '68 (Spring)
- 1968
If techniques for the automatic determination of chiaroscuro with good resolution should prove to be competitive with line drawings, and this is a possibility, machine generated photographs might replace line drawings as the principal mode of graphical communication in engineering and architecture.
A subdivision algorithm for computer display of curved surfaces.
- Computer Science
- 1974
A method for producing computer shaded pictures of curved surfaces defined by three-dimensional curved patches; images can be 'mapped' onto the patches, thus providing a means for putting texture on computer-generated pictures.
Hierarchical geometric models for visible-surface algorithms
- Computer Science, SIGGRAPH '76
- 1976
The thesis of the research is that the geometric structure inherent in the definition of the shapes of three-dimensional objects and environments must be used not just to define their relative motion and placement but also to assist in solving many other problems of systems for producing pictures by computer.
Simulation of wrinkled surfaces
- Computer Science, SIGGRAPH '78
- 1978
A method of using a texturing function to perform a small perturbation on the direction of the surface normal before using it in the intensity calculations yields images with realistic looking surface wrinkles without the need to model each wrinkle as a separate surface element. | https://www.semanticscholar.org/paper/An-improved-illumination-model-for-shaded-display-Whitted/752edecfa8560a39b34b6e64fba977d4d25ce890 |
"Three shades and two uses" (cel shading, or toon shading) for characters is a non-photorealistic rendering style. The technique lays flat blocks of color on top of the base color of a 3D object, so the object reads as three-dimensional while keeping a 2D look. Simply put, the model is built in 3D and then rendered with a 2D color-block effect.
2D rendering of 3D characters is a common technique in 2D games. The 3D character is first modeled with 3D tools and rendered into a 2D picture, and the 2D picture is then loaded into the game, giving the 2D game a convincing 3D look.
A game made this way is therefore essentially a 2D game, but the production process (character model and scene model production) relies on 3D technology.
What makes three shades and two uses different from traditional rendering is its non-realistic lighting model. Traditional smooth lighting values are calculated for each pixel to create smooth transitions; with this technique, however, shadows and highlights are displayed as blocks of color rather than as a smoothly graded blend, making the 3D model look flatter.
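A minimal sketch of that quantization step is shown below, assuming a per-pixel (or per-vertex) Lambert term already stored in a NumPy array; the band thresholds and the three intensity levels are illustrative choices meant to mimic the "three shades" described above.

```python
import numpy as np

def toon_shade(lambert, bands=(0.25, 0.65), levels=(0.2, 0.6, 1.0)):
    """Quantize a smooth diffuse term into flat color blocks (cel shading).

    lambert : array of N·L values in [0, 1], one per pixel or vertex
    bands   : thresholds separating shadow / mid-tone / highlight
    levels  : the flat intensity used inside each band (the 'three shades')
    """
    lambert = np.asarray(lambert, dtype=float)
    out = np.full_like(lambert, levels[0])      # darkest shade by default
    out[lambert >= bands[0]] = levels[1]        # mid-tone block
    out[lambert >= bands[1]] = levels[2]        # highlight block
    return out

# A traditional shader would return `lambert` directly, giving smooth gradients;
# returning the banded `out` instead produces the flat, hand-drawn look.
```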
Consoles now have more rendering power than ever before, but a good video game does not necessarily require very realistic graphics. Some of the most popular games of recent years, such as Animal Crossing: New Horizons and Fall Guys, and arguably many other famous games, consciously or unconsciously avoid realistic graphics, opting instead for flat effects like three-shades-and-two-uses (toon) rendering. | https://www.sheergame.net/three-shades-and-two-uses-roles-product/
Protecting Steel Bridges
The Norwegian Public Roads Administration (NPRA) has been using zinc thermal spray with a paint topcoat (duplex coating) for corrosion protection of steel bridges since the 1960s. They consider this coating strategy to be a major success and have examples of bridges that have been exposed for 40 years without any maintenance. According to NPRA, “the optimum time for maintenance of duplex coatings is when the topcoat is degraded, before the zinc coating starts to corrode. This will give the lowest life cycle costs.” This first maintenance operation is typically performed after about 30 years, much longer than the lifetime of a three-layer paint system, because the synergistic effect of a duplex coating significantly increases its lifetime.
Another convincing argument in favor of thermal spraying bridges comes from the U.S. Federal Highway Administration (FHWA). In the FHWA evaluation, 47 coatings, including metalized coatings of sealed and unsealed aluminum, zinc, and zinc-15 aluminum, were compared with liquid paint coatings and various combinations of epoxies and urethanes. The study concluded: “Metalized systems consistently provided the best corrosion protection performance. All metalized coatings tested showed no corrosion failure in the aggressive, salt-rich environments over the 5 – 6.5 year exposure periods.”¹
1. Federal Highway Administration’s Report FHWA-RD-96-058 “Environmentally Acceptable Materials for the Corrosion Protection of Steel Bridges” January 1997. | https://thermalsprayzinc.zinc.org/2017/08/29/bridges/ |
This paper presents a unique approach to examine the performance of constructed concrete bridges in cold regions, based on a combined statistical analysis and geographic information system (GIS) method. A total of 3,013 bridges and 1,126 bridge decks selected from the State of North Dakota (one of the coldest regions in the United States) are analyzed. Detailed technical information of the examined bridges is obtained from the National Bridge Inventory (NBI) database constructed between 2006 and 2007. A statistical analysis is conducted to identify the critical sources of bridge deterioration in cold regions, in particular for concrete bridges, using the ordinary least-squares multiple regression method. The performance of concrete bridges under cold weather is in general satisfactory, while the deck slabs are the critical structural members and may require regular maintenance and repair. The year built and the presence of water are the most critical factors contributing to bridge deterioration. A case study is presented based on a 29-span bridge consisting of cast-in-place deck slabs supported by prestressed concrete and steel plate girders. Detailed inspection results are reported and adequate maintenance methods are discussed. | https://yonsei.pure.elsevier.com/en/publications/performance-evaluation-and-maintenance-of-concrete-bridges-in-col
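As a rough illustration of the kind of ordinary least-squares fit described in the abstract, the sketch below regresses a deck condition rating on bridge age and a water-presence flag; the data are synthetic placeholders, and the variable names are assumptions for illustration only, not the NBI fields used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for NBI-style records: bridge age (years) and a 0/1 flag
# for water under the bridge, with a made-up deck condition rating response.
n = 200
age = rng.uniform(0, 60, n)
water = rng.integers(0, 2, n)
rating = 9.0 - 0.05 * age - 0.7 * water + rng.normal(0.0, 0.5, n)

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones(n), age, water])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)

intercept, age_effect, water_effect = coef
print(f"intercept={intercept:.2f}, per-year effect={age_effect:.3f}, "
      f"water effect={water_effect:.2f}")
```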
America's roads, highways and freeways invariably require bridges to cross over canyons, rivers and other uneven terrain, and as strong as the bridges may seem to be at the time of their construction, the best of engineers have not yet been able to build corrosion-proof bridges. There are numerous studies that have been conducted as to how to best repair the damage to concrete slab / steel girder bridges, and their results, taken in their entirety, provide solid background for further research into these issues.
When is it time to apply repair technologies to an aging bridge? An example is provided by University of Illinois at Urbana-Champaign engineers. Bernhard, et al., propose the use of wireless technologies (a wireless embedded sensor system) to detect corrosion in concrete girders. The sensing mechanisms that are embedded into the girders are "active acoustic transducers," and through antennas sticking out of the girder, information about what is going on inside can be transmitted to engineers on a continuous basis.
There are currently over 600,000 highway bridges in the United States, and of those, "many" are "severely deteriorated" and in desperate need of major infrastructure repair, according to an article in the Journal of Structural Engineering (Enright, et al., 1998). Enright explains, "Experience has demonstrated that highway bridges are vulnerable to damage from environmental attack," including freeze-thaw, salt corrosion, and "alkali-silica reaction." The Enright article provides research into "time-variant reliability methods" regarding "bridge life-cycle cost prediction."
Bridges are naturally expected to - and designed to - function safely over "long periods of time," Enright argues. And during those years of service the concrete bridges are fully expected to stay sturdy notwithstanding "aggressive" and "changing" environments. The bridge that was analyzed in this research is located near Pueblo, Colorado; it was a reinforced concrete T-beam (built in 1962; bridge L-18-BG). This bridge is made of three 9.1-meter (or 30 ft.) "simply supported spans."
Each of the three spans has five girders "equally spaced" 8.5 feet apart. The bridge provides two lanes of traffic heading north; the most intense random moment (pulse) for the bridge is when two heavily loaded trucks drive over it side-by-side. Once "corrosion initiation time" has begun, the reinforcing cross-sectional area begins to decrease; the rate of decrease depends upon the number of reinforcement bars that are indeed corroding. Also, the writer explains, after thorough examination, failure can happen when "the limit state of bending failure by yielding of steel of any one (or more) of the girders is reached." The need for competent repairs at this time is obvious and crucial.
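A common simplified way to express that post-initiation loss of reinforcing steel is a uniform-corrosion model in which each bar's diameter shrinks linearly once corrosion begins; the sketch below uses that textbook model with illustrative numbers and is not Enright's specific formulation.

```python
import numpy as np

def remaining_steel_area(t, d0_mm, n_bars, t_init, rate_mm_per_yr):
    """Total rebar cross-sectional area (mm^2) at time t (years).

    Uniform corrosion: after initiation at t_init, each bar's diameter shrinks
    by 2 * rate each year, since corrosion attacks the whole bar perimeter.
    """
    t = np.asarray(t, dtype=float)
    loss = 2.0 * rate_mm_per_yr * np.clip(t - t_init, 0.0, None)
    d = np.clip(d0_mm - loss, 0.0, None)
    return n_bars * np.pi * d**2 / 4.0

# Illustrative values: five 25 mm bars per girder, corrosion starting at year 15,
# 0.05 mm/yr of radius loss per bar.
years = np.array([0, 15, 30, 45, 60])
print(remaining_steel_area(years, d0_mm=25.0, n_bars=5, t_init=15.0,
                           rate_mm_per_yr=0.05))
```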
Researchers M. Tavakkolizadeh and H. Saadatmanesh, in the Journal of Structural Engineering, suggest that conventional applications used in strengthening "substandard bridges" are not only "labor intensive," but they cost more and they take more time to apply to the deteriorating bridge. The National Bridge Inventory (NBI) showed that in 2003 there were 81,000 "functionally obsolete bridges" in the U.S. Moreover, of those 81,000 bridges more than 43% are constructed of steel. The main problems associated with steel bridges, the authors write, are that they corrode, are not maintained properly, and suffer "fatigue" over the years.
What to do with these failing and faulty bridges? It is recommended that steel-concrete composite bridges be strengthened by introducing fiber-reinforced plastics (FRP), made of glass, carbon, or Kevlar fibers "...placed in a resin matrix," the article continues. When FRP is applied to a deteriorating girder, the laminates weigh "less than one fifth of the steel and are corrosion resistant."
The authors explain that traditional rehabilitation of bridges uses five techniques: one, simple strengthening of members; two, placing an additional member (girder) in the bridge; three, "developing composite action"; four, "producing continuity at the support" structures; and five, "post-tensioning." The downside? "They do not eliminate the possibility of reoccurrence."
Tavakkolizadeh offers the example of welded steel cover plates used to beef up existing structures. The main problem with this kind of solution is that the welded plates are subject to the same fatigue that the original girder suffered. There may be "galvanic corrosion between the plate and existing member and attachment materials" (Tavakkolizadeh, et al., 2003).
The authors tested three large-scale girders strengthened with "pultruded carbon fiber sheets": three identical girders were strengthened with "one, three, and five layers" of CFRP sheets, respectively. Because delamination was in evidence, viscous epoxy was used to bond the laminate to the steel surface. The girders were then put through a series of tests at varying load levels. The conclusions showed that when steel-concrete composite girders are retrofitted with epoxy-bonded CFRP laminates, their capacity to carry a heavy load increases by 44% with one layer, 51% with three layers, and 76% with five layers.
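To see what those reported percentages mean in absolute terms, the gains can be applied to a baseline capacity. The 180 kN unstrengthened capacity below is an assumed figure for illustration, not a value from the study.

```python
# Reported capacity gains for epoxy-bonded CFRP retrofits (Tavakkolizadeh &
# Saadatmanesh); the unstrengthened baseline capacity is an assumed number.
gains = {1: 0.44, 3: 0.51, 5: 0.76}   # CFRP layers -> relative capacity increase
baseline_kn = 180.0                    # hypothetical unstrengthened capacity

for layers, gain in gains.items():
    print(f"{layers} layer(s): {baseline_kn * (1 + gain):.0f} kN (+{gain:.0%})")
```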
Writing in the Journal of Structural Engineering, C.Q. Li sets out to develop new models of structural resistance deterioration for use in "whole life performance assessment of corrosion-affected concrete structures." He notes dissatisfaction with the results of 30 years of research, pointing to "Concrete in the Ocean" in the UK, BRITE in Europe, and the SHRP studies in North America. These studies, Li claims, did not go deeply enough into the effect that corrosion has on structural deterioration.
Li's model goes beyond the existing Tuutti model ("the well-known" model that assesses and predicts the service life of corrosion-affected concrete structures). In his tests, Li used a total of 30 specimens of varying concrete composition (different water-cement ratios and cement types). He put the samples under "simultaneous loading and salt spray" conditions, simulated in a large "corrosive environmental chamber" constructed exclusively for Li's research purposes.
It was determined that the corrosion growth was directly related to "crack distribution" as well as the pattern within the test sample itself. What did this prove? Corrosion is "essentially a local activity at the cracked sections of RC members," Li indicates, which is "very important to structural engineers" concerned with the "cross-sectional capacity of structural members."
The variables that Li found difficult to pin down precisely (he used the "phenomenological approach") are what authors M.B. Anoop and K. Balaji Rao call "fuzzy variables." They attempt to establish a model which compares the times "to reach different damage levels" for a "severely distressed beam." The beam is located in the Rocky Point Viaduct, and the point of the research is to provide a model that helps determine when to schedule inspections for reinforced concrete girders attacked by "chloride-induced corrosion."
Why use "fuzzy" variables? For one, the authors contend, fuzzifying offers "greater generality"; two, "higher expressive power"; three, an "enhanced ability to model real world problems"; and four, "a methodology for exploiting the tolerance for imprecision."
Anoop and Rao insist that a life assessment procedure built on their strategy offers a chance to account for environmental aggressiveness factors. The Rocky Point Viaduct is located near Port Orford, Oregon, about 25 miles due east of the Pacific Ocean. The viaduct has five spans (each with a length of 114 m and a deck width of 10.6 m). It was built in 1955; problems were first reported through maintenance inspection 12 years later, in 1967. In May 1968 cracking was noticed on the concrete beams, and a year later, in January 1969, inspectors noticed "badly rusted rebars" and "spalling" of the concrete.
The first repairs were made in September 1969; in May 1976 a "substantial" portion of one section had been lost due to corroded rebars; and by February 1991 the decision was made to replace the structure entirely, the authors explain. What the authors offer in this article is a study focused on the beam on the extreme western edge of the viaduct, the part closest to the ocean and most fully exposed to the "impact of the weather from the ocean." At this point in their article the authors state that by researching the failed portions of this bridge, applying their strategies - combined with the definitions of damage levels - they can (with reasonable accuracy) predict the window of time at which various levels of damage can be expected on future bridges using concrete girders. That is to say, they can predict when repairs will be needed.
What's the advantage of using fuzzy sets? Using this strategy, research engineers can project the known, reported times of corrosive impacts for different levels of damage onto future bridges; that gives them advance notice of when bridge repairs should commence.
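Anoop and Rao's full formulation is not reproduced in the essay above, so the following is only a toy sketch of the underlying idea: treat "time to reach a damage level" as a fuzzy number and defuzzify it to get a crisp inspection deadline. All numbers and the triangular membership shape are invented for illustration.

```python
# Toy fuzzy estimate of "years until a cracking damage level is reached",
# using a triangular membership function and centroid defuzzification.
def triangular(x, a, b, c):
    """Membership grade of x in the triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centroid(a, b, c, steps=1000):
    """Crisp (defuzzified) value of the fuzzy time, by the centroid rule."""
    xs = [a + (c - a) * i / steps for i in range(steps + 1)]
    ws = [triangular(x, a, b, c) for x in xs]
    return sum(x * w for x, w in zip(xs, ws)) / sum(ws)

# Invented expert estimate: cracking plausible 8-20 years after construction,
# most plausible around year 12 -> schedule inspection before the crisp value.
print(round(centroid(8.0, 12.0, 20.0), 1))   # ~13.3 years
```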
"Steel Girder/Concrete Slab Bridge Repair Methods." 25 April 2008. Web. 26 April 2019. <https://www.essaytown.com/subjects/paper/steel-girder-concrete-slab-bridge-repair/3434603>.
"Steel Girder/Concrete Slab Bridge Repair Methods." Essaytown.com. April 25, 2008. Accessed April 26, 2019. | https://www.essaytown.com/subjects/paper/steel-girder-concrete-slab-bridge-repair/3434603 |
What is a Stressed Ribbon Bridge?
Stressed ribbon bridges are a tensile structure that is based upon a suspension bridge mechanism but the deck is directly loaded on the tensioned catenary-shaped cables suspended in between two abutments.
Also known as 'catenary bridges', these bridges are stressed in tension, unlike simple spans, and this tensioning increases the structural stiffness. For multiple spans, the bridge supports hold upward-thrusting arcs which permit a change in grade between the spans.
Stressed ribbon bridges are generally constructed using a special type of reinforced prestressed concrete where steel tensioning cables are used as reinforcement in the concrete. These high-strength cables combined with prestressed concrete provide adequate stiffness to maintain the shape of the bridge against oscillations and overturning movements.
To allow the movement of vehicles on stressed ribbon bridges, the concrete is stressed in compression, which gives the bridge structure an adequate degree of stiffness and avoids excessive structural bending.
This is one of the oldest bridge forms, long used as a cost-effective alternative to heavy and expensive bridge constructions in European countries. Stressed ribbon bridges have many advantages: they can be easily assembled and constructed, demand the least falsework, have very low self-weight, require little long-term maintenance, and blend well with the environment.
Proceedings Papers
Wire Arc Spraying of Zinc as Effective Method to Produce Anodes for the Corrosion Protection of Reinforced Concrete Structures
ITSC 2002, Thermal Spray 2002: Proceedings from the International Thermal Spray Conference, 27-31, March 4–6, 2002,
Abstract Corrosion damage to concrete structures is widespread and usually requires extensive repair and maintenance work. The installation of active corrosion protection systems is therefore a useful tool for preventing corrosion. The application of thermally sprayed zinc layers on the concrete surface, which are used as an anode to protect the steel reinforcement, was transferred from steel construction. Such systems are advantageous because of their simplicity and low cost. These systems can already be used in new construction and after the repair of concrete structures. In this paper, novel results from long-term studies are presented. The results of the investigations suggest that sprayed zinc coatings are practically always suitable whenever structural elements are to be protected by cathodic protection. Paper includes a German-language abstract.
Proceedings Papers
Arc Spraying for the Production of Multi-Component Zinc Anodes for Corrosion Protection of Concrete Structures
ITSC2000, Thermal Spray 2000: Proceedings from the International Thermal Spray Conference, 1045-1049, May 8–11, 2000,
Abstract Structural damage in concrete structures caused by corrosion is widespread and demands comprehensive repair work. The additional installation of an active corrosion protection system for structures located in unfavourable conditions is imperative. Thermally sprayed coatings serving as anodes have been adapted from the cathodic protection of steel. These systems have gained attention as they offer advantages in efficiency and lower cost. Thermally sprayed zinc coatings are applied to new steel reinforced concrete structures or those which are subject to re-structuring. In this contribution, the capability of various systems is examined by field tests in a marine structure and in different laboratory tests under natural and under accelerated conditions.
Proceedings Papers
ITSC1997, Thermal Spray 1997: Proceedings from the United Thermal Spray Conference, 141-150, September 15–18, 1997,
Abstract Thermal-sprayed titanium coatings were investigated as anodes for impressed current cathodic protection systems for steel reinforced concrete structures. The coatings were applied by twin-wire thermal-spraying using air and nitrogen as atomizing gases. The coatings were non-homogeneous due to oxidation and nitridation of the molten titanium with the atmospheric gases oxygen and nitrogen. The primary coating constituents were α-Ti (containing interstitial nitrogen and oxygen), γ-TiO and TiN. Nitrogen atomization produced coatings with less cracking, more uniform chemistry, and lower resistivity than air atomization.
Proceedings Papers
ITSC1997, Thermal Spray 1997: Proceedings from the United Thermal Spray Conference, 151-160, September 15–18, 1997,
Abstract Steel-reinforced concrete slabs coated with a thermal-sprayed titanium anode were used to simulate impressed current cathodic protection systems. The titanium anodes were activated with a cobalt nitrate catalyst and subjected to accelerated electrochemical aging representing approximately 23 years at 0.00215 A/m 2 (0.2 mA/ft 2 ). During the aging experiment, current was kept constant at 0.0215 A/m 2 (2 mA/ft 2 ), voltages were recorded, and water was applied periodically when voltages exceeded compliance levels. At the end of the experiment, coating resistivity, adhesion strength, and titanium-concrete interfacial chemistry were determined. Results show that the coating resistivity increases and adhesion strength decreases with electrochemical aging. Voltages for the slabs varied with the relative humidity. Electrochemical reactions at the titanium-concrete interface caused deterioration of the cement paste by leaching of calcium compounds. Accelerated aging results are compared to similar ones for an uncatalyzed titanium anode and to results from the Depoe Bay Bridge.
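The relationship between the accelerated test and the service period it represents follows from simple charge equivalence (the same total current times time per unit area, passed faster). The short calculation below only restates the currents and the 23-year figure given in the abstract.

```python
# Charge-equivalence view of the accelerated aging test described above.
service_current = 0.00215   # A/m^2 (0.2 mA/ft^2) assumed long-term CP current density
test_current    = 0.0215    # A/m^2 (2 mA/ft^2) constant current during the test
service_years   = 23.0      # service exposure the test is meant to represent

acceleration = test_current / service_current     # 10x
test_years = service_years / acceleration          # ~2.3 years of testing
print(f"acceleration: {acceleration:.0f}x, test duration: {test_years:.1f} years")
```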
Proceedings Papers
International and National Thermal Spray Standards Program with Comment on Those for the Corrosion Protection of Steel and Reinforced Concrete
ITSC1996, Thermal Spray 1996: Proceedings from the National Thermal Spray Conference, 203-205, October 7–11, 1996,
Abstract A review of selected national and international thermal spraying guides and specifications for the preservation of steel and reinforced concrete using thermal spray coating of aluminum, zinc and their alloys is presented. The work program and current status of the US national organizations contributing to and developing test methods and process standards are summarized along with those of ISO/TC 107/SC 5. The Secretariat of the ISO/TC 107/SC 5, Thermal Spraying, was transferred from AFNOR, France, to ANSI, US, in June 1995. ANSI, in turn, designated AWS to be its delegate in thermal spray matters. The work program of the newly formed SSPC/NACE/AWS Tri-Society Committee on thermal spray coatings for the corrosion protection of steel is summarized.
Proceedings Papers
Use of Combined Plasma Cleaning/Coating Process for the Cathodic Protection of Reinforced Concrete Structures
ITSC1996, Thermal Spray 1996: Proceedings from the National Thermal Spray Conference, 217-220, October 7–11, 1996,
Abstract Considering the number of concrete structures such as bridges, overpasses, trestles etc., their maintenance and repair form a significant part of the highway administration budget. Cathodic protection is becoming more popular because it helps reduce maintenance and renovation costs. Arc-sprayed zinc and zinc/aluminum alloy coatings are widely used in cathodic protection systems. The surface preparation of concrete is critical to the quality of coating and hence, the quality of the cathodic protection. Typically, sandblasting with surface brushing is used as preparation. This method has several technical, economic and ecological deficiencies: weather/humidity limitations, difficult removal of organic contaminants from the surface, an irrevocable loss of blasting media, high dust level, etc. An objective of this proceeding is to describe a plasma cleaning process as a successful alternative to sandblasting and to show the possibilities of combined plasma cleaning/coating process for the cathodic protection of reinforced concrete structures. This environmentally friendly process will result in better anodic coatings at lower cost and fewer concrete structure repairs. | https://dl.asminternational.org/itsc/search-results?page=1&tax=1647 |
This is an introduction to the factors to consider in inspecting and repairing steel bridges.
Course Summary
This course will help you to address a variety of bridge inspection situations that may be encountered.
Course Description
This course discusses steel bridge maintenance and repair. Preventive maintenance of steel bridge components consists mainly of measures to protect the steel from corrosion. When deicing salt is added to the electrolyte, there is a dramatic increase in the rate of corrosion of the structural steel. Corrosion is usually easily spotted by visual inspection. Protection from corrosion can take various forms. The physical condition of the structure must first be determined by a detailed inspection. The structural capacity of the steel should be known. Once the physical condition of the bridge is evaluated, a determination of whether damaged bridge components should be repaired or replaced is made.
Course Outline
Preventive Maintenance for Corrosion
1. INTRODUCTION
2. STRUCTURAL STEEL
Repair and Strengthen
3. GENERAL REPAIR
4. CONNECTIONS
5. REPAIR OF STRUCTURAL MEMBERS
Member Replacement
6. TENSION MEMBERS
7. COMPRESSION MEMBERS/COLUMNS
8. BEAMS
Upgrade Steel Bridges
9. CREATION OF A COMPOSITE ACTION
10. POSTTENSIONING
11. TRUSS SYSTEMS
Learning Objectives
Upon completion of this course you will:
• Learn how many state highway departments have indicated poor performance from their weathering bridges;
• Learn the temperature recommendations for applying paint to steel bridge elements;
• Learn approaches to making repair decisions which must carefully weigh the long-term operational requirements and existing environmental factors that can help accelerate corrosion prior to evaluating initial and life cycle costs;
• Learn how to properly analyze and design repairs that involve adding metal to steel bridge members and elements;
• Learn how members to be strengthened must be investigated for any decrease in strength resulting from temporary removal of rivets, cover plates, or other parts; and
• How to repair loose or missing rivets.
Intended Audience
This course is intended for civil engineers and bridge maintenance managers interested in learning about bridge inspection and repair techniques.
Benefit for Attendee
This course will give civil engineers and bridge managers advice and recommendations on bridge repair techniques. | http://engineeringcontinuingeducationpdh.com/462/41/structural-engineering/steel-bridge-maintenance-3-pdh-detail.html |
It is widely understood that the use of galvanic anodes within concrete patch repairs can extend repair life and prevent adjacent corrosion initiation when the surrounding concrete is contaminated with chlorides. A further sign of the acceptance of galvanic anode technology is the publication of the latest Design Manual for Roads and Bridges (DMRB). The DMRB incorporates the Manual of Contract Documents for Highway Works (MCHW) which includes Specification Series 5700 and, in particular, Clause 5712 – a standard specification for the use of galvanic anodes within patch repairs. This long-awaited update removes the requirement for highways engineers to prepare a Departure from Standard prior to specifying galvanic anodes.
The Design Manual for Roads and Bridges also incorporates Standard for Civils Work CS462 which spells out the options for dealing with sound but chloride contaminated concrete when carrying out patch repairs to concrete bridge structures: leave in place and continue to monitor, remove the contaminated concrete, install an impressed current cathodic protection system or incorporate galvanic anodes into patch repairs to prevent corrosion initiation and associated concrete spalling around the repair boundary. In many cases chloride contamination is localised and may be isolated to areas of low cover, splash zones or joints. In such cases impressed current cathodic protection may be considered a disproportionate response with high initial costs and burdensome on-going maintenance requirements. Breaking out all contaminated concrete can also prove to be expensive and disruptive, especially if propping is required. In contrast, the inclusion of galvanic anodes into patch repairs is quick, inexpensive and maintenance free.
CPT PatchGuard galvanic anodes are described as ‘Type 1b’ in Specification Series 5700 Clause 5712. Type 1b anodes are grouted into small drilled holes around the inside boundary of patch repairs so that the anode is located within the contaminated concrete around the patch. In contrast, ‘Type 1a’ anodes are tied directly to the exposed steel in the broken-out area and end up being embedded within the patch repair material. Independent research by Loughborough University, ‘Site performance of galvanic anodes in concrete repairs’, demonstrated that Type 1b anodes have a more significant, and longer lasting, polarising effect on the reinforcing steel around the patch repair than Type 1a anodes. This is because Type 1b anodes are located in a lower resistivity environment within the host concrete and closer to the steel requiring protection. Because they are installed into drilled holes, Type 1b anodes are also more able to protect inaccessible steel such as that typically found around half joints.
Many Highways England structures are already benefitting from PatchGuard protection, having been specified through the Departure from Standard procedure, or by use of a draft version of clause 5712. Recent projects include Windsor Road Bridge on the M4 and Portbury Railway Tunnel on the M5. Contact CPT for further details.
Those with responsibility for managing highway structures can now quickly specify galvanic anode patch repair protection by incorporating Clause 5712 from the Manual of Contract Documents. DMRB, CS462 and MCHW Series 5700 are also approved by the devolved administrations in Scotland, Wales and Northern Ireland.
CPT's technical team are on hand to offer advice and assistance, including a BS EN 15257:2017 Level 4 cathodic protection design service.
Our company specializes in the professional removal of existing corrosion and protection from further corrosion.
The harsh environmental conditions and stringent Regulatory and Client Safety requirements in Offshore Facilities necessitate appropriately trained professionals and an uncompromising attitude to safety.
Recent technical and regulatory developments have rendered renewable sources of energy economically competitive and created a whole new range of business opportunities.
In the field of corrosion protection, we provide sound maintenance measures for steel surfaces in a cost-effective and environmentally friendly manner. We engage with different sectors, such as industrial facilities, hangars, bridges, pipelines, tanks, and containers.
With our long history in the Marine sector and our skilled professional corrosion protection teams, we are ready to meet even the most challenging project requirements.
A yacht is always seen as an exclusive and prestigious object and must be treated as such.
To calibrate methods for condition assessment of prestressed concrete (PC) bridges, tests were carried out on a 55 year old five-span bridge with a length of 121 m in Kiruna in northern Sweden. Both non-destructive and destructive full-scale tests were performed. This paper presents results regarding the residual forces in the prestressed reinforcement.
Key words: Assessment, Cracking, Corrosion, Creep, Modelling, Pre-stress, Reinforcement,
Structural Design, Sustainability, Testing.
1. INTRODUCTION

1.1 General
Assessment, repair and strengthening of existing bridges are required in order to meet current and future demands on sustainability of existing infrastructure. For instance, a survey carried out within the project MAINLINE (2015) indicated a need in Europe for strengthening of 1500 bridges, replacement of 4500 bridges and replacement of 3000 bridge decks. It is believed that for some of these structures replacement can be avoided if more accurate assessment methods are used, see e.g. Sustainable Bridges (2007), Nilimaa et al. (2016) and Paulsson et al. (2016).
Figure 1. The Kiruna Bridge with road E10 to the right and Iron Ore Railway line to the left (2010-03-23).
1.2 The Kiruna Bridge
The test object, located in Kiruna, Sweden, was a viaduct across the European road E10, see Figure 1. It was constructed in 1959 as a part of the road connecting the city centre and the LKAB mining area. Due to underground mining, the entire area underwent excessive settlement. The owner of the bridge, LKAB, closed it in October 2013 and planned to tear it down in September 2014. Before demolition, the bridge was tested in a research project, which is discussed below.
2 PRESTRESSING FORCES
There are some 850 prestressed concrete bridges in Sweden and they represent about 5% of the total number of bridges maintained by Trafikverket. When assessing prestressed concrete bridges, it is essential to take the current condition of the prestressing system into account. For instance, the quality of reinforcement protection (e.g. grout), steel corrosion and residual prestress force are all aspects that are crucial and require special attention, SB-LRA (2007). The residual prestress force influences the structural response both at the service-load and ultimate-load levels. By preventing cracks or limiting their formation, prestressing also reduces environmental exposure and, consequently, has a favourable impact on structures in harsh environments. However, there are often many uncertainties associated with the residual prestress force, especially after a longer time in service and, therefore, it can be useful to calibrate theoretically-based methods using experimental data from the assessed structure.
Due to its expected use on full-scale bridge members reinforced with post-tensioned tendons, the saw-cut method was further investigated in Bagge (2017) . The principle of the method is to measure the development of longitudinal strain at the surface (top or bottom) of a member when a block of concrete is isolated from the loads acting on it. The isolation is carried out gradually by introducing transverse saw-cuts on each side of the position of measured strains and the concrete block is regarded as isolated when increasing the depth of saw-cuts does not cause further changes in the strains at the measured surface. The saw-cutting can be simulated in a FE model by gradually removing FE elements corresponding to the saw-cuts in the experiments (see Figure 2). Therefore, using this method, it is possible to avoid any damage to the structure which might be difficult to repair.
Figure 2. A part of a FE model for simulation of the strain distribution as saw-cuts are introduced transversally at the base of a concrete member (Bagge 2017) .
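The evaluation in Bagge (2017) relies on calibrated finite-element simulation of the saw-cuts, so the fragment below is only a conceptual sketch of the plateau idea: find the cut depth at which further cuts stop changing the measured strain, then convert that released strain to a stress estimate. The modulus, tolerance, and strain values are invented.

```python
# Conceptual sketch of the saw-cut evaluation: detect the plateau in measured
# strain and convert the released strain to a stress estimate. Real evaluation
# uses calibrated FE models (Bagge 2017); all numbers here are invented.
E_CONCRETE = 34_000.0   # MPa, assumed modulus of elasticity

def released_strain_at_plateau(strains, tol=2e-6):
    """Return the strain once a further saw-cut increment changes the reading
    by less than tol, or None if no plateau is reached (incomplete isolation)."""
    for prev, cur in zip(strains, strains[1:]):
        if abs(cur - prev) < tol:
            return cur
    return None

# Strain readings after cuts of 0, 20, 40, 60, 80 and 100 mm depth (compressive release).
strains = [0.0, -60e-6, -110e-6, -140e-6, -152e-6, -153e-6]
eps = released_strain_at_plateau(strains)
if eps is None:
    print("No plateau: block not isolated, deepen the saw-cuts.")
else:
    print(f"Released stress roughly {E_CONCRETE * abs(eps):.1f} MPa at the measured surface.")
```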
Figure 3 shows the results from the measurements on the south girder in Section A close to midspan 1, indicating consistent measurements but also incomplete isolation of the concrete blocks from the acting stresses (complete isolation would be characterised by a plateau in the response). In the tests the remaining prestress force varied between some 10 and 85% of the original prestress force of some 85% of the yield stress (a yield stress fy = 1600 MPa for the 32 wires of 6 mm in a cable in the BBRV System gives an initial prestress force of some 1300 kN/cable).
Figure 3. Measured development of strains as a function of saw-cut depth for non-destructive determination of the residual prestress force in the south girder. Left: Section A close to midspan 1. Right: Section D at midspan 3. Bagge (2017).
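The quoted force per cable can be checked directly from the numbers given in the text; only the rounding below is added.

```python
# Back-of-the-envelope check of the initial prestress force per cable:
# 32 wires of 6 mm diameter stressed to about 85% of fy = 1600 MPa.
import math

wire_area = math.pi / 4 * 6.0 ** 2       # ~28.3 mm^2 per wire
cable_area = 32 * wire_area               # ~905 mm^2 per cable
initial_stress = 0.85 * 1600.0            # MPa
force_kn = initial_stress * cable_area / 1000.0
print(f"{force_kn:.0f} kN per cable")     # ~1230 kN, i.e. "some 1300 kN/cable"
```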
4 SUMMARY AND CONCLUSIONS
Tests have been carried out to check the remaining prestress level in a 55-year-old bridge. The level was found to vary between some 10 and 85% of the original force of about 1300 kN/cable. Further research on assessment methods and life-cycle analysis of bridges should be carried out in order to achieve more sustainable management.
ACKNOWLEDGEMENT
The authors gratefully acknowledge financial support from Trafikverket/BBT, LKAB/HLRC, SBUF and LTU. They also thank colleagues in the Swedish Universities of the Built Environment (Chalmers University of Technology in Göteborg, the Royal Institute of Technology (KTH) in Stockholm and Lund Institute of Technology (LTH) in Lund) for fruitful
cooperation. The experimental work and a previous monitoring campaign was carried out in cooperation with staff of Complab at Luleå University of Technology.
REFERENCES
MAINLINE. A European FP 7 research project 2011-2014 with 19 partners with the full name: MAINtenance, renewaL and Improvement of rail transport iNfrastructure to reduce Economic and environmental impacts. A project summary and 27 deliverables can be downloaded from www.mainline-project.eu
Sustainable Bridges – Assessment for Future Traffic Demands and Longer Lives. A European FP 6 Integrated Research Project during 2003-2007 with 32 partners. Four guidelines and some 35 background documents are available at www.sustainablebridges.net.
Nilimaa, J., Bagge, N., Häggström, J., Blanksvärd, T., Sas, G., Täljsten, B. & Elfgren, L. (2016). "More Realistic Codes for Existing Bridges". 19th IABSE Congress, Stockholm, 21-23 September 2016. International Association for Bridge and Structural Engineering (IABSE), Zürich. Ed. by L. Elfgren, J. Jonsson, M. Karlsson, L. Rydberg-Forssbeck & B. Sigfid, pp. 399-407. http://ltu.diva-portal.org/smash/get/diva2:1073956/FULLTEXT01.pdf
Paulsson, B., Bell, B., Schewe, B., Jensen, J. S., Carolin, A. & Elfgren, L. (2016). "Results and Experiences from European Research on Railway Bridges". 19th IABSE Congress, Stockholm, 21-23 September 2016. IABSE, Zürich, pp. 2570-2578. ISBN 978-3-85748-144-4. http://www.diva-portal.org/smash/get/diva2:1015045/FULLTEXT02.pdf
Bagge, N. (2014). Assessment of Concrete Bridges: Models and Tests for Refined Capacity Estimates. Licentiate thesis. Luleå: Luleå University of Technology, 132 pp. http://www.diva-portal.org/smash/get/diva2:999497/FULLTEXT01.pdf
Bagge, N. (2017). Structural Assessment Procedures for Existing Concrete Bridges: Implementation of Experiences from Tests to Failure of the Kiruna Bridge. PhD thesis. Luleå: Luleå University of Technology, to be published.
Bagge, N., Nilimaa, J., Blanksvärd, T. & Elfgren, L. (2014). "Instrumentation and full-scale test of a post-tensioned concrete bridge". Nordic Concrete Research, 51, 63-83.
Bagge, N. & Elfgren, L. (2016). "Structural performance and failure loading of a 55-year-old prestressed concrete bridge". Paper presented at the 8th International Conference on Bridge Maintenance, Safety and Management (IABMAS), Foz do Iguaçu, Brazil.
Bagge, N., Nilimaa, J., Blanksvärd, T., Täljsten, B., Elfgren, L., Sundquist, H. & Carolin, A. (2016). "Assessment and failure test of a prestressed concrete bridge". In Life-Cycle of Engineering Systems: Emphasis on Sustainable Civil Infrastructure, ed. by Bakker, Frangopol & van Breugel. London: Taylor & Francis Group, ISBN 978-1-138-02847-0, paper C134, abstract p. 194, full paper on USB card pp. 1958-1063.
Bagge, N., Nilimaa, J. & Elfgren, L. (2017). "In-situ methods to determine residual prestress forces in concrete bridges". Engineering Structures, 135, 41-52.
Nilimaa, J. (2015). Concrete Bridges: Improved Load Capacity. Doctoral thesis. Luleå: Luleå University of Technology, 176 pp. http://www.diva-portal.org/smash
Eriksson, M. & Kjellholm, L. (2005). Funktionsupprätthållande åtgärder för förspända betongbroar: En fallstudie av Nötesundsbron (Maintenance of prestressed concrete bridges: A case study of the Nötesund Bridge; in Swedish). Examensarbete 2005:7, Betongbyggnad, Chalmers tekniska högskola, 102 pp. http://publications.lib.chalmers.se/records/fulltext/3783.pdf
SB-LRA (2007). "Guideline for load and resistance assessment of existing European railway bridges." Sustainable Bridges, 428 pp. www.sustainablebridges.net. | https://5dok.org/document/7qv45ngq-assessment-concrete-bridges-prestress-experiences-testing-failure-kiruna.html |
The Florida Department of Transportation discovered some corrosion of the Mickler-O'Connell Bridge on State Road 312 during routine inspections last year.
Spanning the Matanzas River, south of downtown St. Augustine, the 42-year-old bridge sits in what is essentially a narrow salt-water estuary sheltered from the Atlantic Ocean by Anastasia Island.
Hampton Ray, a spokesman for FDOT, told The Record on Wednesday the bridge is healthy, but the department keeps a close eye on bridges in these types of punishing waters.
"If a bridge is ever deemed unsafe or anywhere even close to that, we have no problem shutting down the bridge," he said. "It's a healthy bridge and we would never put anyone in harm's way when it comes to that."
Ray said the corrosion study was commissioned by FDOT's State Materials Office and carried out by Florida International University's Civil and Environmental Engineering program. He said the study was not directed in response to any specific concerns with the bridge, built in 1976, but rather to stay well ahead of any potential issues.
The study focused on the bridge's submerged steel H-piles. Scientists with FIU specifically looked into the possibility of microbiologically-influenced corrosion.
According to an article posted Jan. 12 to FIU's web page for its Civil and Environmental Engineering program, university scientists say microorganisms tend to adhere to most surfaces in contact with water. When reproduced, these microorganisms create exopolymers or biofilm that can influence the chemistry of the surface to which they are attached and, through that interaction, can degrade the material.
However, university scientists say while there is evidence of damage from microorganisms, more research is needed to determine the exact cause or causes of the corrosion of the bridge's H-piles.
Ray said the department has strict, stringent standards for bridges across the state and that the materials office is constantly looking at ways to make Florida's bridges and infrastructure more durable in corrosive environments.
"When it comes to corrosion, it is something we do prepare for and we do expect," he said. "It's certainly not uncommon for any bridges or any infrastructure that's in brackish kind of water to see that kind of thing. It's not surprising, necessarily, that there's corrosion of a bridge."
Ray said all bridges are built to standard, which varies based on use, and that all bridges are inspected a minimum of every two years.
"We do a really intensive, deep-dive review of the bridge every two years," he said. "That includes divers actually going down and inspecting the piles. They would see the corrosion there."
Ray said studies such as the Mickler-O'Connell bridge study are not uncommon. He said they're mostly valuable to engineers, who can consider and review the research to better design and protect infrastructure.
"We have a lot of challenges when it comes to our infrastructure," Ray said. "Whether it's brackish water, hurricanes, or regular day-to-day use of semi-trucks down to your regular, local commuters."
According to an FDOT bridge maintenance report dated Jan. 2, the Mickler-O'Connell Bridge was last inspected May 17, 2016, receiving a sufficiency rating of 81.7. No health index was provided.
The same report from April 1, 2016, predating the most recent inspection, indicated the bridge received a sufficiency rating of 82.4 after an inspection May 19, 2014. It had a health index of 84.62.
FDOT says the sufficiency rating is a tool used to determine whether a bridge that is structurally deficient or functionally obsolete should be repaired or just replaced. This rating considers a number of factors, only half of which relate to the condition of the bridge itself.
"The sufficiency ratings for bridges are part of a formula used by the Federal Highway Administration when it allocates federal funds to the states for bridge replacement," the department's definition reads.
The health index, in the meantime, measures the overall condition of a bridge and typically includes up to a dozen elements that are evaluated by FDOT. A lower health index means more work would be required to improve the bridge "to an ideal condition."
The department's definition says a health index below 85 would generally indicate "some repairs are needed, although it doesn't mean the bridge is unsafe."
Both reports indicated an average daily traffic count of 18,750. Ray said the most recent data available was probably from 2015 or 2016, and would represent average daily traffic over one year.
The study, published March 2017, was not readily available from FDOT Wednesday afternoon, although an abstract was provided. | https://www.staugustine.com/news/20180125/despite-some-corrosion-of-mickler-oconnell-bridge-fdot-deems-structure-healthy |
Layoffs and the shift to remote operations in response to the impact of COVID-19 created a confusing job market for tech professionals, and the road forward can be hard to see. Getting an old job back might not be possible and keeping a position could present its own challenges. The nature of a role might change dramatically and require new skills to match the new landscape. Organizations such as DevOps Institute and ADP have been weighing questions about what will become of the tech workforce as jobs are redefined in the next phase of dealing with the pandemic.
Human resources software and services company ADP's ADP Research Institute in June released a research report on the redefined workplace under COVID-19. The report covers the overall workforce but does speak to issues IT teams have had to cope with. According to the ADP Research Institute report, early into the pandemic, about 50% of the workforce represented in the research had trouble with remote access, connectivity, and other technology functions related to operating away from traditional workspaces. Such setbacks continued for at least eight weeks after the initial mass migration to remote work.
“We are facing uncertainty,” says Ahu Yildirmaz, co-head of ADP Research Institute. “We don’t know how long the pandemic impact will continue.”
Each IT professional likely faces unique dilemmas related to the pandemic, but there are resources that might help them consider new ideas to deal with that uncertainty. That can include the DevOps Institute’s SKILup Days, one-day online conferences that focus on industry practice areas IT professionals want to better understand. The topics might point to new directions IT professionals may consider for reskilling and upskilling to pursue new and evolving career opportunities.
ADP's Yildirmaz says tech workers face different circumstances compared with many other types of workers and their transition to a new normal might take alternate routes. For instance, tech workers in some instances could operate indefinitely from home, which may introduce new long-term considerations for continued remote access. Teams that once operated collectively in-person might have to face long-term solitude. "One of the things we need to focus on is social distancing doesn't necessarily mean social isolation," Yildirmaz says.
Workers need to change industries, including to different tech disciplines, and may need to consider upskilling to meet new demands, Yildirmaz says. New entrants to the tech workforce might also emerge from other industry sectors, potentially increasing competition for employment. “We see this among low-skilled workers trying to find jobs, which require more skills,” she says. Coding knowledge, Yildirmaz says, might be sought by workers from non-tech careers.
The pandemic accelerated demand for certain tech skill sets in unexpected ways, says Jayne Groll, CEO of the DevOps Institute, which also changed the transformation timetable for many organizations. Some companies had to cover for staff affected by COVID-19, she says, then soon discovered if they had sufficiently trained other team members to take on those tasks. “We knew there was a talent gap,” Groll says. “There was a lot of talk about full-stack engineers and then the pandemic hit and revealed other challenges.”
DevOps Institute had the SKILup Days in the works prior to the outbreak of the pandemic, she says, and adjusted to meet the growing interest in remote discussions on subjects and skills the market needs. Topics covered in the remote sessions include enterprise Kubernetes, DevSecOps, and site reliability engineering. “We’re watching the world go virtual,” Groll says.
As the world approaches a new form of normal, she says she will watch where organizations invest to update their teams' skills. Groll says tech training could become a blend of formal courses, peer-to-peer training, and more online learning opportunities. "No other event could have shown organizations how important reskilling and upskilling are."
Those skills may need to reflect new tech demands companies might confront. Groll says management within organizations could shift focus to such areas as observability and cyber liability, for example. The move to remote operations meant addressing security and connectivity for large numbers of employees, Groll says. As migration to the cloud continues while still dealing with the effects of COVID-19, new expertise may be required to keep up. "When we look at operational modes, this situation will require new skills," Groll says. "This has pushed us two to five years ahead."
For more content on IT skills, follow up with these stories: | https://www.informationweek.com/strategic-cio/team-building-and-staffing/should-it-professionals-retrain-for-a-new-normal/a/d-id/1338085?print=yes |
Posted: Mon 14th Nov 2022
The skills gap is being felt acutely in organisations across the UK and beyond.
It’s the result of a complex interplay of contributing factors, exacerbated by rapid changes in the workplace that are now being termed The Fourth Industrial Revolution.
Add the recent Covid-19 pandemic, the current cost of living crisis, the ongoing war in Ukraine and the emerging implications of Brexit into the picture, and the scale of skills shortages is taking on a new urgency that we can no longer ignore.
By way of illustration, more than 80% of employers in the UK construction industry are reporting chronic skills shortages, with 2/3 worrying that there are insufficient skills to cater for the industry’s future needs and developments. And it’s not just major employers who are affected. Small and medium-sized enterprises everywhere are feeling the pinch too.
If your business is impacted and you’ve identified a skills gap in your existing workforce, there are various obvious approaches you can use to tackle the problem.
A recent McKinsey report charts the strategies employers are using to reduce the skills gap: over half of those surveyed take the route of building additional skills in their existing team. The report furthermore states that almost 70% of businesses see a direct ROI from investing in learning & development, while many see benefits beyond bottom-line growth.
Improving the skills of your current employees should be an essential part of your strategy to address a skills shortage – but should you be upskilling or reskilling your resources?
Both terms refer to learning new skills – they are two sides of the same L&D coin – but there are key differences we need to understand.
Reskilling focuses on teaching your current employees new skills. A team member may be better suited to a different role in the company but lacks a particular skill.
For employees who are looking for new roles and job opportunities within a growing company, this could be an attractive way to change their career trajectory while remaining loyal to their current employer.
From the point of view of the employer, change will be driven by the operational needs of the business. This could be something as simple as purchasing a company minibus and needing a licenced in-house driver, with learner courses widely available at approved training and test centres.
Or, reskill staff by switching to an e-commerce-based business model and redeploying in-store sales assistants as remote customer service assistants.
Whether you are broadening out into new markets or new offerings, reskilling can be a useful and highly effective way to repurpose employees within the business.
By comparison, upskilling focuses on developing your employee’s existing skill set, either by strengthening previous knowledge or by imparting additional skills.
Ensuring that team members have the right skills and abilities to perform in their roles equips them to feel more confident and grow in their current positions, and enables them to take on extra tasks and responsibilities.
The result is that employees who are able to deal effectively with the rapid speed of transformation that many businesses are experiencing add value to the business.
Digital upskilling is a case in point – see our upcoming webinar on the topic.
From the perspective of the employee, upskilling helps them to develop their potential within their chosen field, expand their marketable skillset and enhance their career progression.
Whether you decide to invest in your existing employees by way of upskilling or reskilling, the potential benefits to the business are clear:
Investing in skills training is an excellent opportunity to boost your company’s competitiveness in today’s market while preparing your business for the future.
Technology plays a key role here. Upskilling and reskilling your team as needed means developing their ability to use the latest tech solutions and software tools.
The result is that, as a business, you can leverage technology to stay not only relevant in the market but at the forefront of it.
Recent industry reports suggest that the cost saving of reskilling or upskilling versus hiring new talent could be as much as £50k per person.
By investing in your existing team and equipping them with the skills necessary to operate in today’s complex, fast-paced and increasingly hybrid work economy, you can sidestep the expensive hiring process.
Stay on top of your team’s learning needs with a continuous development culture to smooth learning curves and training costs going forward.
The other side of the coin is that by providing more opportunities for your team members to grow, their job satisfaction will be enhanced, which ultimately allows the business to retain its top talent.
Opportunities to develop and progress within the company are seen as the key driver of a great workplace culture which, in turn, has a direct link to employee retention. In fact, 93% of employees would stay loyal to their employer if it invested in their careers.
Engaged and motivated employees are good for your company’s performance. Put a different way, disengaged team members are 21% less productive than their peers and 87% more likely to leave their current employer, costing the UK economy in the region of £340 billion a year.
By focusing on upskilling and reskilling your existing employees, you send a clear sign that they are valued, supported and encouraged to shine.
Finally, reskilling and upskilling are obvious answers to address the skills gap in your business. According to recent research, nearly 80% of the largest employers (and nearly 2/3 of the smallest employers) reported a skills gap that adversely affected their business performance.
Bridging talent shortages by investing in L&D rather than hiring new is a more cost-effective and long-term solution that benefits both employer and employee, working together for a successful future.
When it comes to workforce development, learning businesses can play a critical role in identifying and delivering the skills employees need and those that employers want. From attracting and educating workers new to the field to upskilling and reskilling those already working, learning businesses can have a significant impact on the profession or industry they serve.
In this episode of the Leading Learning Podcast, we break down the topic of workforce development, including how we define it, key stakeholders involved, and its history in the U.S. We also look at workforce development today and the related opportunities for learning businesses.
To tune in, listen below. To make sure you catch all future episodes, be sure to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, be sure to give it a tweet.
Read the Show Notes
[00:00] – Intro
What Is Workforce Development?
[00:28] – Workforce development can be a complex topic, so we’ll start by defining it.
This is based on the definition of workforce development used by the Federal Reserve Bank of St. Louis. We like that definition because it is broad. It encompasses the activities done by various players in workforce development, and it covers the range of perspectives represented by those players.
Workforce development players include the following:
- Education providers (including K-12, higher ed, and learning businesses) and social service providers
They tend to approach workforce development from the perspective of the individual. Their approach to workforce development programs is based on the belief that individuals won’t be able to make substantive contributions to society without access to training and education and that an individual’s basic needs must be met for her to contribute to society. Social services figure in, along with job training and education, to position an individual for success in the workforce.
- Employers
They tend to approach workforce development from an organizational perspective, focusing on the skills and training their specific organization needs to be and remain competitive or, more broadly, the skills and training needed by their industry or profession.
- Communities and economic developers
They tend to approach workforce development from the perspective of what benefits the community or region. Workforce development from the community perspective tends to focus on initiatives that educate and train individuals to meet the current and future needs of businesses and industries in a region, and it tends to emphasize local or regional needs.
[04:36] – In some ways the regional or local focus can be good, as a tighter geographic focus facilitates communication and coordination among players. In other ways, the local focus can be limiting when you consider the global nature of much of the economy and the growing prevalence of remote work.
There has also been growth in a more nomadic workforce. If workers can work remotely, they don’t necessarily have to stay in one area. They can move around while remaining employed at the same place. These more recent trends toward remote work and a more nomadic workforce are interesting to consider in light of the fact that workforce development tends to be decentralized by nature.
In an ideal scenario, the individual, organizational, and regional perspectives of workforce development would have significant overlap.
The History of Workforce Development
To add some details to our definition, here are some workforce development facts from U.S. history:
- FDR’s New Deal (passed in the 1930s) as a result of the Great Depression is commonly viewed as the start of federal workforce development legislation. During the eight years that this program existed, it generated more than 8.5 million jobs nationwide.
- President Clinton signed the Workforce Investment Act (WIA) in 1998 during a period of full employment. It focused on the delivery of workforce development programs and related services through a nationwide network of community-based, one-stop career centers. It gave individuals a single location where they could go and access workforce programs and services. WIA created workforce investment boards, led by businesses, to develop local strategies based on labor market data and to oversee programs in their communities.
- In 2014, President Obama signed the Workforce Innovation and Opportunity Act (WIOA), which reauthorized the workforce investment system and replaced the Workforce Investment Act of 1998. WIOA took effect on July 1, 2015, and states and local workforce development boards are implementing the act into the present day.
To learn more, check out The History of Workforce Development from the PA Workforce Development Association.
Sponsor: BenchPrep
[9:17] – We’re grateful to BenchPrep for sponsoring the Leading Learning Podcast.
BenchPrep is an award-winning learning platform purpose-built to help learners feel confident and prepared to take difficult entrance, certification, and licensing tests by delivering an intuitive, efficient, and engaging study experience. BenchPrep helps you accelerate test prep revenue growth by offering the tools you need to create market-ready products and data to improve your content and understand learner behavior.
Many of the world’s leading associations, credentialing bodies, test providers, and training companies trust BenchPrep to power their online study programs, including ACT, the Association of American Medical Colleges, CFA Institute, CompTIA, GMAC, McGraw-Hill Education, AccessLex, and more. More than 8 million learners have used BenchPrep to attain academic and professional success. To discover more, visit BenchPrep.
Workforce Development Today and the Opportunity for Learning Businesses
The Great Resignation
[10:24] – When we think about jobs and the current moment, it’s hard not to think of the Great Resignation, AKA the Big Quit, AKA the Great Reconsideration. Whatever you call it, this is the fact that a record-breaking 47.4 million people quit their job in 2021. The pandemic has been a major driver of the quitting, but COVID is only part of the picture.
The Great Reconsideration being one of the names given to this phenomenon speaks to the revaluation that many workers are doing. We know low wages and new career goals have driven some of the resignations, and those are the kinds of things workforce development can help with. One of the big goals of workforce development is good jobs.
It’s not any job or jobs at any cost. The goal is good jobs, high-quality jobs…. The U.S. isn’t facing a labor shortage. The U.S. is facing a good jobs shortage.Celisa Steele
To learn more, check out “It’s a Good Jobs Shortage: The Real Reason So Many Workers Are Quitting” from the Center for American Progress.
Even before the pandemic, a lot of workers were dealing with low or stagnant wages, unpredictable schedules, and undesirable working conditions—and going without benefits like health care and paid family and medical leave. The number of good jobs has been on the decline for decades according to the U.S. Private Sector Job Quality Index (JQI).
Opinions on what constitutes a good, high-quality job differ. The Job Quality Index looks at money, specifically the weekly income a job generates for an employee. A September 2020 report from the Center for American Progress asserts that we need to go beyond income only. It asserts that good jobs are “the kind of jobs that afford economic security and participation in civic life as opposed to occupations that require few skills, pay low wages, or are vulnerable to outsourcing.”
Job quality should also consider worker safety, commute time, working environment, the right to unionize, and equal pay and discrimination.
Diversity, Equity, and Inclusion
[13:28] – There is a push for diversity, equity, and inclusion and a focus on DEI in workforce development. This is where the societal or community perspective of workforce development comes into play. Marina Zhavoronkova, a senior fellow for workforce development at the Center for American Progress has a February 2022 article that points to the fact that construction and other industries supported by the new bipartisan Infrastructure Investment and Jobs Act face labor shortages. That act allocates $1.2 trillion toward repairing the transportation system, ensuring access to clean water, connecting people to high-speed broadband, and more in the U.S. She asserts that workforce development systems can help narrow the labor shortage by supporting efforts to bring in women and workers of color.
To learn more, check out “Meeting the Moment: Equity and Job Quality in the Public Workforce Development System” from the Center for American Progress (also by Marina Zhavoronkova).
Another thing to note when looking at workforce development in the current moment is that workforce development plays a role not only in attracting and skilling new workers but in upskilling and retaining workers. And when the labor market is as tight as it is now, the retaining piece becomes as important and valuable as the attracting piece, especially to employers.Jeff Cobb
Partnering with Employers
[15:51] – Learning businesses should take the time and money to invest in finding and forming partnerships. If you’re going to be an education provider in the workforce development realm, then close alignment with employers in your industry or profession is essential.
That alignment with employers is part of your marketing. It is also a big part of how you know that the products and services you offer will be seen as valuable.
Also, if you can go beyond alignment with employers to actual partnerships, then you can make the design and development of new products less risky. Depending on the nature of the partnership, you may share design and development costs with an employer-partner, or you may pre-sell to that employer-partner, so you go into design and development knowing that you’ve got a B2B sale already guaranteed. Whether what you’re doing “counts” as workforce development or not, your work to align with or to partner with employers is valuable. It’ll be useful broadly in your learning business because it keeps you connected to the industry or profession you’re serving.
In our years of experience working with organizations, we haven’t seen enough intentional effort to communicate with employers, understand their needs, and use that as the basis for creating products that you know there is demand for out in the marketplace.
We recently spoke to Clare Marsch, SVP of training and development at the American Bankers Association (ABA), and she talked about how close ABA is to the banks, which are the employers in ABA’s field. ABA offers in-bank learning programs, and they work closely with their members to create programs. That keeps them aligned with employers, and that’s something that most trade and professional associations can do—make use of their members to get and stay connected with employers.
Clare also pointed out that one of the important things she and her team do is seed the market for future bankers. ABA has relationships with colleges, universities, community colleges, and educational organizations that spread knowledge of the banking industry to the next generation of workers. So ABA is both professionalizing current bankers and seeding the market for future bankers.
Given the benefits of partnering and working with employers, why doesn't it happen more? At least part of the answer is likely related to the time and effort required. When we spoke to Lowell Aplebaum, he talked about the need for potential partners to come into discussions with open minds. The time, energy, and open-mindedness required for effective partnerships are barriers. But they're worth taking on because the potential return is huge.
Reflection Questions
[20:48] – We invite and encourage you to take time to reflect.
- What role does your learning business play in attracting and educating workers new to the field, industry, or profession you serve?
- What role might your learning business play? There may be opportunities to revisit and refine what you offer or add new offerings.
- How clear are you on the needs of the employers in your field, industry, or profession?
- When was the last time you verified those employer needs? Make use of relevant existing research and/or conduct your own. (We recommend Georgetown University’s Center on Education and the Workforce as a good potential resource. Specifically, check out Workplace Basics: The Competencies Employers Want, which explores how 120 knowledge areas, skills, and abilities are demanded across the workforce and within specific occupations.)
- What are the implications and opportunities of recent and still unfolding trends, like automation and artificial intelligence? What do these trends mean related to skills gaps, upskilling, and reskilling? How might your learning business help?
[23:46] – Wrap-up
To make sure you don’t miss new episodes, we encourage you to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). Subscription numbers give us some visibility into the impact of the podcast.
We’d also be grateful if you would take a minute to rate us on Apple Podcasts at https://www.leadinglearning.com/apple. We personally appreciate reviews and ratings, and they help us show up when people search for content on leading a learning business.
Finally, consider following us and sharing the good word about Leading Learning. You can find us on Twitter, Facebook, and LinkedIn.
Episodes on Related Topics: | https://www.leadinglearning.com/episode-297-value-of-workforce-development-now/ |
Over the past year, the pandemic has affected all Malaysians, especially those who have lost their jobs. Through the recovery strategy outlined by the 6Rs (Resolve, Resilience, Restart, Recovery, Revitalise and Reform), businesses have been revitalised, creating job opportunities.
As such, the Government has been helping to boost jobs to kickstart and revitalize the economy for the future. All in all, the government hopes to create 500,000 employment opportunities through the National Employment Council (NEC), with the milestones tracked closely by LAKSANA, the coordinating agency which monitors the government’s efforts to stimulate the economy post COVID-19.
PenjanaKerjaya2.0
PenjanaKerjaya was announced under the National Economic Recovery Plan in June 2020, with its RM2 billion investment expected to provide some 250,000 employment opportunities in 2021. The objective is to promote the creation of quality jobs and reduce unemployment among locals.
Employers are given an additional incentive of 60% to encourage job opportunities for people with disabilities, those who have been unemployed and workers who have been retrenched. Furthermore, employers are given access to training courses worth up to RM7,000 as well as mobility assistance of between RM500 and RM1,000, depending on the distance from home to the workplace.
Reskilling and Upskilling (Program Peningkatan Kemahiran dan Latihan Semula)
RM1 billion has been allocated to help 200,000 beneficiaries improve their skills, making them more marketable and better prepared for the challenges of the post-pandemic job market.
RM150 million will be allocated for the Ministry of Higher Education professional certification (KPT-PACE) that will benefit 50,000 new graduates. The graduates will be offered vouchers worth RM3,000 each to pursue a professional certification course at public and private universities.
An allocation of RM100 million goes to the Human Resources Development Fund (HRDF) to implement training in collaboration with private sector employers, while another RM100 million was given to MDEC to facilitate the talent transfer of the existing workforce to meet the needs of the growing ICT industry.
Another RM100 million will be allocated to the Iskandar Regional Development Authority (IRDA) and the Sabah Economic Development and Investment Authority (SEDIA) to equip jobseekers who suffered severely due to the pandemic with new skills.
Perbadanan Hal Ehwal Bekas Angkatan Tentera (PERHEBAT) will be getting RM30 million for the training programs for ex-servicemen of the Malaysian Armed Forces.
Short-Term Employment Programme (MySTEP)
As announced in the 2021 Budget, MySTEP is a RM700 million initiative creating 50,000 contract job opportunities in various public agencies as well as government-linked companies (GLC). A total of 35,000 job opportunities are provided in the public sector, meanwhile 15,000 job opportunities are also provided at GLCs as well as their strategic partners.
Interested applicants should be on the lookout for job related postings at Akademiga.com; Mykerjaya.com; and Graduan.com.
MyFutureJobs Portal
MyFutureJobs is the nation's employment portal under the auspices of the Ministry of Human Resources and SOCSO. The portal is an interactive and integrated platform that guides employers step by step in screening candidates to ensure that Malaysians are given employment opportunities. Applications for foreign workers or expatriates will only be considered if there are no Malaysians interested in the positions.
Job seekers can look for a wide range of job vacancies stemming from various sectors. Approximately 67.3% of job vacancies on the platform are within the RM2,000 to RM2,499 range, making it a perfect job-hunting avenue for fresh graduates and those with less than three years of job experience.
For more info: Myfuturejobs.gov.my
HRDF Placement Centre (HPC)
The Human Resources Development Fund Placement Centre (HPC) aims to provide more than 50,000 jobs for Malaysians this year to address unemployment challenges in the midst of the Covid-19 pandemic. The portal assists in matching employers from a wide range of industries and sectors with the right candidates.
Close to 20,000 job opportunities can be accessed via the portal through strategic cooperation with employee associations and career strategic partners. To reach the 50,000 job placements target, HPC will conduct road shows around the country to secure more openings by establishing more cooperation.
Interested? Visit: https://hpc.hrdf.com.my
With all these measures kicking in, we can expect unemployment to come down soon and the country to return to its pre-pandemic economic levels. Here's hoping 2021 will be a year of slow but steady progress for all of us.
Event date: This event has already passed.
Webinar Format
WHY: Retail, hospitality, tourism, and food and beverage service are some of the industries that were most heavily impacted by layoffs at the onset of COVID-19. As a result, the need for scalable models for upskilling and reskilling the frontline workforce is more important than ever.
WHAT: Join this interactive convening to engage in the discussion about equitable strategies for economic recovery and identify opportunities for upskilling, reskilling, and redeploying displaced workers that can be implemented to support the economic well-being of frontline workers in your community now and in the future.
WHO: The statewide convening will bring together leaders from some of Colorado’s most heavily impacted industries with partners in education and workforce to identify shared solutions for economic recovery for the industry and frontline workers. Representatives from businesses in these industries, frontline workers, and public partners and funders interested in supporting businesses and frontline workers are encouraged to attend.
Register for Reenvisioning Retail 2020 and to learn more!
Agenda and Meeting Links
To register in advance for multiple sessions and be added to the calendar invitations for the sessions, please register here. Please note that each session will be capped at 300 attendees and will be limited to the first 300 people to register for the session through the registration form.
Day 1: July 23
- 8:00 – 9:00 a.m. – Digital Espresso
- Opportunity for digital networking with event participants.
- 9:00 – 11:00 a.m. – Opening and Key Note – Equity in Retail
- Speakers: Dr. Nita Mosby Tyler, The Equity Project
- Advancing equity throughout the retail and hospitality industries is a competitive advantage for businesses and can open up opportunities for frontline workers. As the Reenvisioning Retail conference opens, Dr. Nita Mosby Tyler, the Chief Catalyst and Founder of The Equity Project, LLC. will ground attendees in an understanding of what equity is to frame discussion about an equitable recovery for businesses and frontline workers. Her witty storytelling will embrace misunderstandings and differences while encouraging self-reflection and actions we can all take towards eliminating disparities as both a competitive advantage for businesses and a moral imperative for frontline workers.
- 11:00 a.m. – 12:30 p.m. – Telling Our Story: Digital Live Scribing of Frontline Workers
- The COVID-19 crisis emphasized the critical role frontline workers play in Colorado and beyond. And yet, 24 million workers in the U.S. face little to no upward career mobility, leaving them vulnerable to displacement and with few options to increase their wages and career opportunities for themselves and their families.* New solutions are needed to accelerate skill development and economic mobility for frontline workers, but effective solutions must respond to the unique needs, barriers, and interests these workers face. This session will ground discussion on economic recovery in the lived experiences and expertise of frontline workers across Colorado.
- *A Guide to Upskilling America’s Frontline Workers, Aspen Institute (2015).
- 1:00 – 2:30 p.m. – Solution Salon #1 – Occupational Segregation and Wage Discrepancies
- Occupational segregation occurs when one demographic group is overrepresented or underrepresented among different kinds of work or different types of jobs. This interactive discussion will bring together industry representatives with experts in occupational segregation and wages to discuss challenges confronting retail, food and beverage, and hospitality and engage participants in identifying Colorado-specific solutions.
- 2:30 – 4:00 p.m. – Solution Salon #2 – Upskilling and Employee Investment
- For many companies, economic downturns trigger budget cuts and can lead to a departure from investments in the training and development of employees. Join this interactive solution salon to hear from state and national business leaders about their views on upskilling and employee investment, and discuss challenges and ideas for upskilling and investing in employees.
Day 2: July 24
- 8:00 – 9:00 a.m. – Digital Espresso
- Opportunity for digital networking with event participants.
- 9:00 – 10:30 a.m. – Opening and Key Note – Business Reinvention: The New Normal for America’s Independent Businesses.
- Speaker: Jon Schallert, The Schallert Group
- Lessons in business reinvention from owners who embrace change.
- 10:30 – 11:30 a.m. – The Future of Retail
- Even before the pandemic, retail was changing. COVID-19 and subsequent stay-at-home orders have imposed new and increasingly complex challenges for the industry, creating a need to reenvision how the industry operates to ensure health, safety, and relevance in the age of disruption. This discussion will review projections on the future of retail and examine innovations for how the industry might reenvision itself.
- 11:30 a.m. – 12:30 p.m. – Health and Safety
- As businesses seek to reopen, safety for customers and workers is a top priority. Join this session for a discussion on the physical and mental health considerations for businesses and workers in the age of COVID-19. The panel will include a representative from the Colorado Department of Public Health and the Environment speaking on state level health and safety guidance.
- 1:30 p.m. – 2:30 p.m. – Small Business: Survive, Pivot, Thrive
- Speaker: Lindsey Vigoda, Small Business Majority
- Small businesses are the heart of Colorado’s economy. Coloradan’s have always had an instinct to support local, and now in the face of COVID-19, communities are continuing to rally and support their local small businesses — even when those businesses aren’t physically open. Join Lindsey Vigoda from the Small Business Majority in a discussion with small business owners from around Colorado about the innovative ways they are keeping their businesses viable and how they continue to support and invest in their workers.
- 2:30 p.m. – 3:30 p.m. – The Good Job Advantage
- Speakers: Jenny Weissbound, Aspen Institute; Sarah Kalloch, Good Jobs Institute; Jennifer Briggs, Reinventing Work Consultant/Colorado Employee Ownership Commission
- What is a good job? We’re not just talking about pay, or job duties. The ways you connect with employees and customers can have a big impact on your bottom line. Join this session to learn about operational strategies businesses have taken to increase retention and productivity. These practices can be small changes that make a big impact!
- 3:30 p.m. – 4:00 p.m. – Closing and Call to Action
- A recap of Reenvisioning Retail 2020 and guidance for engaging in next steps. | https://www.vailvalleypartnership.com/event/reenvisioning-retail/ |
There is a growing skills gap that is affecting the entire tech industry. According to the Bureau of Labor Statistics, job openings in the tech industry are projected to grow 13 percent from 2020 to 2030, faster than the average for all occupations. McKinsey says finding tech talent will be a "formidable challenge" in the years to come.
"Companies of all sizes around the globe are facing growing pressure to attract and retain skilled technologists. The Great Resignation has made tech employee attrition a boardroom-level discussion. With the need for tech jobs rapidly outpacing the number of qualified candidates, it's clear that hiring your way out of this talent crunch is not an option. Instead, organizations must invest their resources in upskilling and reskilling their current employees. As the data suggests, if you invest in your employees' growth and development, they are less likely to seek jobs elsewhere and will invest their skills into the key initiatives of your organization. Failure to do so increases the risks of accelerated attrition," said Gary Eimerman, General Manager of Pluralsight Skills.
Key Skills Gaps in 2022

The study also finds that cybersecurity is the area with the largest skills gaps among respondents, replacing cloud computing as the top-ranking area of focus for individuals and organizations in 2022. Forty-three percent of respondents ranked cybersecurity as their top skill concern while 39% of respondents ranked cloud computing as their top skill concern. In 2021, 39% of respondents ranked both cybersecurity and cloud computing as their top skills gaps.
Similarly, 44% of respondents in the 2022 study listed cybersecurity as the skills gap that poses the greatest threat to their organization. Beyond cybersecurity and cloud computing, data storage (36%) and network infrastructure (28%) are some of the top skills gaps in 2022.
Tech Workers Eager to Upskill

According to the survey, only 36% of all organizations are allocating dedicated work time for learning, with only 32% of tech companies designating work time for learning. The majority (91%) of respondents want to improve their tech skills to fulfill personal career goals. Additionally, 86% of respondents want their tech skills to align with their organization's overall strategy.
This data shows a misalignment between technologists' desire to hone their skills and organizations' willingness to dedicate time for upskilling. Technologists are eager to gain skills to make their organizations more successful. Business leaders must allocate resources and time to help technologists achieve their upskilling goals.
Impact of Upskilling on Attrition

In the midst of the Great Resignation, organizations must develop proactive strategies to ensure that their employees are receiving the learning and development tools they need. The study found that 52% of respondents consider leaving their job every month. Additionally, 40% of respondents say they are moving on from their current job due to lack of career growth opportunities. More than one third (37%) of respondents said that professional growth and learning are the top reasons to consider a new job, while 33% of workers say they are moving on from their current job due to lack of upskilling opportunities.
It's clear that technologists are hungry for growth opportunities and are seeking organizations that align with and support their upskilling goals. Organizations who invest their resources into upskilling their employees will see greater employee retention and satisfaction in the coming months.
Overall results of Pluralsight's State of Upskilling survey can be found here. Additionally, written analysis of the study can be downloaded here. For more information on how Pluralsight is helping enterprises upskill technologists in the most effective way possible, visit pluralsight.com/customer-stories. | https://vmblog.com/archive/2022/06/15/pluralsight-study-finds-48-of-tech-workers-consider-changing-jobs-due-to-lack-of-upskilling-resources.aspx |
Geneva [Switzerland], Jan 25 (ANI): Accelerated investment in upskilling and reskilling of workers can add at least 6.5 trillion dollars to global GDP, create 5.3 million new jobs by 2030 and help develop more inclusive and sustainable economies worldwide, according to a World Economic Forum report published on Monday.
The report titled 'Upskilling for Shared Prosperity' and authored in collaboration with PwC finds that accelerated skills enhancement will ensure that people have the experience and skills needed for the jobs created by the Fourth Industrial Revolution -- boosting global productivity by 3 per cent on average by 2030.
The newly created jobs will be those that are complemented and augmented -- rather than replaced -- by technology.
"Even before Covid-19, the rise of automation and digitisation was transforming global job markets, resulting in the very urgent need for large-scale upskilling and reskilling. Now, this need has become even more important," said Bob Moritz, Global Chairman of PwC.
"Upskilling is key to stimulating the economic recovery from Covid-19 and creating more inclusive and sustainable economies. To make this happen, greater public-private collaboration will be key," he said.
Saadia Zahidi, Managing Director of World Economic Forum, said millions of jobs have been lost through the pandemic while accelerating automation and digitisation mean that many are unlikely to return.
"We need new investments in the jobs of tomorrow, the skills people need for moving into these new roles and education systems that prepare young people for the new economy and society," she said. "There is no time to waste." | https://www.aninews.in/news/business/investment-in-upskilling-can-boost-global-gdp-by-65-trillion-by-2030-wef20210125152144/ |
GovCon Expert Chuck Brooks has published his latest article as a member of Executive Mosaic’s GovCon Expert program on Friday. Brooks discussed the federal government’s development of its cyber workforce, upskilling and reskilling federal workers as well as addressing the ongoing challenge of cybersecurity workforce requirements. You can read Chuck Brooks’ latest GovCon Expert article below:
Innovation is Strengthening the Federal Cybersecurity Workforce
By Chuck Brooks
Several years back, the federal government recognized that training the next generation of cybersecurity technicians and subject matter experts should be a priority. The government realized that the dearth of qualified cybersecurity workers, combined with the expanding risk environment, required more resources and investment. Thankfully, there has been significant progress in meeting the cybersecurity workforce challenge through a variety of innovative programs.
The shortage of a trained cybersecurity workforce is a global problem. The firm Cybersecurity Ventures estimates there will be 3.5 million unfilled global cybersecurity jobs by 2021. The competition for filling those slots in the private sector among companies is fierce. It is even more of an issue for the public sector who cannot match the compensation packages paid by companies for top talent.
For most of the past decade, the path to competing for quality cybersecurity workers was based on the appeal of public service and mission. But clearly that was not enough to ensure recruitment and the stability of programs, so a new strategy had to be undertaken.
In July 2016, the White House issued a document “Strengthening the Federal Cybersecurity Workforce” that highlights a framework necessary to best recruit, train, and maintain a skilled federal cybersecurity workforce. These elements include:
1) Expand the Cybersecurity Workforce through Education and Training
2) Recruit the Nation’s Best Cyber Talent for Federal Service
3) Retain and Develop Highly Skilled Talent
4) Identify Cybersecurity Workforce needs.
The critical importance of continuing the strengthening of government cybersecurity was recently spelled out by Cyberspace Solarium Commission Co-Chairs Senator Angus King (I-ME) and Congressman Mike Gallagher (R-WI).
“Without talented cyber professionals working the keyboard, all the cutting-edge technology in the world cannot protect the United States in cyberspace. If we do not act now to ensure that our talented and experienced workforce continues to grow, we are leaving our country vulnerable to future cyber-attacks.”
Common cornerstones of the cybersecurity workforce enactments and mission are public/private partnering and collaboration, retooling and upskilling of employees, and identifying future workforce needs.
Public/Private Partnering and Collaboration
The White House “Strengthening the Federal Cybersecurity Workforce” document was a starting point and since then there has been many positive developments in investment, focus, and creativity. One area that was prioritized, was public/private cybersecurity cooperation for building and sharing the workforce.
A public/private collaborative effort has brought together industry, academia, Congress, and federal and state governments to establish working guidelines and to cultivate and train the next generation of cybersecurity technicians. This included a strategy to establish incentives for public service such as paid education/free tuition, higher federal worker pay authority, and part-time employee rotational sharing arrangements between industry and government.
The Cybersecurity Talent Initiative by the Partnership for Public Service is a great example of a public-private partnership aimed at recruiting and training a quality cybersecurity workforce. The program is a selective opportunity for students in cybersecurity-related fields to gain vital public and private sector work experience and even receive up to $75,000, inclusive of tax, in student loan assistance. The program is new; on Sept. 2nd, the Partnership for Public Service announced that it had placed its first class of cyber-trained experts in federal service.
Retooling and Upskilling Employees
The Trump administration has made upskilling and reskilling the federal workforce a priority in its 2021 budget proposal. In a February 2020 press conference Margaret Weichert, Deputy Director for Management at the Office of Management and Budget (OMB), announced that the administration plans to focus on training and reskilling opportunities in the data science and analytics, cybersecurity, IT and project management fields. She stated that “we have made a firm commitment to reskilling up to and possibly exceeding 400,000 federal workers, focused on first and foremost jobs where we have a hard time filling roles.”
An early working model in government for retooling and upskilling workers was the Department of Homeland Security's (DHS) Cybersecurity Veterans Hiring Pilot. The pilot was designed to build the department's cyber workforce and enhance opportunities for veterans to continue to serve our country in cybersecurity.
In an article I wrote for Homeland Security Today, I suggested extending the proven veterans hiring model to include a cybersecurity program for Native Americans. “Investment by government, industry, and academia in training Native Americans in an accelerated cybersecurity curriculum combined with real-world experience via internships and fellowships would bring high dividends to cyber readiness down the line.
At the same time, it would bolster the nation’s pipeline for skilled digital workers. The further engagement of Native American tribal partners who have a strong, proven heritage of dedication and service to country will be a blessing to the future of homeland security.”
That veterans hiring model could also be expanded and enhanced to include outreach to economically depressed areas (utilizing HUB Zones) to create skilled workers for the federal cybersecurity workforce. An investment in training those in economically depressed areas in an accelerated cybersecurity curriculum — combined with real-world experience through internships and fellowships — would yield high dividends. At the same time, it would bolster the nation's pipeline for skilled digital workers.
Retooling and cyber skill training efforts are not limited to the civilian side of government. The Quantum Leap program is a government initiative designed to “up-skill and re-skill” the Army's IT and cyber force. Currently, the Army employs more than 15,000 IT professionals, and the Quantum Leap program will re-code and re-skill close to 1,000 existing IT positions by fiscal year 2023.
There are a variety of initiatives in government agencies to continue to retool workforces to address technical cybersecurity challenges and the new operating environment of our digitally connected world. The government should continue to invest in grant and fellowship programs that will support specialized employee training (in addition to their salaries) in cybersecurity research & development at DHS, DoD, NASA, other federal agencies, the Intelligence Community, and the National Labs.
Identifying Future Workforce Requirements
Identifying cybersecurity workforce requirements is an ongoing challenge, as the rapid assimilation of new technologies such as artificial intelligence and machine learning makes it difficult for the public sector to keep up with tech trends. The Cybersecurity Information Sharing Act of 2015 (CISA) directed DHS along with other agencies to identify cyber-related positions in the federal workforce. OMB (in consultation with DHS) was directed to produce a report identifying the critical workforce cyber needs across all federal agencies.
OMB is on the right track in identifying current gaps and it is reflected in annual agency cybersecurity scorecards. It is important to have transparency and accountability to ensure a forward looking “future ready” workforce that will be able to forecast and mitigate gaps before they arise.
Information sharing and collaboration are at the core of identifying and addressing workforce needs. The National Institute for Standards and Technology (NIST) has expanded the role and activities of the National Initiative for Cybersecurity Education (NICE). The mission of NICE is to energize and promote a robust network and an ecosystem of cybersecurity education, training, and workforce development. NICE has spearheaded conferences, events (on location and virtually), and has provided excellent resources that have made a great impact on the workforce ecosystem.
Government is on the right track in investing in the cybersecurity workforce and innovative programs are providing dividends. New ideas and solutions are continually needed in the important challenge to help us be more cyber-safe.
Public/private partnering and collaboration is essential to continue the momentum. Government and industry need to continue to work closely together to cultivate and train the next generation of cybersecurity technicians. Retooling and upskilling of employees can help mitigate cybersecurity workforce shortages and build new leadership expertise. Identifying cybersecurity workforce requirements will become even more of a necessity as digital connectivity grows exponentially and emerging cyber technologies permeate the evolving landscape.
Strengthening the federal government cybersecurity workforce will always be a process and mission.
GovCon Expert Chuck Brooks is a globally recognized thought leader and subject matter expert Cybersecurity and Emerging Technologies. He is President of Brooks Consulting International, a government relations and marketing firm focused on cybersecurity and emerging technologies. LinkedIn named Chuck as one of “The Top 5 Tech People to Follow on LinkedIn.”
He was named by Thompson Reuters as a “Top 50 Global Influencer in Risk, Compliance,” and by IFSEC as the “#2 Global Cybersecurity Influencer.” Chuck is a two-time Presidential appointee and has held leadership roles in several Fortune 500 companies. He is Adjunct Faculty at Georgetown University where he teaches courses in cybersecurity, homeland security, emerging tech, and applied intelligence. | https://www.govconwire.com/2020/09/govcon-expert-chuck-brooks-innovation-is-strengthening-the-federal-cybersecurity-workforce/ |
# Rare events
Rare or extreme events are events that occur with low frequency; the term often refers to infrequent events that have widespread impact and that might destabilize systems (for example, stock markets, ocean wave intensity, optical fibers, or society). Rare events encompass natural phenomena (major earthquakes, tsunamis, hurricanes, floods, asteroid impacts, solar flares, etc.), anthropogenic hazards (warfare and related forms of violent conflict, acts of terrorism, industrial accidents, financial and commodity market crashes, etc.), as well as phenomena for which natural and anthropogenic factors interact in complex ways (epidemic disease spread, global warming-related changes in climate and weather, etc.).
## Overview
Rare or extreme events are discrete occurrences of infrequently observed events. Despite being statistically improbable, such events are plausible insofar as historical instances of the event (or a similar event) have been documented. Scholarly and popular analyses of rare events often focus on those events that could be reasonably expected to have a substantial negative impact on a society—either economically or in terms of human casualties (typically, both). Examples of such events might include an 8.0+ Richter magnitude earthquake, a nuclear incident that kills thousands of people, or a 10%+ single-day change in the value of a stock market index.
## Modeling and analysis
Rare event modeling (REM) refers to efforts to characterize the statistical distribution parameters, generative processes, or dynamics that govern the occurrence of statistically rare events, including but not limited to high-impact natural or human-made catastrophes. Such “modeling” may include a wide range of approaches, including, most notably, statistical models derived from historical event data and computational software models that attempt to simulate rare event processes and dynamics. REM also encompasses efforts to forecast the occurrence of similar events over some future time horizon, which may be of interest for both scholarly and applied purposes (e.g., risk mitigation and planning).
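As a concrete and deliberately simplified illustration of the statistical side of REM, the sketch below applies the peaks-over-threshold approach from extreme value theory: exceedances over a high threshold are fitted with a generalized Pareto distribution, which is then used to estimate the probability of an event exceeding an extreme level. The synthetic magnitudes, the 95th-percentile threshold, and the query level are all placeholder assumptions rather than values taken from any of the data sets listed below.

```python
import numpy as np
from scipy import stats

# Hypothetical event magnitudes (e.g., flood losses or quake magnitudes);
# in practice these would come from one of the data sets listed below.
rng = np.random.default_rng(0)
magnitudes = rng.lognormal(mean=1.0, sigma=0.8, size=5000)

# Peaks-over-threshold: keep only exceedances above a high threshold u.
u = np.quantile(magnitudes, 0.95)            # 95th percentile as the threshold
exceedances = magnitudes[magnitudes > u] - u

# Fit a generalized Pareto distribution (GPD) to the exceedances.
# floc=0 because exceedances are measured relative to the threshold.
shape, _, scale = stats.genpareto.fit(exceedances, floc=0)

# Probability that a new observation exceeds an extreme level x > u:
# P(X > x) = P(X > u) * P(X - u > x - u | X > u)
p_exceed_u = np.mean(magnitudes > u)
x = u + 3 * scale                            # an arbitrary extreme level to query
tail_prob = p_exceed_u * stats.genpareto.sf(x - u, shape, loc=0, scale=scale)

print(f"threshold u = {u:.2f}, GPD shape = {shape:.3f}, scale = {scale:.3f}")
print(f"estimated P(X > {x:.2f}) = {tail_prob:.2e}")
```

The same peaks-over-threshold recipe underlies many natural-hazard and operational-risk analyses, although real studies must also deal with clustering, non-stationarity, and measurement issues in the historical record.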
## Relevant data sets
In many cases, rare and catastrophic events can be regarded as extreme-magnitude instances of more mundane phenomena. For example, seismic activity, stock market fluctuations, and acts of organized violence all occur along a continuum of extremity, with more extreme-magnitude cases being statistically infrequent. Therefore, rather than viewing rare event data as its own class of information, data concerning “rare” events often exists as a subset of data within a broader parent event class (e.g., a seismic activity data set would include instances of extreme earthquakes, as well as data on much lower-intensity seismic events).
The following is a list of data sets focusing on domains that are of broad scholarly and policy interest, and where “rare” (extreme-magnitude) cases may be of particularly keen interest due to their potentially devastating consequences. Descriptions of the data sets are extracted from the source websites or providers.
### Conflicts
- **Armed Conflict Database** (https://acd.iiss.org/): The Armed Conflict Database (ACD) monitors armed conflicts worldwide, focusing on political, military and humanitarian trends in current conflicts, whether they are local rebellions, long-term insurgencies, civil wars or inter-state conflicts. In addition to the comprehensive historical background for each conflict, the weekly timelines and the monthly updates, the statistics, data and reports in the ACD date back to 1997.
- **Armed Conflict Location & Event Data Project** (http://www.acleddata.com/data/): The Armed Conflict data set covers events occurring in Africa from 1997 to present. This data set includes the event date, longitude, latitude, and fatality magnitude scale.
- **Militarized Interstate Disputes** (https://web.archive.org/web/20141219135756/http://www.correlatesofwar.org/COW2%20Data/MIDs/MID40.html): The Militarized Interstate Disputes (MID) data set “provides information about conflicts in which one or more states threaten, display, or use force against one or more other states between 1816 and 2010.”
- **Political Instability Task Force (PITF) State Failure Problem Set, 1955–2013** (http://www.systemicpeace.org/inscrdata.html): The Political Instability Task Force (PITF) State Failure Problem Set is part of a larger armed conflict database produced by the Center for Systemic Peace from open source data. Data in PITF are available on various subsets: ethnic war, revolutionary war, adverse regime change, and genocide or politicide.
- **Rand Database of Worldwide Terrorism Incidents** (https://www.rand.org/nsrd/projects/terrorism-incidents.html): The Rand Database of Worldwide Terrorism Incidents covers terrorism incidents worldwide from 1968 through 2009 but is not currently active. The data set includes a date, location (city, country), perpetrator, detailed description, and number of injuries and fatalities.
- **Major Episodes of Political Violence** (http://www.systemicpeace.org/inscrdata.html): The Major Episodes of Political Violence data set is part of a larger armed conflict database produced by the Center for Systemic Peace. Political Violence data include annual, cross-national, time-series data on interstate, societal, and communal warfare magnitude scores (independence, interstate, ethnic, and civil; violence and warfare) for all countries.
### Natural disasters
- **Advanced National Seismic System (ANSS) Comprehensive Earthquake Catalog (ComCat)** (https://earthquake.usgs.gov/earthquakes/search/): The ANSS Comprehensive Catalog (ComCat) contains earthquake source parameters (e.g. hypocenters, magnitudes, phase picks and amplitudes) and other products (e.g. moment tensor solutions, macroseismic information, tectonic summaries, maps) produced by contributing seismic networks.
- **Dartmouth Flood Observatory** (http://floodobservatory.colorado.edu/): Dartmouth Flood Observatory uses “Space-based Measurement and Modeling of Surface Water” to track floods and uses news reporting to validate the results. This data set includes the country, start date, end date, affected square km, and cause of the flood. Additionally, this data set includes many magnitude scales, such as: dead, displaced, severity, damage, and flood magnitude.
- **U.S. National Flood Insurance Program** (http://www.fema.gov/policy-claim-statistics-flood-insurance/policy-claim-statistics-flood-insurance/policy-claim-13): The U.S. National Flood Insurance Program data set contains a data table detailing flooding events with 1,500 or more paid losses from 1978 to the current month and year. The table includes the name and year of the event, the number of paid losses, the total amount paid and the average payment per loss.
- **FAOSTAT (Famine)** (http://faostat.fao.org/): The FAOSTAT data set was developed by the Statistics Division of the Food and Agricultural Organization of the United Nations (FAO). It is an active, global data set that covers famine events from 1990–2013.
- **Global Volcanism Program** (http://www.volcano.si.edu/search_eruption.cfm): “Volcanoes of the World is a database describing the physical characteristics of volcanoes and their eruptions.” The data contain a start date, end date, volcano name (which can be used to look up the location) and VEI magnitude scale.
- **International Disaster Database** (http://www.emdat.be/): EM-DAT contains essential core data on the occurrence and effects of over 18,000 mass disasters in the world from 1900 to present. The database is compiled from various sources, including UN agencies, non-governmental organizations, insurance companies, research institutes and press agencies.
- **NOAA Natural Hazards** (http://www.ngdc.noaa.gov/hazard/): The Natural Hazards dataset is part of the National Geophysical Data Center run by the U.S. National Oceanic and Atmospheric Administration (NOAA). The National Geophysical Data Center archives and assimilates tsunami, earthquake and volcano data to support research, planning, response and mitigation. Long-term data, including photographs, can be used to establish the history of natural hazard occurrences and help mitigate against future events.
### Diseases
- **FluView** (http://gis.cdc.gov/grasp/fluview/fluportaldashboard.html): FluView is produced by the U.S. Centers for Disease Control (CDC) and provides weekly influenza surveillance information in the United States by census area and includes the number of people tested and number of positive cases.
- **Global Health Atlas**: The Global Health Atlas contains data on four communicable diseases: cholera, influenza, polio, and yellow fever. It is an active, global data set that covers number of cases and fatalities due to these infectious diseases.
### Others
- **Aviation Safety Database** (http://aviation-safety.net/database/): The Aviation Safety Database covers aviation safety incidents around the world. Every incident reports the location of the incident, the departing and arriving airports, number of fatalities and type of airplane involved in the incident.
- **Database of Radiological Incidents and Related Events** (http://www.johnstonsarchive.net/nuclear/radevents/): The Database of Radiological Incidents and Related Events covers events that resulted in acute radiation exposures to humans sufficient to cause casualties. The database includes the date, location, number of deaths, number of injuries and highest radiation dose recorded.
- **Dow Jones Averages** (http://www.djaverages.com/?go=industrial-index-data): Dow Jones Averages includes data and information on some of the world's most renowned and widely cited market indexes. Here you'll find rich historical data, robust analytical tools and exclusive educational content on the Dow Jones Industrial Average and a host of related indices.
The primary mission of the Financial Trading Room is to provide an experiential learning environment in which students can observe the interaction of market concepts and behaviors through simulated trading and analyses. Based on the research and observations of faculty members across the country, it has been found that the immediacy of stock and global news information adds realism to the classroom.
Through team-based and individual study, students gain a greater understanding of how financial markets respond to new information. As students experience the impact of global currency fluctuations, variances in governmental policy, and corporate and individual decision-making, the sense of a "global economy" expands beyond that of a textbook discussion to one that is more attuned with the way 21st century students learn.
From a research perspective, the Financial Trading Room offers broad opportunities for faculty and student collaborations. The ability to quickly access and assimilate current and historical financial data significantly supports quantitative faculty research, while also providing learning opportunities for students who assist in extracting the required information. Further, access to commodity and financial data creates opportunities for interdisciplinary collaborations at the undergraduate and graduate levels.
The Trading Room is located in Room 213 Craig Hall and will accommodate a maximum class-size of 32 students.
FactSet provides information on thousands of public and private companies and private equity firms, including company descriptions, business segment size, market performance, summary financials, events, deal history, ownership, private equity holdings, and key executives.
FactSet Global Historical Information provides 20 years of historical prices, financials, ratios, filings, and consensus and detailed estimates.
FactSet Corporate Ownership gives institutional, hedge fund, and insider holdings, historical share movements, peer reports, ownership statistics, contact details, and investor regions analysis.
The Financial Trading System (FTS) provides real-time data, enabling the study of risk management with customizable analytics. FTS Interactive Markets create a training platform within the classroom and provide innovative teaching modules for company valuation and price discovery.
The Bloomberg Terminal brings together real-time data on every market, breaking news, in-depth research, powerful analytics, communications tools and world-class execution capabilities in one fully integrated solution.
The Propagation of Regional Shocks in Housing Markets: Evidence from Oil Price Shocks in Canada
Shocks to the demand for housing that originate in one region may seem important only for that regional housing market. We provide evidence that such shocks can also affect housing markets in other regions. Our analysis focuses on the response of Canadian housing markets to oil price shocks.

Did U.S. Consumers Respond to the 2014–2015 Oil Price Shock? Evidence from the Consumer Expenditure Survey
The impact of oil price shocks on the U.S. economy is a topic of considerable debate. In this paper, we examine the response of U.S. consumers to the 2014–2015 negative oil price shock using representative survey data from the Consumer Expenditure Survey.

Modeling Fluctuations in the Global Demand for Commodities
It is widely understood that the real price of globally traded commodities is determined by the forces of demand and supply. One of the main determinants of the real price of commodities is shifts in the demand for commodities associated with unexpected fluctuations in global real economic activity.

Is the Discretionary Income Effect of Oil Price Shocks a Hoax?
The transmission of oil price shocks has been a question of central interest in macroeconomics since the 1970s. There has been renewed interest in this question after the large and persistent fall in the real price of oil in 2014–16. In the context of this debate, Ramey (2017) makes the striking claim that the existing literature on the transmission of oil price shocks is fundamentally confused about the question of how to quantify the effect of oil price shocks.

On the Tail Risk Premium in the Oil Market
This paper shows that changes in market participants' fear of rare events implied by crude oil options contribute to oil price volatility and oil return predictability. Using 25 years of historical data, we document economically large tail risk premia that vary substantially over time and significantly forecast crude oil futures and spot returns.

Factors Behind the 2014 Oil Price Decline (November 16, 2017)
Oil prices have declined sharply over the past three years. While both supply and demand factors played a role in the large oil price decline of 2014, global supply growth seems to have been the predominant force. The most important drivers were likely the surprising growth of US shale oil production, the output decisions of the Organization of the Petroleum Exporting Countries and the weaker-than-expected global growth that followed the 2009 global financial crisis.

The Global Benefits of Low Oil Prices: More Than Meets the Eye
Between mid-2014 and early 2016, oil prices fell by roughly 65 per cent. This note documents the channels through which this oil price decline is expected to affect the global economy. One important and immediate channel is through higher expenditures, especially in net oil-importing countries.

Low for Longer? Why the Global Oil Market in 2014 Is Not Like 1986
In the second half of 2014, oil prices experienced a sharp decline, falling more than 50 per cent between June 2014 and January 2015. A cursory glance at this oil price crash suggests similarities to developments in 1986, when the price of oil declined by more than 50 per cent, initiating an episode of relatively low oil prices that lasted for more than a decade.

Crude Oil Prices and Fixed-Asset Cash Spending in the Oil and Gas Industry: Findings from VAR Models
This note investigates the relationship between crude oil prices and investment in the energy sector. We employ a set of vector autoregression (VAR) models (unconstrained VAR, vector error-correction and Bayesian VAR) to formalize the relationship between the West Texas Intermediate (WTI) benchmark and fixed-asset cash spending in the oil and gas extraction and support activities sector of the Canadian economy. (A minimal sketch of the basic VAR approach appears after this list.)

A General Approach to Recovering Market Expectations from Futures Prices with an Application to Crude Oil
Futures markets are a potentially valuable source of information about price expectations. Exploiting this information has proved difficult in practice, because time-varying risk premia often render the futures price a poor measure of the market expectation of the price of the underlying asset.
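For readers curious about what the most basic version of the VAR approach mentioned in the fixed-asset cash spending note looks like in practice, here is a minimal sketch using statsmodels. The two series are synthetic placeholders standing in for oil-price growth and growth in oil-and-gas capital spending; the actual Bank of Canada analyses use real WTI and Canadian investment data and richer specifications (vector error-correction and Bayesian VARs).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic quarterly growth rates standing in for oil prices (d_oil) and
# oil-and-gas fixed-asset cash spending (d_spend). Purely illustrative.
rng = np.random.default_rng(1)
n = 160
d_oil = rng.normal(0.0, 0.05, n)            # oil price growth shocks
d_spend = np.zeros(n)
for t in range(2, n):
    # spending growth responds to oil price growth with a two-quarter lag
    d_spend[t] = 0.3 * d_spend[t - 1] + 0.5 * d_oil[t - 2] + rng.normal(0.0, 0.02)

data = pd.DataFrame({"d_oil": d_oil, "d_spend": d_spend})

# Fit an unconstrained VAR, letting the AIC choose the lag order.
results = VAR(data).fit(maxlags=6, ic="aic")
print(results.summary())

# Impulse responses over 8 quarters: reaction of spending growth to an oil shock.
irf = results.irf(8)
response = irf.irfs[:, data.columns.get_loc("d_spend"), data.columns.get_loc("d_oil")]
print("response of d_spend to a d_oil shock:", np.round(response, 3))
```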
This website includes certain statements that constitute “forward-looking information” within the meaning of applicable securities law, including without limitation, the Company’s plans for its properties/projects, other statements relating to the technical, financial and business prospects of the Company, completing phase 1B hydrometallurgical study, initiation of flotation pilot plant processing, update mineral resource estimate, additional drilling, start baseline environmental studies, and other matters.
Forward-looking statements address future events and conditions and are necessarily based upon a number of estimates and assumptions. These statements relate to analyses and other information that are based on forecasts of future results, estimates of amounts not yet determinable and assumptions of management. Any statements that express or involve discussions with respect to predictions, expectations, beliefs, plans, projections, objectives, assumptions or future events or performance (often, but not always, using words or phrases such as “expects” or “does not expect”, “is expected”, “anticipates” or “does not anticipate”, “plans”, “estimates” or “intends”, or stating that certain actions, events or results “may”, “could”, “would”, “might” or “will” be taken, occur or be achieved), and variations of such words, and similar expressions are not statements of historical fact and may be forward-looking statements. Forward-looking statement are necessarily based upon a number of factors that, if untrue, could cause the actual results, performances or achievements of the Company to be materially different from future results, performances or achievements express or implied by such statements. Such statements and information are based on numerous assumptions regarding present and future business strategies and the environment in which the Company will operate in the future, including the price of metals, anticipated costs and the ability to achieve goals, that general business and economic conditions will not change in a material adverse manner, that financing will be available if and when needed and on reasonable terms, and that third party contractors, equipment and supplies and governmental and other approvals required to conduct the Company’s planned exploration activities will be available on reasonable terms and in a timely manner. While such estimates and assumptions are considered reasonable by the management of the Company, they are inherently subject to significant business, economic, competitive and regulatory uncertainties and risks.
Forward-looking statements are subject to a variety of risks and uncertainties, which could cause actual events, level of activity, performance or results to differ materially from those reflected in the forward-looking statements, including, without limitation: (i) risks related to copper, uranium, rare earth elements, and other commodity price fluctuations; (ii) risks and uncertainties relating to the interpretation of exploration results; (iii) risks related to the inherent uncertainty of exploration and cost estimates and the potential for unexpected costs and expenses; (iv) that resource exploration and development is a speculative business; (v) that the Company may lose or abandon its property interests or may fail to receive necessary licenses and permits; (vi) that environmental laws and regulations may become more onerous; (vii) that the Company may not be able to raise additional funds when necessary; (viii) the possibility that future exploration, development or mining results will not be consistent with the Company’s expectations; (ix) exploration and development risks, including risks related to accidents, equipment breakdowns, labour disputes or other unanticipated difficulties with or interruptions in exploration and development; (x) competition; (xi) the potential for delays in exploration or development activities or the completion of geologic reports or studies; (xii) the uncertainty of profitability based upon the Company’s history of losses; (xiii) risks related to environmental regulation and liability; (xiv) risks associated with failure to maintain community acceptance, agreements and permissions (generally referred to as “social licence”), including local First Nations; (xv) risks relating to obtaining and maintaining all necessary government permits, approvals and authorizations relating to the continued exploration and development of the Company’s projects; (xvi) risks related to the outcome of legal actions; (xvii) political and regulatory risks associated with mining and exploration; (xix) risks related to current global financial conditions; and (xx) other risks and uncertainties related to the Company’s prospects, properties and business strategy. These risks, as well as others, could cause actual results and events to vary significantly.
Factors that could cause actual results to differ materially from those in forward looking statements include, but are not limited to, continued availability of capital and financing and general economic, market or business conditions, the loss of key directors, employees, advisors or consultants, adverse weather conditions, increase in costs, equipment failures, litigation, failure of counterparties to perform their contractual obligations and fees charged by service providers. Investors are cautioned that forward-looking statements are not guarantees of future performance or events and, accordingly are cautioned not to put undue reliance on forward-looking statements due to the inherent uncertainty of such statements. The forward-looking statements included in this website are made as of the date hereof and the Company disclaims any intention or obligation to update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, except as expressly required by applicable securities legislation.
Scientific and Technical Disclosure
The scientific and technical information contained in this website has been reviewed and approved by Kristopher J. Raffle, P.Geo. (BC) Principal and Consultant of APEX Geoscience Ltd. of Edmonton, AB, who is a Qualified Person as defined under National Instrument 43-101—Standards of Disclosure for Mineral Projects.
Forward Looking Statements or Information Related to Exploration:
Relating to exploration, the identification of exploration targets and any implied future investigation of such targets on the basis of specific geological, geochemical and geophysical evidence or trends are future-looking and subject to a variety of possible outcomes which may or may not include the discovery, or extension, or termination of mineralization. Further, areas around known mineralized intersections or surface showings may be marked by wording such as “open”, “untested”, “possible extension” or “exploration potential” or by symbols such as “?”. Such wording or symbols should not be construed as a certainty that mineralization continues or that the character of mineralization (e.g. grade or thickness) will remain consistent from a known and measured data point. The key risks related to exploration in general are that chances of identifying economical reserves are extremely small.
The technical content of this website have been reviewed and approved by Kris Raffle, P.Geo., a Director of the Company and a Qualified Person as defined by National Instrument 43-101.
Market & Industry Data
The information contained herein includes market and industry data that has been obtained from third party sources, including industry publications. The Company believes that its industry data is accurate and that its estimates and assumptions are reasonable, but there is no assurance as to the accuracy or completeness of this data. Third party sources generally state that the information contained therein has been obtained from sources believed to be reliable, but there is no assurance as to the accuracy or completeness of included information. Although the data is believed to be reliable, the Company has not independently verified any of the data from third party sources referred to in this website or ascertained the underlying economic assumptions relied upon by such sources.
Not for Distribution; No Offering
This is for information purposes only and may not be reproduced or distributed to any other person or published, in whole or part, for any purpose whatsoever. This does not constitute a general advertisement or general solicitation or an offer to sell or a solicitation to buy any securities in any jurisdiction. Such an offer can only be made by prospectus or other authorized offering document. This website and materials or fact of their distribution or communication shall not form the basis of, or be relied on in connection with any contract, commitment or investment decision whatsoever in relation thereto. No securities commission or similar authority in Canada or any other jurisdiction has in any way passed upon the adequacy or accuracy of the information contained herein. You should not rely upon this document in evaluating the merits of investing in our securities or for understanding our business. | https://defensemetals.com/disclaimer/ |
Synopsis: How Correlated Weather Fluctuations Take Down Power Grids
Intermittent power generation from renewable sources such as wind and solar may test the reliability of a power grid in ways that aren’t fully understood. Now, Tommaso Nesti of the National Research Institute for Mathematics and Computer Science, Netherlands, and colleagues have drawn on concepts from statistical physics to predict how such power grids might respond to randomly fluctuating power injections.
The researchers’ model of power grids includes information about power line capacity, network topology, and historical data on power loads and generation. They predicted potential failures in the network using large deviations theory, a mathematical framework for analyzing rare events and the way they occur. This analysis created a ranking of which lines are most likely to fail for a given set of grid operating parameters and weather conditions.
To validate their theoretical findings, the team used real data from a power transmission network in Germany. The researchers first estimated the correlations of power injections resulting from weather fluctuations. They then used these correlations to identify the power production pattern that most likely leads to the failure of any given line. One key finding is that failures don’t necessarily result from large fluctuations in nearby power injections but rather from “summing up” many smaller unusual fluctuations spread across the network. They also found that failures can propagate in nonobvious ways. An initial line failure can cause stress and possibly the failure of other lines, even those far from the original point of failure, depending on the layout of the network and weather-induced correlations.
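To make the ranking idea concrete, here is a minimal Monte Carlo sketch (in Python) of how one might estimate and rank line-overload probabilities under correlated injections. This is not the authors' method — they use large deviations theory and real German transmission data — and the PTDF matrix, line capacities, mean injections and covariance below are invented placeholders.

import numpy as np

# Hypothetical 4-line, 3-bus (non-slack) network; rows of `ptdf` map net power
# injections at buses to flows on each line.
ptdf = np.array([
    [ 0.6, -0.2, -0.1],
    [ 0.3,  0.5, -0.2],
    [-0.1,  0.4,  0.6],
    [ 0.2,  0.1,  0.3],
])
capacity = np.array([1.2, 1.0, 1.5, 0.8])   # line limits (per unit)

# Correlated, weather-driven fluctuations around nominal injections.
mean_inj = np.array([0.5, -0.3, 0.4])
cov = np.array([
    [0.04, 0.02, 0.01],
    [0.02, 0.05, 0.02],
    [0.01, 0.02, 0.03],
])

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean_inj, cov, size=100_000)
flows = samples @ ptdf.T                      # flow on every line, per sample

# Empirical overload probability per line, then a ranking (most fragile first).
overload_prob = (np.abs(flows) > capacity).mean(axis=0)
for line in np.argsort(-overload_prob):
    print(f"line {line}: overload probability ~ {overload_prob[line]:.4f}")

Unlike a plain Monte Carlo estimate, the large deviations approach in the paper also identifies the most likely injection pattern behind each overload, which is what reveals the "many small fluctuations" effect described above.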
This research is published in Physical Review Letters.
–Christopher Crockett
Christopher Crockett is a freelance writer based in Montgomery, Alabama.
Emergent Failures and Cascades in Power Grids: A Statistical Physics Perspective
Tommaso Nesti, Alessandro Zocca, and Bert Zwart
Phys. Rev. Lett. 120, 258301 (2018)
Published June 21, 2018
Features
Arts & Culture: Turbulence in The Starry Night
Researchers analyzing Vincent van Gogh’s The Starry Night show that its swirling structures have turbulent properties matching those observed in the molecular clouds that give birth to stars.
Explaining Bursts of Attention on Social Media
Network theorists pinpoint the factors that help drive spiky Twitter activity during major events like the discovery of gravitational waves.
Seeing into the Moon’s Dark Patches
A team of graduate students working at the Lunar and Planetary Institute in Texas revives the technique of “boulder tracking” to explore the Moon’s unlit craters.
Announcements
Introducing Physical Review Research
The new open access journal will welcome the full spectrum of research of interest to the physics community.
Nobel Prize in Physics 2018
Prize recognizes Arthur Ashkin, Gérard Mourou, and Donna Strickland for developing laser tools that have led to new biophysics experiments and medical technologies.
Where Are They Now?
For our ten-year anniversary, the editors of Physics look back at some of the past research we have covered and ask: What’s become of it? | https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.120.258301 |
Hamilton, Bermuda, September 5, 2014
Press release from Nordic American Offshore Ltd.
Nordic American Offshore Ltd (NAO) has entered into two new contracts with a major oil-company in the UK for one year with a one year option.
This will secure employment for two of our vessels up to the end of 2015.
NAO has been in operation since the end of 2013. It has turned out to be a solid investment for its shareholders, including Nordic American Tankers Limited (NAT), which holds 17.1%. NAT has a realized and unrealized profit of about $20 million on its investment in NAO.
CAUTIONARY STATEMENT REGARDING FORWARD-LOOKING STATEMENTS
Matters discussed in this press release may constitute forward-looking statements. The Private Securities Litigation Reform Act of 1995 provides safe harbor protections for forward-looking statements in order to encourage companies to provide prospective information about their business. Forward-looking statements include statements concerning plans, objectives, goals, strategies, future events or performance, and underlying assumptions and other statements, which are other than statements of historical facts.
The Company desires to take advantage of the safe harbor provisions of the Private Securities Litigation Reform Act of 1995 and is including this cautionary statement in connection with this safe harbor legislation. The words “believe,” “anticipate,” “intend,” “estimate,” “forecast,” “project,” “plan,” “potential,” “may,” “should,” “expect,” “pending” and similar expressions identify forward-looking statements.
The forward-looking statements in this press release are based upon various assumptions, many of which are based, in turn, upon further assumptions, including without limitation, our management’s examination of historical operating trends, data contained in our records and other data available from third parties. Although we believe that these assumptions were reasonable when made, because these assumptions are inherently subject to significant uncertainties and contingencies which are difficult or impossible to predict and are beyond our control, we cannot assure you that we will achieve or accomplish these expectations, beliefs or projections. We undertake no obligation to update any forward-looking statement, whether as a result of new information, future events or otherwise.
Important factors that, in our view, could cause actual results to differ materially from those discussed in the forward-looking statements include the strength of world economies and currencies, general market conditions, including fluctuations in charter rates and vessel values, changes in demand in the PSV market, as a result of changes in the general market conditions of the oil and natural gas industry which influence charter hire rates and vessel values, demand in platform supply vessels, our operating expenses, including bunker prices, dry docking and insurance costs, governmental rules and regulations or actions taken by regulatory authorities as well as potential liability from pending or future litigation, general domestic and international political conditions, potential disruption of shipping routes due to accidents or political events, the availability of financing and refinancing, vessel breakdowns and instances of off-hire and other important factors described from time to time in the reports filed by the Company with the Securities and Exchange Commission.
Contacts:
Tor-Øyvind Bjørkli, Chief Executive Officer
Nordic American Offshore Ltd.
Tel: +47 21 99 24 81 or +47 90 62 70 14

Jacob Ellefsen, Manager, IR and Research
Nordic American Offshore Ltd.
Tel: +33 678 631 959 or +377 93 25 89 07

Herbjørn Hansson, Executive Chairman
Nordic American Offshore Ltd. | https://www.nat.bm/nordic-american-offshore-ltd-nysenao-two-new-contracts-through-2015/
While Coronavirus and its resultant economic meltdown are on everyone’s mind these days, there is also a menacing development that will affect every living soul on the planet, and no one is even talking about it.
Its current manifestation relates to the COVID-19 epidemic, but its causes are rooted in the last financial meltdown, the 2008-09 Global Financial Crisis. It stems from governments' reaction to the GFC, and from how the failure to deal effectively with the underlying structural issues will be a major factor in the financial collapse of an economic alliance that was established over 30 years ago.
For whatever reason, it is the most hush-hush topic in financial journalism today.
What is the Black Swan Event?
A black swan is an extremely unpredictable and rare event with severe consequences. It can cause catastrophic damage to the economy by negatively impacting markets and investment, but even the use of the most robust modelling cannot prevent a black swan.
The term was popularized by Nassim Nicholas Taleb, a writer and former Wall Street trader, who wrote about the idea of a black swan event in a 2007 book, before the GFC. Taleb argued that because black swan events are impossible to predict due to their extreme rarity, yet have catastrophic consequences, it is important for people to always assume that one is possible.
The theory was developed to throw light on:
- The disproportionate role of high-impact, hard-to-predict, rare events that are beyond the realm of normal expectations in history, finance, and technology.
- The impossibility of computing the probability of rare, consequential events using scientific methods
- The psychological biases that make human beings individually and collectively blind to unprecedented situations and unaware of the massive role of rare events.
Historical Black Swan events
1. Asian Currency Crisis
This started in 1997 and struck most of East Asia, including Korea, China, and Thailand. In 15 months, the 30-share Sensex index tumbled 38%, and it took 8 months to recover from the bear market.
Source: Economicshelp
2. Dotcom tech bubble
In 2000, the dot-com bubble burst with a global domino effect. The tech-heavy NASDAQ index rose from 1,000 in 1995 to 5,048 in 2000 before the crash, and the S&P BSE Sensex sank 57% from its peak over 14 months. It took nearly 2 years for the markets to recover.
Source: WSI Market Data Group
3. Crash of 9/11
Terrorist attacks are among the most unpredictable events. Nobody foresaw the New York City attack of September 11, 2001, but the market reaction was predictable, as the economy was still healing from the dot-com crash. Markets were closed for several days to avoid chaos. The week ended with the Dow down 14% and the S&P down 11.6%; roughly $1.4 trillion of market capitalization was wiped off the books in a week.
Source: The New York Times
4. Global Financial Crisis
The most infamous financial meltdown started in September 2008. The biggest catalyst was the subprime mortgages that had been doled out to people with poor credit, inflating a housing bubble that eventually burst. This happened on the back of a booming economy and loose lending.
Source: The Balance
5. Sovereign Debt Crisis
The basic reason for this crisis was the enormous amount of public debt that several eurozone countries had accumulated and were unable to pay off.
Source: Google
Is 2020 the year of Black Swan?
The coronavirus pandemic, which has caused 100,000+ deaths worldwide, is an event that has hit economies globally. A black swan event is by definition unpredictable, but the pandemic was unprecedented only in the case of China; other countries had already seen it coming. For example, India had the chance to take steps before COVID-19 could enter the country. It was due to structural weaknesses that India's economy was hit so badly.
As this event was foreseeable, we can place it in the category of the white swan. Taleb wrote about COVID in an essay: “A global pandemic is a white swan – an event that is certain to occur at some point. Such pandemics are inevitable; they come as a result of the structure of the modern world, and their economic consequences will be even more serious as a result of increasing interconnectedness and exaggerated optimization."
Pandemics have recurred throughout history, which is why white swan events are something we need to prepare for.
How to handle the Black Swan Event?
How to invest
One of the most difficult problems an investor faces in handling a black swan crisis is duration blindness. Since there is no certainty around such events, it is difficult to estimate how long the crisis will last.
An investor can use various estimates from experts and still make room for errors in those forecasts. It is useful to stagger investments over a significant period. The selection of a time frame is subjective, but a longer horizon builds in a margin of safety.
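As a rough illustration of staggering, the sketch below splits a lump sum into equal monthly tranches instead of deploying it all at once; the amounts and horizon are arbitrary assumptions, not a recommendation.

# Hypothetical illustration: deploy capital in equal tranches over N months so
# that no single entry point dominates the outcome.
def staggered_plan(total_capital, months):
    tranche = round(total_capital / months, 2)
    plan = [tranche] * months
    plan[-1] = round(total_capital - tranche * (months - 1), 2)  # absorb rounding
    return plan

print(staggered_plan(120_000, 12))   # twelve monthly instalments of 10,000 each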
What to invest in
During a market crash, frothy valuations get eliminated, and investors can choose from a wide range of businesses that have come down to attractive levels. To create a robust portfolio, one must follow basic rules:
- Stay inside the circle of competence: In these markets, investors run a screener to check which stocks have corrected the most in the fall. Anchored to all-time-high prices, one can be tempted to dive into businesses one has not tracked previously. Not knowing what you are buying makes you vulnerable to losses. One must always stay within the limits of one's knowledge.
- Avoid companies with high financial leverage: Leveraged companies often run into liquidity trouble, and these businesses have to survive the downturn to make money for the investor. Also, when quality companies are available at attractive valuations, there needs to be a very high payoff to justify moving into riskier stocks. Unless the earnings trajectory is fairly visible, high-debt companies should be avoided.
- Buy robust business models: During a bull market, fragile businesses and companies with negative cash flow profiles can raise easy money. After a crash, investors who bought those stocks shift to companies with robust business models and strong balance sheets. Healthy cash flows and RoCE, scalability, brand franchise, and good leadership are important characteristics for choosing the right company.
Conclusion
To sum up, black swans are unexpected, high-impact events that are nearly impossible to predict. However, an investor can build a portfolio that is resilient to black swans. By understanding the fundamentals and the behaviour of market crashes, investors can even take advantage of these events. It is highly important to
- Be humble and stagger investments over a reasonable period.
- Be attentive in picking good-quality stocks.
In the end, it all comes down to how prepared one is before the disaster strikes, which it certainly will. And as we say, "Prevention is better than cure." | https://insider.finology.in/stock-market/black-swan-event
In standard voting procedures, random audits are one method for increasing election integrity. In the case of cryptographic (or end-to-end) election verification, random challenges are often used to establish that the tally was computed correctly. In both cases, a source of randomness is required. In two recent binding cryptographic elections, this randomness was drawn from stock market data. This approach allows anyone with access to financial data to verify the challenges were generated correctly and, assuming market fluctuations are unpredictable to some degree, the challenges were generated at the correct time. However the degree to which these fluctuations are unpredictable is not known to be sufficient for generating a fair and unpredictable challenge. In this paper, we use tools from computational finance to provide an estimate of the amount of entropy in the closing price of a stock. We estimate that for each of the 30 stocks in the Dow Jones industrial average, the entropy is between 6 and 9 bits per trading day. We then propose a straight-forward protocol for regularly publishing verifiable 128-bit random seeds with entropy harvested over time from stock prices. These "beacons" can be used as challenges directly, or as a seed to a deterministic pseudorandom generator for creating larger challenges. | https://eprint.iacr.org/2010/361 |
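A minimal sketch of such a price-based beacon, assuming an agreed publication date and ticker set, might look like the following; the tickers and prices are placeholders, and the actual protocol in the paper defines its own encoding and entropy-accumulation rules.

import hashlib

# Placeholder closing prices; in practice these would come from a published
# feed that every verifier can consult independently.
closes = {"STOCK_A": "191.23", "STOCK_B": "48.07", "STOCK_C": "305.61"}
date = "2010-06-30"

# Canonical encoding: fixed date plus tickers in sorted order, so every
# verifier hashes exactly the same byte string.
payload = date + "|" + "|".join(f"{t}:{closes[t]}" for t in sorted(closes))
seed = hashlib.sha256(payload.encode("utf-8")).digest()[:16]   # 128 bits

print(seed.hex())   # challenge seed, or input to a deterministic PRG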
Central bank action the past two weeks has been mixed for gold. The US Fed delayed lifting rates due to softer economic data (gold bullish), but the possibility of a rate hike in September still cannot be ruled out (bearish). The ECB did not expand QE (gold neutral, maybe bearish). The BoJ commented on the adverse consequences of a negative interest rate policy (gold bearish). Overall, in the past few weeks there is a rising sense that the global central banks are not as aggressive as the market was hoping for, and the market is starting to trade with a mini “taper tantrum”-like sentiment. Gold bullion, post Brexit, has been consolidating within an ABC corrective range with $1300 as firm support. Despite the back-up in rates and currencies, gold bullion's price action remains corrective. Gold closed this week at $1328 (+0.2%). We see $1300 as a key support level for bullion. Gold equities as measured by GDX corrected down to the $25 support level and re-traced about half of the decline before falling again in Friday's mass sell-off. GDX closed the week at $26.41 (-3.4%). We see $25 as key support for GDX and expect GDX to be more volatile and vulnerable than gold bullion over the next few weeks.
It was a rough week for many asset classes. After months of some of the lowest sustained realized volatility in market history, the markets broke out of their complacency on Friday. The drivers of this sell-off are once again the volatility-control-type funds (i.e. risk parity funds, volatility funds, CTAs, etc.). A few months ago, post Brexit, volatility fell and these funds began to increase their position sizes. However, once volatility picks up, they are programmed to start reducing their positions through trading algorithms, and their position sizes are enormous. They generally do not scale in or scale out. These volatility-control-type funds are estimated to be $0.8 to $1.0 trillion in AUM. A few years ago, correlation patterns were considered an interesting factoid with no real market implications. Today, with the proliferation of these vol-control funds, they have enormous market impact.
As a reminder, the market sell-off back in August and September of 2015 was created by these funds. The back-up in rates that occurred on Friday started when the ECB decided not to expand its QE program further. This caused German yields to back up, and US yields followed. Normally, equities and bonds have a 30-day average correlation around -0.7. Recently this correlation has been +0.35, the highest in the post-2008 world. Typically, if equities sell off, bond yields fall and gold rises (gold as a safe haven). In this unusually extreme dislocation, most assets (not just gold) will likely see price movements that may appear “unexplainable”. For gold, we expect this correlation-driven selling to be short lived (historically this has been the case; see chart below). As a reminder, the long-term macro forces of this gold bull market have not changed. We are still in a negative interest rate environment. Once the equity/bond correlation reverts to its normal range of -0.7, we would expect a significant rebound rally in gold equities in and around this correlation reversion. In the meantime, volatility will be very high and difficult to predict. We mentioned a few months ago that the post-Brexit period could see a change in correlation patterns; we just didn't know until now.
The Gold Team
Paul Wong, Jason Mayer, Maria Smirnova, and Shree Kargutkar
Sprott Asset Management LP
This is a chart of the rolling 30 day correlation between SPY and TLT. Periods of positive correlations are very rare and typically do not last very long. The 2013 period was the “Taper Tantrum”.
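For readers who want to reproduce a chart like this, a short pandas sketch is below. It assumes a local CSV named spy_tlt_closes.csv with date, spy_close and tlt_close columns — an assumption for illustration, not a file Sprott provides.

import pandas as pd

prices = pd.read_csv("spy_tlt_closes.csv", parse_dates=["date"], index_col="date")
returns = prices[["spy_close", "tlt_close"]].pct_change().dropna()

# Rolling 30-trading-day correlation of daily returns, as in the chart above.
rolling_corr = returns["spy_close"].rolling(window=30).corr(returns["tlt_close"])
print(rolling_corr.tail())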
This information is for information purposes only and is not intended to be an offer or solicitation for the sale of any financial product or service or a recommendation or determination by Sprott Global Resource Investments Ltd. that any investment strategy is suitable for a specific investor. Investors should seek financial advice regarding the suitability of any investment strategy based on the objectives of the investor, financial situation, investment horizon, and their particular needs. This information is not intended to provide financial, tax, legal, accounting or other professional advice since such advice always requires consideration of individual circumstances. The products discussed herein are not insured by the FDIC or any other governmental agency, are subject to risks, including a possible loss of the principal amount invested. Generally, natural resources investments are more volatile on a daily basis and have higher headline risk than other sectors as they tend to be more sensitive to economic data, political and regulatory events as well as underlying commodity prices. Natural resource investments are influenced by the price of underlying commodities like oil, gas, metals, coal, etc.; several of which trade on various exchanges and have price fluctuations based on short-term dynamics partly driven by demand/supply and nowadays also by investment flows. Natural resource investments tend to react more sensitively to global events and economic data than other sectors, whether it is a natural disaster like an earthquake, political upheaval in the Middle East or release of employment data in the U.S. Low priced securities can be very risky and may result in the loss of part or all of your investment. Because of significant volatility, large dealer spreads and very limited market liquidity, typically you will not be able to sell a low priced security immediately back to the dealer at the same price it sold the stock to you. In some cases, the stock may fall quickly in value. Investing in foreign markets may entail greater risks than those normally associated with domestic markets, such as political, currency, economic and market risks. You should carefully consider whether trading in low priced and international securities is suitable for you in light of your circumstances and financial resources. Past performance is no guarantee of future returns. Sprott Global, entities that it controls, family, friends, employees, associates, and others may hold positions in the securities it recommends to clients, and may sell the same at any time. | http://rockstone-research.de/index.php/en/markets-commodities/1805-Sprott-Gold-Commentary |
Hamilton, Bermuda, June 13, 2018
To shareholders and investors,
We have previously commented upon the relationships NAT has with important major customers.
We would now like to inform you that one of our three Samsung suezmax newbuildings, scheduled for delivery in August 2018, has obtained a TC contract. We have entered into a 3-year fixed time charter contract with the first-class charterer Equinor (formerly Statoil) of Norway.
The contract is expected to commence in the autumn of 2018. Over the years, NAT has taken delivery of several newbuildings from Samsung Shipyard in South Korea. The contract has a base rate of $21,000 per day, producing positive cashflow and earnings. The time charter includes two optional periods that could extend the TC contract into 2023.
Major oil and gas companies, including oil traders both in the West and the East, are prioritized customer groups for NAT.
Going forward, we sense an upward trend for the tanker industry, as there is a clear expectation of improvement. NAT is in a positive phase of development. Over the last two weeks, 5 suezmax tankers of 20 years of age or more have been sold. These transactions generate a total cashflow of about $50 million.
CAUTIONARY STATEMENT REGARDING FORWARD-LOOKING STATEMENTS
Matters discussed in this press release may constitute forward-looking statements. The Private Securities Litigation Reform Act of 1995 provides safe harbor protections for forward-looking statements in order to encourage companies to provide prospective information about their business. Forward-looking statements include statements concerning plans, objectives, goals, strategies, future events or performance, and underlying assumptions and other statements, which are other than statements of historical facts.
The Company desires to take advantage of the safe harbor provisions of the Private Securities Litigation Reform Act of 1995 and is including this cautionary statement in connection with this safe harbor legislation. The words “believe,” “anticipate,” “intend,” “estimate,” “forecast,” “project,” “plan,” “potential,” “will,” “may,” “should,” “expect,” “pending” and similar expressions identify forward-looking statements.
The forward-looking statements in this press release are based upon various assumptions, many of which are based, in turn, upon further assumptions, including without limitation, our management’s examination of historical operating trends, data contained in our records and other data available from third parties. Although we believe that these assumptions were reasonable when made, because these assumptions are inherently subject to significant uncertainties and contingencies which are difficult or impossible to predict and are beyond our control, we cannot assure you that we will achieve or accomplish these expectations, beliefs or projections. We undertake no obligation to update any forward-looking statement, whether as a result of new information, future events or otherwise.
Important factors that, in our view, could cause actual results to differ materially from those discussed in the forward-looking statements include the strength of world economies and currencies, general market conditions, including fluctuations in charter rates and vessel values, changes in demand in the tanker market, as a result of changes in OPEC’s petroleum production levels and world wide oil consumption and storage, changes in our operating expenses, including bunker prices, drydocking and insurance costs, the market for our vessels, availability of financing and refinancing, changes in governmental rules and regulations or actions taken by regulatory authorities, potential liability from pending or future litigation, general domestic and international political conditions, potential disruption of shipping routes due to accidents or political events, vessels breakdowns and instances of off-hires and other important factors described from time to time in the reports filed by the Company with the Securities and Exchange Commission, including the prospectus and related prospectus supplement, our Annual Report on Form 20-F, and our reports on Form 6-K. | https://www.nat.bm/nordic-american-tankers-limited-nysenat-fixed-3-year-time-charter-tc-plus-options-for-one-of-the-nat-suezmax-newbuildings-producing-cashflow-and-earnings/ |
Black Swan. How many times have you heard of the Black Swan? Unfortunately it has nothing to do with Natalie Portman; it is a far more serious term. It describes an extremely unlikely event which, if it occurs, can cause gigantic upheavals. Two examples above all? The financial crisis of 2008 and... everything we have seen since 2020.
By definition, no one sees a black swan coming; otherwise, what kind of black swan would it be? Unpredictable. Period. But the Stanford researchers are not the type to stop at the first (nor the hundredth) difficulty, and for this reason they are trying to change things. They are building a computational method to try to predict when the next "unpredictable" event will occur.
Can we predict a black swan?
“This work is exciting because it is an opportunity to take the knowledge and computational tools we are building in the laboratory and use them in reality, to better understand (and even predict) what happens in the world around us,” says Bo Wang, assistant professor of bioengineering at Stanford and senior author of the study.
Published in PLOS Computational Biology, the method is based on natural systems, and could be useful in environmental and health research. (Applications in other fields with black swan events, such as economics and politics, may come soon after.)
“Existing forecasting methods rely on past data to predict future data,” says Wang. "And that's why they tend to predict the predictable, not the unpredictable like a black swan." The new method, inspired by researcher Sam Bray, who works in Wang's laboratory, inserts an unknown element into the equation. It assumes we're only seeing a part of the world, and tries to figure out what's missing.
The science of the unpredictable
Bray had been studying microbial communities for years and in that time he had observed some events where a microbe exploded in the population, eliminating its rivals. Bray and Wang wondered if this could also happen outside the laboratory and, if so, if it could be predicted.
To find out, the two not only needed to find ecological systems in which this black swan had already occurred, but these systems also needed to have huge and detailed amounts of data, both on the events themselves and on the ecosystem.
For the development of the method, three data sets from natural systems were chosen: measurements of algae, barnacles and mussels on the New Zealand coast taken monthly for 20 years; plankton levels in the Black Sea measured twice a week for eight years; and a Harvard study that has recorded carbon measurements in a forest every half hour since 1991.
The researchers processed all of this data using statistical physics. Specifically, they used models developed for avalanches and other natural systems with short-term, extreme, and unexpected physical fluctuations, the same qualities that distinguish a black swan-like event. Taking that analysis, they developed a method for predicting a black swan event.
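The article does not spell out the authors' equations, but the flavor of this kind of extreme-fluctuation analysis can be sketched with a simple Hill-type tail fit: estimate how heavy the tail of observed fluctuations is, then extrapolate the probability of an event larger than anything yet seen. Everything below — the synthetic data, the threshold and the target level — is made up for illustration and is not the published method.

import numpy as np

rng = np.random.default_rng(1)
fluctuations = rng.pareto(a=2.5, size=5000) + 1.0   # stand-in heavy-tailed data

threshold = np.quantile(fluctuations, 0.95)          # keep the top 5% as the tail
tail = fluctuations[fluctuations > threshold]

# Hill estimator of the tail exponent alpha.
alpha = len(tail) / np.sum(np.log(tail / threshold))

# Probability of exceeding an extreme level x_star under the fitted power-law tail:
# P(X > x_star) ~ P(X > threshold) * (x_star / threshold) ** (-alpha)
x_star = 10 * threshold
p_exceed = (len(tail) / len(fluctuations)) * (x_star / threshold) ** (-alpha)
print(f"alpha ~ {alpha:.2f}, P(X > {x_star:.1f}) ~ {p_exceed:.2e}")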
A predictor of black swan! It works?
The method is designed to be flexible with respect to variables such as species and time scale, allowing it to work even with lower-quality data. Armed with fragments showing only minimal variation, the method accurately predicted the black swan events. It worked, yes.
Wang and Bray hope to broaden this "predictor" into other fields where a black swan can occur: economics, epidemiology and physics. The work joins a burgeoning field of artificial intelligence algorithms and computational models geared towards extreme events, including those intended to predict forest fires, assist in search and rescue at sea, and optimize emergency response. | https://en.futuroprossimo.it/2021/09/caccia-al-cigno-nero-possiamo-imparare-a-prevedere-eventi-imprevedibili/ |
5 trading mistakes to avoid In 2018
It is possible to point out various mistakes when trading, especially once you have some experience. If you are a beginner, though, identifying the mistakes can be a challenge. In a market where leverage is easily accessible to traders, the chances of making mistakes are even higher. There are many mistakes that traders make in the attempt to beat the market.
Anyone who wants to succeed must take their time to identify them in order to navigate the market with ease. Since it is not possible to outline every single mistake, we will look at the top 5 trading mistakes which you are likely to make and which you should avoid in 2018.
1. Rushing to the market after major headlines
Almost everyone in the trading business keeps an eye on the current news. In fact, all skilled traders always ensure that they are up to date with what is happening in the markets. Staying informed is indeed an expert move that can allow you to trade better and take up new opportunities. It does not mean that you should rush to invest every time there is a major headline though. Not all news is good for predicting how the markets could go. In fact, it is always recommended to start by analyzing data before making moves based on news. News can offer a reliable source of market information, but charts and analysis tools are the ultimate options for giving context to market headlines.
2. Averaging down when trading forex
One of the most common mistakes that forex traders make is averaging down. A lot of traders think that the few decimal points after figures do not count for much. This habit is, however, destructive and can lead to huge losses. For a start, when you average down your trades, you leave yourself in a vulnerable position where the market must move back up for you to break even. Since the market is unpredictable, you could end up losing both the money from the initial losing position you chose and additional money as the trend continues against you. It is therefore important to keep losses as small as possible, even if it means defining your trade down to the last cent.
3. Risking more capital than you are supposed to
It is important to know that excessive investing will not always lead to huge returns. No matter how good your chances might be, there will be that one trade which spoils it for the others. You should not let that trade catch you by surprise. The long-held tradition in the world of trading is that the amount risked on a single trade should not exceed 1% of total capital. This is especially true when several trades are being made on a particular trading day. When you stick to low-risk position sizes, you can be sure of lasting in the trading business for a prolonged time.
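A back-of-the-envelope way to apply the 1% guideline is to size each position from the distance to your stop-loss. The function below is a hedged sketch with made-up numbers, not trading advice.

# Size a position so that being stopped out loses at most risk_fraction of capital.
def position_size(account_capital, entry_price, stop_price, risk_fraction=0.01):
    risk_per_unit = abs(entry_price - stop_price)
    if risk_per_unit == 0:
        raise ValueError("entry and stop prices must differ")
    max_loss = account_capital * risk_fraction
    return int(max_loss // risk_per_unit)

# Example: $50,000 account, buy at 25.00 with a stop at 24.00 -> $1 risk per unit,
# 1% of capital is $500, so the position is capped at 500 units.
print(position_size(50_000, 25.00, 24.00))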
4. Having outrageous expectations
Being unrealistic when trading is common to most traders, because everyone has expectations about the market and these expectations are always positive. The market, however, operates under its own rules, which are indifferent to the opinions of traders. The market is volatile, unpredictable and always dynamic. It is thus important to have a consistent plan which is based on data and which will not change regardless of the state of the market. A plan brings steadiness and sanity to a market that is otherwise errant. Recognizing these facts about the market can help a trader have more realistic expectations.
5. Failing to keep personal records
Traders who want to access historical information about the market can always refer to blogs, data charts, public records and many other sources. Historical market data is the best tool for creating winning strategies. Personal historical data is especially great for narrowing down the strategy to a few proven approaches. Many beginners fail to realize how important personal data is and they end up repeating the same mistakes for a long time. This results in losses that could otherwise be avoided.
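Keeping personal records can be as simple as appending each trade to a CSV and running a small script over it now and then; the file name and "pnl" column below are assumptions, not a standard format.

import csv

def summarize_trades(path):
    pnls = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):        # expects a "pnl" column per trade
            pnls.append(float(row["pnl"]))
    if not pnls:
        print("no trades logged yet")
        return
    wins = [p for p in pnls if p > 0]
    print(f"trades: {len(pnls)}")
    print(f"win rate: {len(wins) / len(pnls):.1%}")
    print(f"average P&L per trade: {sum(pnls) / len(pnls):.2f}")

summarize_trades("trade_journal.csv")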
Conclusion
Most traders suffer because they repeat some common mistakes for a long time, either knowingly or unknowingly. Traders fail to achieve their goals because they expect too much from the market, fail to keep records, or simply risk more than they should. Avoiding the above mistakes will greatly reduce your losses in 2018. | https://www.talk-business.co.uk/2018/06/01/5-trading-mistakes-to-avoid-in-2018/
Toronto, Ontario (Newsfile Corp. - October 24, 2016) - Pancontinental Gold Corporation (TSXV: PUC) ('Pancon Gold' or the 'Company') is pleased to announce that the TSX Venture Exchange ('TSXV') has approved the graduation of Pancon Gold to the TSXV. Effective at market open on Monday, October 24, 2016 trading of Pancon Gold's common shares will commence on the TSXV under the trading symbol 'PUC'.
About Pancontinental Gold Corporation
Pancontinental Gold Corporation is a Canadian-based mining company focused on the exploration and development of the Jefferson Gold Project in South Carolina and on acquiring additional prospective gold properties. The Company's shares are listed on the TSX Venture Exchange, trading under the symbol PUC. In 2015, Pancon sold its interest in its Australian rare earth element (REE) and uranium properties, formerly held through a joint venture, and retains a 1% gross overriding royalty on 100% of future production.
ON BEHALF OF THE BOARD OF DIRECTORS
Rick Mark, President & CEO
For further information, please contact:
Rick Mark, President and CEO, 1-416-293-8437 or [email protected]
For additional information, please visit our website at www.PanconU.com.
Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.
Cautionary Language and Forward Looking Statements
This news release contains forward-looking information which is not comprised of historical facts. Forward-looking information is characterized by words such as 'plan', 'expect', 'project', 'intend', 'believe', 'anticipate', 'estimate' and other similar words, or statements that certain events or conditions 'may' or 'will' occur. Forward-looking information involves risks, uncertainties and other factors that could cause actual events, results, and opportunities to differ materially from those expressed or implied by such forward-looking information. Factors that could cause actual results to differ materially from such forward-looking information include, but are not limited to, changes in the state of equity and debt markets, fluctuations in commodity prices, delays in obtaining required regulatory or governmental approvals, and other risks involved in the mineral exploration and development industry, including those risks set out in the Company's management's discussion and analysis as filed under the Company's profile at www.sedar.com. Forward-looking information in this news release is based on the opinions and assumptions of management considered reasonable as of the date hereof, including that all necessary governmental and regulatory approvals will be received as and when expected. Although the Company believes that the assumptions and factors used in preparing the forward-looking information in this news release are reasonable, undue reliance should not be placed on such information. The Company disclaims any intention or obligation to update or revise any forward-looking information, other than as required by applicable securities laws.
Click on, or paste the following link into your web browser, to view the associated documents
http://www.newsfilecorp.com/release/23156
News Source: Newsfile
24.10.2016 Dissemination of a Corporate News, transmitted by DGAP - a service of EQS Group AG. The issuer is solely responsible for the content of this announcement. | http://www.ad-hoc-news.de/boerse/news/corporate-news/pancontinental-gold-corporation-ca69834g1019/51689630 |
Wading through the vast quantities of information available on the Vancouver real estate market can be confusing as well as daunting. In our opinion, much of that information fails to provide a long-term perspective instead focusing primarily on monthly fluctuations and at best, annual ones. We hope to correct this shortcoming by putting our current market into a broader context…that of the last ten years.
The Historical Pattern
If you've been watching any of the recent real estate headlines, you'll be aware that the Vancouver market is currently rebalancing after a long upward swing. Over the course of the last decade, there have been two other periods when a downward shift in the market occurred. The first, and most significant, was in 2008 (late spring until early winter), when we saw a 15% peak-to-valley correction in prices; the second was in 2012 (early fall through late winter), when there was a 5% correction.
What is important to note in each of these past situations is how quickly the market snapped back from weakness. In the case of 2008, the market saw significant weakness from the summer through the winter of that year. Early in 2009 the market returned with a vengeance and by the early fall of that year sales and prices had returned to the peak seen in 2008. The same thing happened in 2012 when prices and sales volume returned to peak levels within three months of the low.
The charts below demonstrate these shifts in the market. We’ve included data for the key years 2008, 2009, 2012, 2013 and 2016 as well as the 10 year averages. These charts are based on REBGV MLS data, but you won’t find these charts anywhere else as we’ve compiled this information just for you.
Sales in the Historical Context
You can see that our current year is mirroring 2008 and 2012 with a slowdown in sales. Note how quickly sales shot back in 2009 and 2013 after the rebalancing that took place in 2008 and 2012. Will early 2017 follow this same pattern?
Active Listings Today vs Yesterday
Where today’s market diverges from these past cases is the number of listings on the market. This is a significant difference.
We are currently at the opposite end of the spectrum when it comes to supply levels (at an historically low level of supply, >25% below the 10 year average). Typically, when the market is slowing down inventory levels build (in 2008 they were 52% higher than the 10 year average and in 2012 25%). The added supply results in a downturn in prices. This is NOT what is currently happening. We are seeing historically low sales numbers, but not the typically concurrent rise in inventory. | https://duncanbrown.ca/market-update/2016/11/vancouvers-current-real-estate-market-historical-context/ |