WHITE PLAINS — The New York Power Authority (NYPA), the largest state public power organization in the nation, will test, model and develop innovative solutions for energy systems at its world‑class research and development facility – the Advanced Grid Innovation Laboratory for Energy (AGILe) – at its White Plains headquarters. With expertise and support from the Electric Power Research Institute (EPRI), the lab will simulate the impacts of new technologies before they are deployed on New York’s electric grid, allowing NYPA and other research participants to evaluate their effects on system reliability, performance, and resiliency. The research also aims to help renewable resources come online more quickly and integrate more effectively into the New York State grid. “AGILe will allow researchers to more quickly model the system and identify any potential issues – especially as more renewable energy sources, like wind, solar, and energy storage, are brought online,” said Gil C. Quiniones, NYPA president and CEO. “AGILe’s ability to simulate how new technology will interact with our transmission system will solidify NYPA and New York State as leaders in grid modernization and create models for other utilities to use in their power systems across the country.” The first phase of development at the lab, scheduled to be complete by the end of 2018, involves the creation of a digital, real‑time simulation model of the entire New York State transmission system. Once the model is complete, researchers from government, industry, and academia will be able to use advanced computing methods to simulate the implementation of new technologies for better forecasting and planning and to assist with the commercialization of emerging technologies. AGILe will focus particularly on advanced transmission applications, cybersecurity solutions, sensors, substation automation, and power-electronics controller technologies. “This is part of the industry-leading effort to make wind, solar, storage and customer resources (like flexible loads, batteries and electric vehicle charging) all part of an integrated grid. We are very excited to coordinate the research at the AGILe lab with the overall integrated grid research at EPRI,” said Mark McGranaghan, EPRI’s vice president, distribution and energy utilization. The initiative also will foster collaboration and research with other participants in the state’s energy sector to strengthen infrastructure, improve efficiency, and encourage expanded use of renewable energy resources. In a memorandum of understanding, the New York transmission owners and other key energy leaders in the State have all agreed to conduct collaborative research with NYPA at AGILe. Approximately $20 million has been approved for implementation and lab activities so far. An essential component of NYPA’s Vision 2020 Strategic Plan, AGILe advances NYPA’s digital transformation and furthers Governor Andrew M. Cuomo’s Reforming the Energy Vision (REV) strategy by informing new and innovative ways to build a smarter, cleaner, and more reliable power grid. NYPA owns and operates approximately one‑third of New York’s high‑voltage power lines. These lines transmit power from NYPA’s two large-scale hydroelectric generation facilities, connecting more than 6,000 megawatts of renewable energy into New York State’s power grid.
“As our grid continues to evolve – as we get smarter, cleaner, more data-intensive – we need faster, more secure systems to get the most from our data and from our grid; we can make better decisions in real time and ensure safe, reliable, affordable, and environmentally responsible power delivery for the benefit of New Yorkers, and for consumers and society as a whole,” said Alan Ettlinger, director of Research, Technology Development and Innovation at NYPA. For more information, visit the AGILe webpage on NYPA’s website. Photos of the AGILe lab and its key researchers are available here. About NYPA NYPA is the largest state public power organization in the nation, operating 16 generating facilities and more than 1,400 circuit-miles of transmission lines. More than 70 percent of the electricity NYPA produces is clean renewable hydropower. NYPA uses no tax money or state credit. It finances its operations through the sale of bonds and revenues earned in large part through sales of electricity. For more information visit www.nypa.gov and follow us on Twitter @NYPAenergy, Facebook, Instagram, Tumblr and LinkedIn.
https://www.nypa.gov/news/press-releases/2018/20180913-rd
Broken Hill, known as ‘The Silver City’, is the largest centre in Outback NSW and is often referred to as “The Jewel of the Outback”. Located 1,160 km from Sydney, the city came into existence in 1885 after Charles Rasp recognized the mineral potential of the area two years earlier. Today the city sits on one of the world’s largest known silver-lead-zinc lodes – a deposit which is 7 km long and over 220 metres wide. The city is large and prosperous. Over the years it has become the state’s premier desert centre, known for its outstanding Outback artists, its rich Indigenous culture, its wonderful Living Desert Reserve including the international stone sculptures (Sculpture Symposium), and its easy access to a rich diversity of desert landscapes. No visit to Broken Hill is complete without spending time in the city’s excellent art galleries, walking down the main street and admiring the old hotels (one of which featured in Priscilla, Queen of the Desert) and gracious public buildings, visiting the Thankakali Aboriginal Arts and Crafts Centre, travelling out to the semi-ghost town of Silverton (where so many Australian films are shot) and going on a conducted tour of the town’s great mining complex and the Royal Flying Doctor Service. Strategically located, it is the ideal place to service all the needs of the Outback traveller and provide a base to explore the Darling River (The Darling River Run), Mutawintji and Kinchega National Parks, and the opal town of White Cliffs. Broken Hill is also a transport hub allowing travellers to access Outback NSW by rail (Country Link) and air (Rex). Broken Hill is ideally located to further explore the outback and some wonderful towns like Silverton. Located only 24 km from Broken Hill, Silverton has become a popular destination for both tourists and film crews. The tourists come to experience a real ghost town and consequently there are a number of shops, art galleries, museums and pubs which have grown up to meet their needs. Its location (drive a few kilometres out of town to ‘The Breakers’ and marvel at the desert which stretches to the horizon) and its proximity to all the creature comforts of Broken Hill have ensured that it is popular with any film crew needing to shoot some desert and Outback scenes. In fact the Silverton Hotel at the Burke St corner has been used in films as diverse as Wake in Fright, Mad Max 2, A Town Like Alice, Hostage, Razorback, Journey into Darkness and Golden Soak. When you visit make sure you have a drink in the historic pub, visit the Silverton Pioneer Museum, stop off at the Gaol and Court House and inspect some of the local galleries. Here is a true Outback experience and you can return to the comfort of Broken Hill feeling no pain or hardship at all.
https://outbackbeds.com.au/farm-stay-towns/broken-hill/
Okavango – the Dream Delta Destination! Deep within the Kalahari Basin and often referred to as the ‘jewel’ of the Kalahari, is a unique pulsing wetland known as the Okavango Delta. In this desert, it is remarkable that this alluvial fan, the Delta, is there at all.
Etosha National Park – An African safari made in heaven Visible even from space, this vast and beautiful wildlife preservation area sits upon one of Africa’s largest salt pans. Etosha National Park is unspoilt and quite spectacular and lies within Namibia in Africa, sometimes named the ‘Jewel of Africa.’
Kruger National Park – the ultimate African safari destination It is not without reason that the iconic Kruger National Park in South Africa has become renowned as possibly the best wildlife preservation area in the world.
Travel Trends 2019 – Top 5 Way back in the 1860s, a traveller, explorer and artist by the name of Thomas Baines immortalized a cluster of gnarly old baobab trees, located on the edge of a remote salt pan, in a now rather famous water colour painting.
https://www.safariodyssey.com/category/travel-blogs/our-favourite-places/
Oil painter Janice Druian’s exhibit of new artworks, “High Desert Light”, opens at Tumalo Art Co. July 6, from 4-8pm during the First Friday Gallery Walk in Bend’s Old Mill District. Inspired by such great painters as Maynard Dixon and Edgar Payne, Janice is drawn to the limitless vistas of the high desert and its monumental cloud formations, and has been capturing the dramatic light of the high desert for many years. Janice Druian combines travel and art Traversing the back country in her tiny trailer with her husband and dogs, she finds no scenery that touches her soul more than the high plateaus of Central and Eastern Oregon and Northeastern California. In the paintings she creates, her focus is the fleeting moments at sunrise or sunset when the oblique light creates a special magic often referred to as the “golden hour.” Usually accompanied by dramatic cloud formations, these landscapes offer endless inspiration and capture the essence of the western high desert. An award-winning oil painter, Janice was invited to the Borrego Springs Plein Air Invitational for four years in a row. She won Best of Show at the Los Gatos, California, Invitational in 2012. This year she has been invited to the juried show at the High Desert Museum for the fifth time and the Favell Museum Art Show and Sale for the fourth year. Janice was featured along with nine other artists in Artists’ Magazine in 2017. Tumalo Art Co. is an artist-run gallery in the heart of the Old Mill District in Bend, Oregon. The gallery is open seven days a week and hosts openings during Bend’s First Friday Gallery Walk every month. For information call 541-385-9144.
https://tumaloartco.com/oil-painter-janice-druian-opens-july-exhibit/
Wildflowers are one of the West’s most photo-worthy and prolific annual offerings. And while there are well-known parks with lavish displays, you likely don’t have to leave your neighborhood to see native blooms. They blanket hillsides, spring up near roadways, and turn desert landscapes into multi-colored portraits. Out of the hundreds of wildflowers that grow from the Bay Area to the Sonoran Desert, the nine highlighted below are particularly easy to identify and bountiful enough to pop up on your daily walk. Here’s how to spot them. California Poppy (Eschscholzia californica) In 1903, the “golden poppy” was designated as California's state flower and its bright orange petals became a long-standing floral representation of the 1849 Gold Rush. Today, these fields of gold are found throughout the West, from southern Washington to the Sonoran Desert. Look for them along roadsides and in grassy fields and meadows. They’re easily identified by their blue-green foliage and long stems topped with orange or yellow blooms, each sporting four distinct petals. The petals close up at night, or during inclement weather, and open again with the sun. Smart Tip: Remember never to pick any flower in the wild (in some cases—particularly if you are on state, federal, or private land—it may actually be illegal) and to always stay on designated paths to avoid trampling sensitive areas. Seaside Daisy (Erigeron glaucus) Native to beaches, coastal bluffs, and sand dunes along the coastline of California and Oregon, the seaside daisy (also known as seaside fleabane) is recognizable by its branching stems, rounded leaves, and clusters of tiny flowers. Each flower is made up of hundreds of thin, feathery petals that surround a golden yellow center. They typically range in color from ice-blue to a near-white lavender. Butterflies love them, and their long blooming season—mid-spring until late summer—makes them easy to spot. Desert Marigold (Baileya multiradiata) Also known as “paper daisies,” these brightly colored flowers are about two inches in diameter and feature daisy-like yellow petals with a mustard center. They’re long-stemmed and bloom sporadically—popping up anytime between March and November. You'll often see them growing in sunny desert plains and mesas, and along roads across the Southwest, including around Las Vegas, Tucson, and southwest Utah. When their luster dies off in the fall, their seeds attract captivating black-throated sparrows. Beavertail Cactus (Opuntia basilaris) Found in rocky or sandy soils across the Southwest, especially in the Mojave and Colorado deserts and Red Rock Canyon outside of Las Vegas, the beavertail cactus is known for its stunning magenta-colored blossoms. The plant itself is low-growing and bush-like, composed of numerous flattened pads that are prickly to the touch and blue-gray in color. They’re not much to look at initially, but when the cacti start blooming in the spring into early summer, their large, sun-loving flowers are an absolute sight to behold. Desert Globemallow (Sphaeralcea ambigua) Sometimes called apricot mallow because of the color of the shrub's flowers, desert globemallow is also native to the Southwest. It is prominent in landscapes across the Mojave Desert that are heavy with low-lying desert scrub and along roadsides in Arizona and the southern portions of California, Nevada, and Utah. You may also spot it growing in arid areas that were recently decimated by forest fires as it is one of the first plants to return.
One root produces hundreds of stems, and many of them showcase their own cup-shaped blossoms come spring. Chuparosas (Justicia californica) This low-lying shrub produces magnificent bright-to-deep-red flowers that are long and tubular in shape and grow in clusters. These flowers act as drinking vessels for hummingbirds, who access their nectar through each floret’s three-lobed lower lip. They appear regularly in the sands and rocky terrain around Phoenix, as well as throughout the deserts of the larger Southwest, typically during winter and in the early spring. Smart Tip: See where desert wildflowers are blooming in real time at DesertUSA. Parry’s Penstemon (Penstemon parryi) There are hundreds of types of penstemon (or beardtongues) that grow across the American West. Their tubular, nectar-rich flowers are an easy lure for hummingbirds and bees. While they range in shades that include white, red, blue, and purple, Parry’s Penstemon is known for its bright pink flowers that add a distinctive pop of color to southern Arizona's desert landscape when it blooms during March and April. Dozens of blossoms appear in long spires atop the upper portion of the plant’s long stems, which give it an elegant appearance. While it can be difficult to spot in the wild, it’s a favorite among home gardeners. Desert Purple Sage (Salvia dorrii) Bees take full advantage of this small shrub’s heavy blooms, a burst of brilliant pale-blue to purple that occurs each spring. They grow in abundance across the dry, rocky desert of the Great Basin range, which covers parts of multiple Western states, including Idaho, Oregon, and Utah. The first thing you may notice is the plant’s minty scent, which is most notable after rainfall. It’s also identifiable by its silvery, oval-shaped leaves and flower spikes, meaning its flowers grow directly off the stem, rather than from individual stalks. Chicory (Cichorium intybus L.) A member of the dandelion family, chicory grows throughout the West, often along roadsides, beside suburban fences, and across undeveloped lots. Though the European native is an invasive species in the United States, it was planted across the country and used to feed cattle. Its flowers are picturesque, with fringed ray-like petals that grow in shades of periwinkle blue, pink, and white that open and close with the sun. Its leaves and flowers are edible, and its root is often used as a coffee substitute. Chicory typically blossoms from late spring through mid-fall.
https://mwg.aaa.com/via/national-parks/identify-wildlfowers-west
A member of our team will pick you up from your accommodation in Marrakech and we’ll travel to the Dades Valley via the High Atlas Mountains through the scenic Tizi n’Tichka pass. The landscape changes as we head south. We’ll arrive at Ait Ben Haddou, the largest and oldest ksar in Morocco and a UNESCO World Heritage Site. Several movies have been filmed here, like Gladiator, Lawrence of Arabia, Game of Thrones and The Jewel of the Nile. The next stop will be Ouarzazate, known as the African Hollywood, to explore the movie studios and the Taourirt Kasbah before continuing to the Dades Valley via the Rose Valley. Overnight in the Dades Valley, including dinner and breakfast. After breakfast at your hotel we’ll head to the Todghra Valley and Gorges, passing through many ksour along the way. We’ll stop for a walk in the Todghra Gorges, considered the tallest and narrowest gorges in the country, then carry on toward Tinjdad, where you can have lunch, before continuing toward Merzouga. Overnight in Merzouga including dinner and breakfast. In the morning after breakfast, we will take you on an excursion through the golden dunes of the Sahara Desert, where you can see the black volcanic rocks, meet a nomad family, experience their nomadic lifestyle, share a cup of tea with them and see how they weave authentic carpets. Lunch will be somewhere in Merzouga. Next we will go to Khamlia, a village of the Gnaoua people, where you can enjoy their original music and folk songs. In the afternoon your camels will be waiting to take you to your Berber desert camp to spend the night of a lifetime under the light of thousands of stars. Overnight in the Berber desert camp including dinner and breakfast. In the morning, your camel guide will wake you to enjoy the beauty of the spectacular sunrise over the sand dunes. After your breakfast in the middle of the dunes, you’ll ride your camels back to the village of Merzouga. Then we will start our journey to Fes, driving through the Ziz Valley and the cedar forest, where with luck you will see some monkeys. The next stop will be Ifrane, which is known as Little Switzerland. We’ll arrive in Fes by the afternoon.
https://fesdaytours.com/tour/4-days-sahara-desert-tour-from-marrakech-to-fes-via-erg-chebbi-dunes/
Should you visit Jodhpur or Jaisalmer? Trying to figure out where to travel next? This travel cost comparison between Jodhpur and Jaisalmer can help. For many travelers, the cost of a destination is a primary consideration when choosing where to go. Jodhpur Jodhpur is a city in the Rajasthan state of India. This city is located on the outskirts of the Thar Desert and is known for having sunshine almost every day of the year. Rao Jodha, an ancient Rajput chief, founded the city in 1459 AD. Many of the houses in the city are shades of blue, hence the city is sometimes referred to as the Blue City. Jaisalmer Jaisalmer is located in the western part of the Rajasthan state of India. The city is known as the "Golden City" for its Sonar Qila, or 'Golden Fort'. This fort is different from other forts in India in that it is a living fort, meaning that it contains many hotels, homes, and shops for locals and visitors. The city is very close to the Thar Desert, to which excursions can be made. Which city is cheaper, Jaisalmer or Jodhpur? These are the overall average travel costs for the two destinations.
https://www.budgetyourtrip.com/compare/jodhpur-vs-jaisalmer-1268865-1269507
We stare into each other's eyes, our amber-colored eyes, the ones with garden iris longings, when reveling in memories we sculpted one by one. Neon moonlight walks and sweeping mountain paths, orange red clouds that fluttered over zesty morning hearts, Ice pink salmon picnics in chalk downlands. Days we danced on silver desert strands, blossom feathered doves who wrote the soundtrack to our bliss. Golden wedding peaks about to acme, as we lightly float on ink blue satin sheets. Us passengers in twilight worlds beyond forever more. Comments Oh my gosh, MyNAh, so exhilarating and romantic. Words cannot express how deep your poem is. Another jewel in your poetry crown. Enthusiastic Best Wishes! ♡ Hugs Regina
https://www.poetrynook.com/content/acme
The Desert of Ro is a vast and deadly desert that is surrounded by the city of Freeport, Innothule Swamp, the Commonlands, and the Timorous Deep Ocean. A variety of creatures inhabit the desert and the Oasis within, making it a thrilling place for travellers to adventure.
Lore
North Ro
Legends speak that what is now the northern Desert of Ro was once the northern reaches of the ancient Elddar Forest - the former home of the elves who had once resided upon Antonica, then called Tunaria. This forest was said to have been the most beautiful woodland Norrath has ever seen, spanning the whole of what is now the Desert of Ro and the Oasis of Marr. It is said to have been burned to the ground by the fiery wrath of Solusek Ro.
Oasis of Marr
Within the merciless, dry grip of the Northern and Southern Deserts of Ro, the beautiful Oasis of Marr stands as the desert jewel of Antonica. It is said that one of the Triumvirate of Water, Tarew Marr, took pity on some Humans who had gotten lost in the Desert of Ro and created the Oasis upon a lake, much to the displeasure of The Tyrant of Fire.
http://everquest.wikia.com/wiki/Desert_of_Ro_Lore
One of the most beautiful and historically rich cities in the entirety of India, Jaisalmer is often referred to as ‘The Golden City’, owing to its wealth of yellow sandstone. A small, remote city located in the heart of the Thar desert, Jaisalmer is filled with amazing historical temples and buildings and is well-known for its long-standing association with Jainism. Jaisalmer Fort One of India’s most unique and interesting historical sites, Jaisalmer Fort is considered to be one of the very few living forts left in the world. Built in the mid-12th Century, it remains a focal point of the city of Jaisalmer today. Built under the orders of Rawal Jaisal, the city’s founder, the fort played a major role as Jaisalmer assumed an important role in trade during the heyday of the Silk Road. The fort is known for its distinct design, its spectacular views of the surrounding area and the temple it houses. Gadisar Lake One of the most picturesque sights in the region, Gadisar Lake is an artificial reservoir on the edge of Jaisalmer that once supplied the city’s water. Ringed by small temples and shrines, it is a popular spot for boating and bird-watching, and one of the most beautiful yet under-seen destinations in the city. Patwon Ki Haveli Jaisalmer is well-known for its many havelis, which are traditional mansions across the Indian subcontinent. Patwon Ki Haveli is arguably the best-known of the havelis in the city. Located a short walk away from the Jaisalmer Fort, this haveli is known for its opulent interior and intricate carvings. It is worth doing a more comprehensive tour of the city’s havelis, but this is without a doubt worth seeking out for its sheer aesthetic beauty. Bada Bagh Located on the outskirts of the city, Bada Bagh, or ‘Big Garden’, is one of the area’s most beautiful historical sites. Built in the mid-18th Century, Bada Bagh was constructed following the death of Jai Singh II on the orders of his son. There are numerous inscriptions dedicated to his ancestors. A peaceful and contemplative site with major historical significance. Desert Cultural Centre and Museum One of the more off-beat museums in the country, the Desert Cultural Centre & Museum presents an informative look into the history of Rajasthani culture. An immersive and insightful experience, the museum is well worth checking out, especially for the nightly puppet shows.
https://www.pilotguides.com/articles/top-5-things-see-jaisalmer/
MLS #: 5750639 Bedrooms: 5 Bathrooms: 3.5 Sq Feet: 5,190 HOA: N City: Paradise Valley Zip: 85253 Sub Division: Pebble Ridge Property Description Breathtaking views of Camelback Mountain from this 1+ acre lot located in one of the most desirable neighborhoods of Paradise Valley. Lush, resort-like backyard, beautifully landscaped with a newly remodeled sparkling diving pool. Backyard desert oasis includes golden eagles nesting atop one of the tallest eucalyptus trees in the valley. Home is located on a quiet street at the end of a cul-de-sac with 360-degree views of Camelback Mountain and the Phoenix Mountain Preserve. Recently remodeled kitchen features ammonite fossil countertops from the Rare Earth Gallery. Close to airport, shopping and fine dining. Hopi/Arcadia School District. This jewel in Paradise Valley should not be missed!
http://silverhawkaz.com/4514-e-pebble-ridge-road-paradise-valley-85253-idx-5750639
Although the Desert Lion Conservation website has been static since June 2017, the research and monitoring activities have continued. A summary of some of these activities is presented under the various headings below. Desert Lion Conservation, or the “Desert Lion Project”, as it is often referred to, is a small non-profit organisation dedicated to the conservation of desert-adapted lions in the Northern Namib. Our main focus is to collect important baseline ecological data on the lion population and to study their behaviour, biology and adaptation to survive in the harsh environment. We then use this information to collaborate with other conservation bodies in the quest to find a solution to human-lion conflict, to elevate the tourism value of lions, and to contribute to the conservation of the species.
https://www.desertlion.info/
Welcome Navigators! We are excited for our upcoming hike on Saturday, January 28. To help us prepare for our hike we will be holding a Zoom meeting on January 23 at 6pm and also we will be welcoming any new members. The meeting will serve as a space for folks to ask questions about the hike and be sure everyone knows the safest way to prepare for our outing. For anyone that can’t make the January events we will be holding a Social meetup at UUCP on February 13 and another hike on February 25. We hope you and your family can make it to some of our upcoming events. Join Zoom meeting Meeting ID: 856 5769 0773 | Passcode: 387129 Saturday, January 28, 10am at Jewel of the Creek Preserve: Join us for an all ages family friendly hike at Jewel of the Creek Preserve. The hike will highlight plants and trees of Arizona riparian plant communities as well as classic desert flora and fauna. We will also discuss what a plant community is and the importance of conservation and diversity of our desert plant communities. Prior to our hike we will also have a brief topographic map and compass orientation to better familiarize ourselves with our terrain. Jewel of the Creek map and directions. Follow this link for more information about the preserve. Read more about desert plant communities here. Upcoming Events: Navigators Social Meetup and Brainstorming Monday, February 13 – 6:00pm UUCP Johnson Room and Patio Family Friendly Hike Saturday, February 28 – 5:00pm; Hike is located at the Gilbert Riparian Preserve and will include possible Star Gazing at the observatory if the clouds agree. Our March theme for Navigators will be Habitat loss and animal rehabilitation. We will hold a zoom meeting on Monday March 13, at 6:00pm.
https://www.phoenixuu.org/navigators-update-january-2023/
The fourth annual Desert Discovery Day sponsored by the nonprofit Foothills Land Trust will be held Saturday, Nov. 21, at the Jewel of the Creek Preserve in Cave Creek. Founded in 1991, the nonprofit Land Trust has protected 680 acres on 23 preserves in the North Valley, many of which are open to the public for recreation and exploration. The Land Trust connects people to nature through land acquisition and long-term stewardship, as well as events and activities that allow the community to experience these special places. This year’s event will include a “scavenger hunt” of informational stations along the Harry Dalton Trail. Children will receive a stamp at each station, and they’ll receive a goody bag for collecting the stamps. There will be live animals, crafts, rehabilitated raptor releases and refreshments. Other participating organizations include the Arizona Archaeological Society, Cave Creek Museum, Desert Awareness Committee, Rural/Metro Fire Department, Southwest Wildlife Conservation Center, Spur Cross Ranch Conservation Area and Wild At Heart. “This has become an incredibly popular event for our community because it’s just so much fun! We love to get families and kids of all ages out on the land,” says Land Trust Executive Director Sonia Perillo. “This is also a great way to encourage support for conservation and healthy outdoor activities.” The event takes place 10 a.m.-2 p.m. Admission is free. Details about the Land Trust and Desert Discovery Day are available at www.dflt.org. The Jewel is located on Spur Cross Road, 3.9 miles north of Cave Creek Road. Parking is available at Spur Cross Ranch Conservation Area. Desert Foothills Land Trust works with landowners, communities and partners to protect the most special and important natural areas in the Arizona communities of Carefree, Cave Creek, North Scottsdale, North Phoenix, Anthem, Desert Hills and New River. The Scottsdale Independent publishes a free daily newsletter. A print edition is mailed to 75,000 homes and businesses each month. If you value our journalistic mission, please consider showing us your support.
https://www.scottsdaleindependent.com/entertainment/desert-foothills-land-trust-hosts-free-family-fun-at-jewel-of-the-creek-preserve/
Desert Sandstone is compressed desert sand, which is typically dug up. It is found in deserts at about sea level. Metaphysical properties: creativity, mood swings, temperament, sexuality. Element: All. Chakra: Sacral. Desert Sandstone is a highly creative stone. It blends all four elements, making an unspeakable force of creation, and it is often referred to as the creativity stone. It also aids one's control over their moods and temperament. Its physical energetic properties help strengthen bones, hair and nails and improve eyesight.
https://supernovacrystals.com/products/desert-sandstone-1
Dubai: Did you know that an ancient civilization that goes back more than 3,000 years was found in Dubai? Set in a spectacular desert landscape south of the modern emirate, the site known as Saruq Al Hadid is an archaeological treasure trove first discovered in 2002 by His Highness Shaikh Mohammad Bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE and Ruler of Dubai. The site is an important archaeological find which shows that Dubai did not become a centre of trade only in the 21st century, or after the development of the Dubai Creek. The Saruq Al Hadid site, preserved by the dunes that covered it for millennia before being discovered, provides ample evidence that the emirate had strong trade links with a wide swathe of the ancient world — from Egypt to the Indian subcontinent, dating back thousands of years. A jar, one of the 12,000 pieces found at the archaeological site. (Zarifa Fernandez / Gulf News) Today, experts consider Saruq Al Hadid the jewel in the crown of Dubai’s archaeology, a site of international significance that increases the understanding of industrial activity, trade and everyday life during the Iron Age. How did they arrive at that conclusion? Some 12,000 archaeological pieces have been found here so far, making it one of the largest and most important Iron Age sites in the Arabian Peninsula. Together, these items tell a compelling history of Dubai's past, through which archeologists are able to map the trading links that existed between Dubai and other countries in the region during the Iron Age. These priceless Saruq Al Hadid pieces are now on display in a museum in Shandagha, Bur Dubai. Incense burners found at the site. (Zarina Fernandez / Gulf News) The exhibits include gold, bronze and other metal foundry items discovered at the Saruq Al Hadid archaeological site in the Rub' Al Khali desert (the Empty Quarter) south of Dubai. In fact, the logo of Dubai Expo 2020 was inspired by a gold ring that was found at the site. Gold jewellery found at the Saruq Al Hadid archaeological site south of Dubai (Supplied) Arrowheads from the Saruq Al Hadid site. (Youtube screen grab) The rich collection of artifacts reveals that the Saruq Al Hadid site was one of the main centres in the region for manufacturing various copper tools from the beginning of the Iron Age. The site contains large amounts of metal ores and remains of domesticated animals that date back thousands of years.
https://gulfnews.com/entertainment/arts-culture/know-the-uae-archaeological-site-found-south-of-dubai-shows-key-aspects-of-life-in-iron-age-arabia-1.1924218
Not far from the Gulf of Aden, in the Arabian Sea, lies an extraordinary island. Spread over 3,500 square kilometres, it offers a mosaic of landscapes combining sandy white beaches, desert plains, rugged peaks and mysterious caves. This jewel of nature is called Socotra and is the largest island in the archipelago located off the coast of Yemen. It has been a UNESCO World Heritage Site since 2008 and is sometimes referred to as the 'Galapagos of the Indian Ocean.' According to studies, Socotra has hundreds of species that exist nowhere else, some of which are very unusual. Such is the case for the Socotra dragon tree (Dracaena cinnabari), a tree that looks like a giant upturned parasol. Or a mega mushroom, depending on your point of view. A leafy crown perfectly adapted to the environment This plant, which can grow up to 12 metres tall, has evergreen foliage that grows in an unusual way. The leaves emerge only at the tips of the youngest branches and point skywards, forming a green crown that appears to rest as a flat-ish top on the tree's tangled arms. Although it looks strange, this shape is highly adapted to the arid conditions in which the species dwells. The crown of leaves captures rain and moisture from the air and redirects it to the branches and trunk, reducing evaporation and providing shade for the branches. Thanks to this, D. cinnabari is able to withstand high temperatures and drought, taking advantage of the scarce water its environment offers. But this is not its only characteristic. Like other dragon trees, this tree is also unique because of its reddish resin, called dragon's blood. Legendary sap It is this substance that gives it its name. The name is taken from the myth of Hercules (or rather Heracles) and his twelve labours. According to the legend, one of them consisted of stealing the golden apples from the Garden of the Hesperides, which were guarded by a hundred-headed dragon, Ladon. To reach the fruit, Hercules killed the beast and its blood spilled on the ground, giving birth to dragon trees. With a little imagination, one can indeed see in the branches of the tree a kind of multi-headed creature and on its trunk, bark like reptile skin. However, the legend of the dragon tree does not end with the adventures of Hercules. Its resin has been used since ancient times in traditional medicine. It is said to have anti-bacterial and anti-inflammatory properties. It has also been used for a long time as a colouring substance or as a varnish, particularly for violins.
https://www.gentside.co.uk/earth/the-dragon-s-blood-tree-is-one-of-the-strangest-trees-in-the-world_art7683.html
The Daalang system was a star system located in the Mid Rim at the coordinates Q-12, containing the astronomical object Daalang. It was situated between the Exodeen system and the Kupoh system. Following the Battle of Yavin, an Imperial Navy Immobilizer 418 cruiser trapped the Rebel Alliance ship Desert Jewel in that system. Rebel operatives Luke Skywalker, Nakari Kelen, and R2-D2 were traveling aboard the Desert Jewel with the rescued Givin cryptologist Drusil Bephorin. Following a brief dogfight, Skywalker destroyed the Immobilizer 418 cruiser with a Utheel Rockcrusher Compact Seismic Charge. Appearances - Heir to the Jedi (First appearance) Notes and references - ↑ 1.0 1.1 1.2 1.3 Star Wars: The Force Awakens Beginner Game establishes that Daalang is located in the Mid Rim, situated at the coordinates Q-12. As such, the Daalang system must share the same location. - ↑ 2.0 2.1 Heir to the Jedi
https://starwars.fandom.com/wiki/Daalang_system
PALM DESERT, Calif. -- Private lands in the southern California desert will have a large role to play in fighting climate change, according to conservation groups. The U.S. Secretary of the Interior signed the Desert Renewable Energy Conservation Plan last week, which covers 11 million acres of federal land. It set aside 600 square miles for energy development zones, while protecting habitat for such species as the bighorn sheep, desert tortoise and Mohave ground squirrel. Kim Delfino, California program director with Defenders of Wildlife, said that during phase two, counties will identify private lands that are considered "low conflict" and thus more suitable for renewable energy development. "The idea is that the more degraded lands will have a role to play in hopefully being where projects will get built,” Delfino said. "And the more intact desert lands - which actually have a climate benefit because it sequesters carbon - those will remain intact." Private lands are often closer to transmission lines and population centers where energy is needed. Los Angeles, Inyo and Imperial counties have already started their land planning processes. The focus will now move to the west Mojave Desert that stretches across San Bernardino and Kern Counties. Erica Brand, California Energy Program director at The Nature Conservancy, said her group has done multiple analyses to identify the best locations for renewable energy, and found that most are on private land. She praised the state and county land planners and wildlife managers who are working on phase two of the Desert Renewable Energy Conservation Plan. "California is leading by example and showing the world that we can have a strong clean-energy economy while protecting nature,” Brand said. Phase two of the plan is expected to take several years to complete.
https://www.publicnewsservice.org/2016-09-19/public-lands-wilderness/desert-renewable-energy-conservation-plan-phase-two-kicks-into-gear/a54093-1
When it comes to evaluating an optimization algorithm, every researcher has their own thoughts on the way it should be done. Unfortunately, many empirical evaluations of optimization algorithms are performed and reported without addressing basic experimental design considerations. This section provides a summary of the literature on experimental design and empirical algorithm comparison methodology. This summary contains rules of thumb and the seeds of best practice when attempting to configure and compare optimization algorithms, specifically in the face of the no-free-lunch theorem. Issues of Benchmarking Methodology Empirically comparing the performance of algorithms on optimization problem instances is a staple for the fields of Heuristics and Biologically Inspired Computation, and the problems of effective comparison methodology have been discussed since the inception of these fields. Johnson suggests that the coding of an algorithm is the easy part of the process; the difficult work is getting meaningful and publishable results [Johnson2002a]. He goes on to provide a very thorough list of questions to consider before racing algorithms, as well as what he describes as his "pet peeves" within the field of empirical algorithm research. Hooker [Hooker1995] (among others) practically condemns what he refers to as competitive testing of heuristic algorithms, calling it "fundamentally anti-intellectual". He goes on to strongly encourage a rigorous methodology of what he refers to as scientific testing, where the aim is to investigate algorithmic behaviors. Barr, Golden et al. [Barr1995] list a number of properties a heuristic method should exhibit in order to make a contribution, which can be paraphrased as: efficiency, efficacy, robustness, complexity, impact, generalizability, and innovation. This is interesting given that many (perhaps a majority) of conference papers focus on solution quality alone (one aspect of efficacy). In their classical work on reporting empirical results of heuristics, Barr, Golden et al. specify a loose experimental setup methodology and then suggest eight guidelines for reporting results; in summary these are: reproducibility, specify all influential factors (code, computing environment, etc.), be precise regarding measures, specify parameters, use statistical experimental design, compare with other methods, reduce variability of results, and ensure results are comprehensive. They then clarify these points with examples. Peer, Engelbrecht et al. [Peer2003] summarize the problems of algorithm benchmarking (with a bias toward particle swarm optimization) to the following points: duplication of effort, insufficient testing, failure to test against state-of-the-art, poor choice of parameters, conflicting results, and invalid statistical inference. Eiben and Jelasity [Eiben2002] cite four problems with the state of benchmarking evolutionary algorithms: 1) test instances are chosen ad hoc from the literature, 2) results are provided without regard to research objectives, 3) the scope of generalized performance is generally too broad, and 4) results are hard to reproduce. Gent and Walsh provide a summary of simple dos and don'ts for experimentally analyzing algorithms [Gent1994]. For an excellent introduction to empirical research and experimental design in artificial intelligence see Cohen's book "Empirical Methods for Artificial Intelligence" [Cohen1995].
The theme of the classical works on algorithm testing methodology is that there is a lack of rigor in the field. The following sections will discuss three main problem areas to consider before benchmarking, namely 1) treating algorithms as complex systems that need to be tuned before being applied, 2) considerations when selecting problem instances for benchmarking, and 3) the selection of measures of performance and statistical procedures for testing experimental hypotheses. A final section 4) covers additional best practices to consider. Selecting Algorithm Parameters Optimization algorithms are parameterized, although in the majority of cases the effect of adjusting algorithm parameters is not fully understood. This is because unknown non-linear dependencies commonly exist between the variables, resulting in the algorithm being considered a complex system. Further, one must be careful when generalizing the performance of parameters across problem instances, problem classes, and domains. Finally, given that algorithm parameters are typically a mixture of real and integer numbers, exhaustively enumerating the parameter space of an algorithm is commonly intractable. There are many solutions to this problem such as self-adaptive parameters, meta-algorithms (for searching for good parameter values), and methods of performing sensitivity analysis over parameter ranges. A good introduction to the parameterization of genetic algorithms is Lobo, Lima et al. [Lobo2007]. The best and self-evident place to start (although often ignored [Eiben2002]) is to investigate the literature and see what parameters have been used historically. Although not a robust solution, it may prove to be a useful starting point for further investigation. The traditional approach is to run an algorithm on a large number of test instances and generalize the results [Schaffer1989]. We, as a field, haven't really come much further than this historical methodology other than perhaps the application of more and differing statistical methods to decrease effort and better support findings. A promising area of study involves treating the algorithm as a complex system, where problem instances may become yet another parameter of the model [Saltelli2002] [Campolongo2000]. From here, sensitivity analysis can be performed in conjunction with statistical methods to discover parameters that have the greatest effect [Chan1997] and perhaps generalize model behaviors. Francois and Lavergne [Francois2001] mention the deficiencies of the traditional trial-and-error and experienced-practitioner approaches to parameter tuning, further suggesting that seeking general rules for parameterization will lead to optimization algorithms that offer neither convergent nor efficient behaviors. They offer a statistical model for evolutionary algorithms that describes a functional relationship between algorithm parameters and performance. Nannen and Eiben [Nannen2007] [Nannen2006] propose a statistical approach called REVAC (previously Calibration and Relevance Estimation) to estimating the relevance of parameters in a genetic algorithm. Coy, Golden et al. [Coy2001] use a statistical steepest-descent procedure for locating good parameters for metaheuristics on many different combinatorial problem instances. Bartz-Beielstein [Bartz-Beielstein2003] used a statistical experimental design methodology to investigate the parameterization of the Evolutionary Strategy (ES) algorithm.
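To make the "repeated runs, then generalize" idea above concrete, here is a minimal, hypothetical sketch in Python (the toy optimizer, parameter grid, and run counts are illustrative, not taken from any of the cited studies): a simple stochastic hill climber with a single parameter, its mutation step size, is evaluated over repeated runs on the sphere function, and the setting with the best median best-of-run cost is selected.

```python
import random
import statistics

def sphere(x):
    """Canonical unimodal test function; global minimum of 0.0 at the origin."""
    return sum(xi * xi for xi in x)

def hill_climber(step_size, n_iterations=500, n_dims=5, bounds=(-5.0, 5.0)):
    """A toy stochastic hill climber whose behaviour depends on one parameter, step_size."""
    current = [random.uniform(*bounds) for _ in range(n_dims)]
    best_cost = sphere(current)
    for _ in range(n_iterations):
        candidate = [xi + random.gauss(0.0, step_size) for xi in current]
        cost = sphere(candidate)
        if cost < best_cost:  # greedy acceptance of improving moves only
            current, best_cost = candidate, cost
    return best_cost

def tune_step_size(candidate_steps, repeats=30):
    """Evaluate each parameter setting over repeated runs; pick the best median quality."""
    medians = {}
    for step in candidate_steps:
        samples = [hill_climber(step) for _ in range(repeats)]
        medians[step] = statistics.median(samples)
    best = min(medians, key=medians.get)
    return best, medians

if __name__ == "__main__":
    best_step, medians = tune_step_size([0.01, 0.05, 0.1, 0.5, 1.0])
    for step in sorted(medians):
        print(f"step_size={step:<5} median best-of-run cost={medians[step]:.4f}")
    print("selected step_size:", best_step)
```

The same repeated-measurement loop underlies the more sophisticated approaches mentioned above (racing, REVAC, sensitivity analysis); they differ mainly in how candidate parameter settings are proposed and how quickly poor settings are discarded.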
A sequential statistical methodology is proposed by Bartz-Beielstein, Parsopoulos et al. [Bartz-Beielstein2004] for investigating the parameterization of, and comparisons between, the Particle Swarm Optimization (PSO) algorithm, the Nelder-Mead Simplex Algorithm (direct search), and the Quasi-Newton algorithm (derivative-based). Finally, an approach that is popular within the metaheuristic and Ant Colony Optimization (ACO) community is to use automated Monte Carlo and statistical procedures for sampling the discretized parameter space of algorithms on benchmark problem instances [Birattari2002]. Similar racing procedures have also been applied to evolutionary algorithms [Yuan2004]. Problem Instances This section focuses on issues related to the selection of function optimization test instances, but the general theme of cautiously selecting problem instances is generally applicable. Common lists of test instances include: De Jong [Jong1975], Fogel [Fogel1995], and Schwefel [Schwefel1995]. Yao, Liu et al. [Yao1999] list many canonical test instances, as do Schaffer, Caruana et al. [Schaffer1989]. Gallagher and Yuan [Gallagher2006] review test function generators and propose a tunable mixture-of-Gaussians test problem generator. Finally, MacNish [MacNish2005] proposes using fractal-based test problem generators via a web interface. The division of test problems into classes is another axiom of modern optimization algorithm research, although the issues with this methodology are the taxonomic criteria for problem classes and the selection of problem instances for those classes. Eiben and Jelasity [Eiben2002] strongly support the division of problem instances into categories and encourage the evaluation of optimization algorithms over a large number of test instances. They suggest several ways in which such classes could be defined. English [English1996] suggests that many functions in the field of EC are selected based on structures in the response surface (as demonstrated in the above examples), and that they inherently contain a strong Euclidean bias. The implication is that the algorithms already have some a priori knowledge about the domain built into them and that results are always reported on a restricted problem set. This is a reminder that instances are selected to demonstrate algorithmic behavior, rather than performance. Measures and Statistical Methods There are many ways to measure the performance of an optimization algorithm for a problem instance, although the most common involves a quality (efficacy) measure of the solution(s) found (see the following for lists and discussion of common performance measures [Bartz-Beielstein2004] [Birattari2005a] [Hughes2006] [Eiben2002] [Barr1995]). Most biologically inspired optimization algorithms have a stochastic element, typically in their starting position(s) and in the probabilistic decisions made during sampling of the domain. Thus, the performance measurements must be repeated a number of times to account for the stochastic variance, which could also be a measure of comparison between algorithms. Irrespective of the measures used, sound statistical experimental design requires the specification of 1) a null hypothesis (no change), 2) alternative hypotheses (difference, directional difference), and 3) acceptance or rejection criteria for the hypothesis. The null hypothesis is commonly stated as the equality between two or more central tendencies (means or medians) of a quality measure in a typical case of comparing stochastic-based optimization algorithms on a problem instance.
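As a minimal illustration of the measurement side (the function, optimizer, and run count below are illustrative choices, not prescriptions from the sources above), the following sketch runs a baseline random search repeatedly on a canonical multimodal instance and treats the best-of-run costs as the sample that any subsequent hypothesis test would operate on.

```python
import math
import random
import statistics

def rastrigin(x):
    """Canonical multimodal test instance; global minimum of 0.0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def random_search(n_evals=2000, n_dims=5, bounds=(-5.12, 5.12)):
    """Baseline stochastic optimizer: keep the best of n_evals uniform random samples."""
    best = float("inf")
    for _ in range(n_evals):
        x = [random.uniform(*bounds) for _ in range(n_dims)]
        best = min(best, rastrigin(x))
    return best

# The optimizer is stochastic, so a single run says little: repeat it and treat the
# best-of-run costs as a sample. A later test would state H0 as, e.g., "the median
# best-of-run cost of algorithm A equals that of algorithm B on this instance".
repeats = 30
sample = [random_search() for _ in range(repeats)]
print(f"runs={repeats}  mean={statistics.mean(sample):.3f}  "
      f"median={statistics.median(sample):.3f}  stdev={statistics.stdev(sample):.3f}")
```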
Peer, Engelbrecht et al. [Peer2003] and Birattari and Dorigo [Birattari2005a] provide a basic introduction (suitable for an algorithm-practitioner) into the appropriateness of various statistical tests for algorithm comparisons. For a good introduction to statistics and data analysis see Peck et al. [Peck2005], for an introduction to non-parametric methods see Hollander and Wolfe [Hollander1999], and for a detailed presentation of parametric and nonparametric methods and their suitability for application see Sheskin [Hughes2006]. For an excellent open source software package for performing statistical analysis on data see the R Project. (R Project is online at http://www.r-project.org) To summarize, parametric statistical methods are used for interval and ratio data (like a real-valued performance measure), and nonparametric methods are used for ordinal, categorical and rank-based data. Interval data is typically converted to ordinal data when salient constraints of desired parametric tests (such as assumed normality of distribution) are broken, so that the less powerful nonparametric tests can be used. The use of nonparametric statistical tests may be preferred, as some authors [Peer2003] [Chiarandini2005] claim the distribution of cost values is very asymmetric and/or not Gaussian. It is important to remember that most parametric tests degrade gracefully. Chiarandini, Basso et al. [Chiarandini2005] provide an excellent case study for using the permutation test (a nonparametric statistical method) to compare stochastic optimizers by running each algorithm once per problem instance, and multiple times per problem instance. While rigorous, their method appears quite complex and their results are difficult to interpret. Barrett, Marathe et al. [Barrett2003] provide a rigorous example of applying the parametric Analysis of Variance (ANOVA) test to three different heuristic methods on a small sample of scenarios. Reeves and Wright [Reeves1995] [Reeves1995a] also provide an example of using ANOVA in their investigation into epistasis on genetic algorithms. In their tutorial on the experimental investigation of heuristic methods, Rardin and Uzsoy [Rardin2001] warn against the use of statistical methods, citing their rigidity as a problem and stressing the importance of practical significance over statistical significance. They go on, in the face of their own objections, to provide an example of using ANOVA to analyze the results of an illustrative case study. Finally, Peer, Engelbrecht et al. [Peer2003] highlight a number of case study example papers that use statistical methods inappropriately. In their OptiBench system and method, algorithm results are standardized, ranked according to three criteria and compared using the Wilcoxon Rank-Sum test, a non-parametric alternative to the commonly used Student's t-test. Other Another pervasive problem in the field of optimization is the reproducibility (implementation) of an algorithm. An excellent solution to this problem is making source code available by creating or collaborating with open-source software projects. This behavior may result in implementation standardization, a reduction in the duplication of effort for experimentation and repeatability, and perhaps more experimental accountability [Eiben2002] [Peer2003]. Peer, Engelbrecht et al. [Peer2003] stress the need to compare to state-of-the-art implementations rather than the historic canonical implementations to give a fair and meaningful evaluation of performance.
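A minimal sketch of such a comparison, assuming SciPy is available (the toy optimizer and its two parameterizations below are hypothetical stand-ins for two competing algorithms): the Wilcoxon rank-sum test, exposed in SciPy as the equivalent Mann-Whitney U test, is applied to two samples of best-of-run costs gathered on the same problem instance.

```python
import random
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum / Mann-Whitney U test

def sphere(x):
    return sum(xi * xi for xi in x)

def best_of_run(step_size, n_iterations=500, n_dims=5, bounds=(-5.0, 5.0)):
    """One run of a toy stochastic hill climber; returns the best cost found."""
    current = [random.uniform(*bounds) for _ in range(n_dims)]
    best = sphere(current)
    for _ in range(n_iterations):
        candidate = [xi + random.gauss(0.0, step_size) for xi in current]
        cost = sphere(candidate)
        if cost < best:
            current, best = candidate, cost
    return best

# Two configurations standing in for two algorithms, each run 30 times on the same instance.
runs_a = [best_of_run(step_size=0.1) for _ in range(30)]
runs_b = [best_of_run(step_size=1.0) for _ in range(30)]

# H0: both samples of best-of-run costs come from the same distribution.
stat, p_value = mannwhitneyu(runs_a, runs_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the observed difference is unlikely if the distributions were equal.")
else:
    print("Fail to reject H0: no significant difference detected at the 0.05 level.")
```

Because the test is rank-based, it makes no normality assumption about the cost values, which matches the concern raised above about asymmetric, non-Gaussian cost distributions.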
Another area that is often neglected is that of algorithm descriptions, particularly in regard to reproducibility. Pseudocode is often used, although (in most cases) in an inconsistent manner and almost always without reference to a recognized pseudocode standard or mathematical notation. Many examples are a mix of programming languages, English descriptions and mathematical notation, making them difficult to follow, and commonly impossible to implement in software due to incompleteness and ambiguity. An excellent tool for comparing optimization algorithms in terms of their asymptotic behavior, from the field of computational complexity, is Big-O notation [Cormen2001]. In addition to clarifying aspects of the algorithm, it provides a problem-independent way of characterizing an algorithm's space and/or time complexity. Summary It is clear that there is no silver bullet to experimental design for empirically evaluating and comparing optimization algorithms, although there are as many methods and options as there are publications on the topic. The field of stochastic optimization has not yet agreed upon general methods of application like the field of data mining (processes such as Knowledge Discovery in Databases (KDD) [Fayyad1996]). Although these processes are not experimental methods for comparing machine learning algorithms, they do provide a general model to encourage the practitioner to consider important issues before application of an approach. Finally, it is worth pointing out a somewhat controversially titled paper by De Jong [Jong1992] that provides a reminder that although the genetic algorithm has been shown to solve function optimization, it is not innately a function optimizer, and function optimization is only a demonstration of this complex adaptive system's ability to learn. It is a reminder to be careful not to link an approach too tightly with a domain, particularly if the domain was chosen for demonstration purposes.
http://www.cleveralgorithms.com/nature-inspired/advanced/racing_algorithms.html
Artificial intelligence (AI) can improve the efficiency and effectiveness of treatments in clinical healthcare settings. However, it's important to remember that algorithms trained on insufficiently diverse data can exhibit data bias. As medical centers incorporate more and more AI-driven innovation, this bias can inadvertently contribute to widening healthcare disparities. In healthcare, data bias poses serious risks for patients. For this reason, AI algorithms can either deliver on the promise of democratizing healthcare or exacerbate inequalities. And both are happening today. The good news, however, is that how AI is applied in healthcare is entirely within our control. The application of AI to medicine, such as medical imaging, diagnostics, and surgery, will change the relationship between patients and doctors and is set to improve patient outcomes. Algorithms already handle much of the routine work for doctors, giving them more time to draw up an individual treatment plan for each patient. But AI can be biased.

What is bias in artificial intelligence?
People tend to believe in decisions made by computers and assume that whatever outcome an AI algorithm produces is objective and impartial. However, the output of any AI algorithm is shaped by its input data. When people select the input data for an algorithm, human biases can surface unintentionally. Today's world is battling systematic bias in mainstream social institutions, and healthcare centers need technologies that reduce health inequalities rather than exacerbate them. Biases can arise at any stage in the development and deployment of AI. For example, the datasets selected to train an algorithm can introduce bias, as can applying an algorithm in contexts other than those for which it was originally trained. We'll explore these concepts more in the next section. The most common source of data bias in AI is input information that doesn't sufficiently represent the target population, which can have adverse effects on that population. In practice, evidence suggests there is a great deal of bias in technology and AI. Let's look at four major examples of data bias.

One example of racial inequality in the healthcare industry is a study published in The New England Journal of Medicine in 2020. It caused a stir in the medical community by exposing racial bias in pulse oximetry sensors. The authors found that Black patients were significantly more likely than their white counterparts to have hypoxemia that went undetected, despite comparable pulse oximeter readings. Because the oximeters did not accurately detect low blood oxygenation in Black patients, those patients could receive less oxygen therapy and face greater risk from hypoxemia. Given the prevalence of hypoxemia in COVID-19 patients, this research represents a particularly relevant example of data bias.

As another example, many skin-image analysis algorithms have been trained largely on images of white patients. Although such algorithms are now used much more widely for diagnosis in non-white populations, they can overlook malignant melanomas in people with darker skin. In a related case, a risk-scoring algorithm that relied on past healthcare spending erroneously assigned Black patients the same level of risk as healthier white patients, because historically less money has been spent on Black patients than on white patients with the same level of need.
In the future, AI algorithms that analyze radiological images faster and more accurately than humans are expected to increase radiologists' efficiency and take over some of their responsibilities. But the AI can produce inaccurate analyses due to biased input data. Some diseases manifest differently in women and men, be it cardiovascular disease, diabetes, or mental disorders such as depression and autism. If algorithms fail to account for sex differences, care inequalities between the sexes can be exacerbated. Therefore, AI algorithms need to be trained using datasets drawn from different populations. However, this is not yet happening.

Socioeconomic status (SES) affects people's health and the care they receive. For example, people with lower SES are more likely to have poorer health, lower life expectancy, and a higher incidence of chronic disease. Moreover, fewer diagnostic tests and fewer drugs are available to lower-SES populations with chronic disease. This population also has limited access to health care because of the cost of insurance coverage, or the lack of it. Medical practitioners' implicit bias related to SES leads to inequalities in health care. And when training data are collected from private clinics, where there are almost no low-SES patients, cases and the possibly unique symptoms of lower-SES patients are lost.

A team at the University of Toronto used an artificial intelligence algorithm to identify language disorders that may be an early sign of Alzheimer's disease. The technology was intended to make diagnosis easier. However, the algorithm was trained with speech samples from Canadian English speakers, and in practice it turned out to be useful for identifying language disorders only in speakers of Canadian English. This put Canadian French speakers and those using other English dialects at a disadvantage when it came to diagnosing Alzheimer's disease. AI is capable of understanding human language, but which language? The simple answer is the language or dialect it was taught. Unfortunately, this creates a bias that patients and healthcare providers must guard against.

All four examples above show how bias can become ingrained in an algorithm, either through bias in the selection of research subjects from whom training data is collected, or through inappropriate selection of the features the algorithm is trained on. However, there are also many situations where clinicians themselves introduce bias into algorithms, as in the SES example above, or through the way clinicians interact with algorithms. Although it's clear that AI bias in healthcare is a problem, the problem is difficult to overcome. It is not enough to simply have a dataset that represents the patient population you plan to analyze with an algorithm. We need to understand that in designing an algorithm, we naturally insert our own way of thinking. If we can select data and train algorithms in a way that actually erases the biases that human thinking can introduce, it is possible to gain greater objectivity through AI.

Are there any proven ways to tackle data bias in artificial intelligence?
First, we must remember that bias can appear at any stage in the algorithm creation process, from research design, data collection, algorithm design, model selection, and implementation, to the dissemination of results.
Thus, combating bias requires that teams working on a given algorithm include professionals with different backgrounds and perspectives, including doctors, and not just data scientists with a technical understanding of AI. The sharing of medical data should become more commonplace. But the sanctity of medical data and the strength of privacy laws create strong incentives for data protection and severe consequences for privacy breaches. There will always be a certain degree of bias, because injustice in society affects who can create algorithms and how they are used. Therefore, it will take time to establish normative action and collaboration between government, academia, and civil society. At the same time, we must think about vulnerable groups of people and work to protect them.

Today, more healthcare professionals are at least aware of dataset-related bias in AI. Many companies are taking active steps to promote diversity, fairness, and inclusion in their teams. Even when institutions want to share data, however, a lack of interoperability between medical record systems remains a significant technical hurdle. If we want the AI of tomorrow to be not only robust but also fair, we must create a technical and regulatory infrastructure that makes the diverse data needed to train AI algorithms available.

Healthcare is being transformed by a growing number of data sources that are continuously collected, transmitted, and fed to artificial intelligence systems. For new technologies to be accurate, they must be inclusive and reflect the needs of different populations. Addressing the complex challenges of AI bias will require collaboration between data scientists, healthcare providers, consumers, and regulators. Data bias, information gaps, and a lack of data standards, common metrics, and interoperable structures represent the biggest threats to a transition to equitable AI. Incorporating open science principles into AI development and assessment tools can strengthen the integration of AI in medicine and open up space for different voices to participate in its use.
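As a minimal, purely hypothetical sketch of the kind of pre-deployment check implied by the discussion above (the data, the subgroups, and the 30% miss rate are all invented), one can compare a model's sensitivity across patient subgroups before trusting its output:

```python
# Hypothetical subgroup audit: compare sensitivity (true-positive rate) of a
# model's predictions across demographic groups. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

y_true = rng.integers(0, 2, size=1000)                   # ground-truth labels
group = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])  # group B is under-represented
y_pred = y_true.copy()
# Simulate a model that misses 30% of true positives in the smaller group B
missed = (group == "B") & (y_true == 1) & (rng.random(1000) < 0.3)
y_pred[missed] = 0

for g in ("A", "B"):
    positives = (group == g) & (y_true == 1)
    sensitivity = (y_pred[positives] == 1).mean()
    print(f"group {g}: sensitivity = {sensitivity:.2f} over {positives.sum()} positives")
# A large sensitivity gap between groups is exactly the warning sign of data bias
# described above, and is worth checking before an algorithm reaches patients.
```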
https://postindustria.com/data-bias-in-ai-how-to-solve-the-problem-of-possible-data-manipulation/
To link to this item, DOI: 10.1007/s11269-015-1092-x
Abstract/Summary
The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while also considering the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four algorithms were selected to implement the methodology: a PseudoGenetic Algorithm (PGA), Particle Swarm Optimization (PSO), a Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient algorithm for less complex problems. However, the main contribution of this work is that the proposed efficiency ratio provides a neutral strategy to compare optimization algorithms and may be useful in the future to select the most appropriate algorithm for different types of optimization problems.
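The abstract does not reproduce the paper's actual formula for E, so the following Python sketch is only an invented stand-in that captures the stated idea of trading off solution quality against computational effort; the costs and evaluation budgets are hypothetical.

```python
# Illustrative (not the paper's) efficiency-style score: reward solutions close
# to the best-known cost and penalise the fraction of the evaluation budget used.
def efficiency(best_known_cost, obtained_cost, evaluations, max_evaluations):
    quality = best_known_cost / obtained_cost   # 1.0 means the best-known cost was matched
    effort = evaluations / max_evaluations      # fraction of the simulation budget spent
    return quality * (1.0 - effort)

# Two hypothetical algorithms on the same benchmark network
print(efficiency(best_known_cost=6.1e6, obtained_cost=6.2e6,
                 evaluations=80_000, max_evaluations=500_000))   # fast, slightly worse solution
print(efficiency(best_known_cost=6.1e6, obtained_cost=6.11e6,
                 evaluations=350_000, max_evaluations=500_000))  # slower, nearly optimal solution
```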
https://centaur.reading.ac.uk/51065/
With machine learning models, explainability is difficult and elusive
The enterprise's demand for explainable AI is merited, say experts, but the problem is more complicated than most of us understand and possibly unsolvable.
NEW YORK CITY -- The push by enterprises for explainable artificial intelligence is shining a light on one of the problematic aspects of machine learning models. That is, if the models operate in so-called black boxes, they don't give a business visibility into why they've arrived at the recommendations they do. But, according to experts, the enterprise demand for explainable artificial intelligence overlooks a number of characteristics of current applications of AI, including the fact that not all machine learning models require the same level of interpretability. "The importance of interpretability really depends on the downstream application," said Zoubin Ghahramani, professor of information engineering at the University of Cambridge and chief scientist at Uber Technologies Inc., during a press conference at the recent Artificial Intelligence Conference hosted by O'Reilly Media and Intel AI. A machine learning model that automatically captions an image would not need to be held to the same standards as machine learning models that determine how loans should be distributed, he contended. Plus, companies pursuing interpretability may buy into a false impression that achieving such a feat automatically equates to trustworthiness -- when it doesn't. "If we focus only on interpretability, we're missing ... the 15 other risks that [need to be addressed to achieve] real trustworthy AI," said Kathryn Hume, vice president of product and strategy at Integrate.ai, an enterprise AI software startup. And we mistakenly believe that cracking open the black box is the only way to peer inside when there are other methods, such as an outcomes-oriented approach that looks at specific distributions and outcomes, that could also provide "meaningful transparency," she said. The fact that companies tend to oversimplify the problem of achieving trustworthy artificial intelligence, however, doesn't negate the need to make machine learning models easier to interpret, Hume and the other experts gathered at the press conference stressed. "I think there are enough problems where we absolutely need to build transparency into the AI systems," said Tolga Kurtoglu, CEO at Xerox's PARC. Indeed, Ghahramani pointed to examples, such as debugging an AI system or complying with GDPR, where transparency would be very helpful. Instead, the experts suggested that interpretability be seen for what it is: one method of possibly many for building trustworthy artificial intelligence that is, by its very nature, a little murky itself. "When we think about what interpretability might mean and when it's not a technical diagnosis, we're asking, 'Can we say why x input led to y output?'" Hume said. "And that's a causal question imposed upon a correlative system." In other words, usually not answerable in an explicit way.

Probabilistic machine learning
Deep learning -- arguably the most hotly pursued subset of machine learning -- is especially challenging for businesses accustomed to traditional analytics, Ghahramani said.
The algorithms require incredible amounts of data and computation, they lack transparency, are poor at representing uncertainty, are difficult to trust and can be easily fooled, he said. Ghahramani illustrated this by showing the audience two images -- a dog and a school bus. Initially, an image recognition algorithm successfully labeled the two images. "But then you add a little bit of pixel noise in a very particular way, and it confidently ... gets it wrong," he said. Indeed, the algorithm classified both images as ostrich. It's a big problem, he said, "and so we really need machine learning systems that know what they don't know." One of the ways to achieve this is a methodology called probabilistic machine learning. Probabilistic machine learning is a way of accounting for uncertainty. It uses Bayes' Rule, a mathematical formula that, very simply, calculates the probability of what's to come based on prior probabilities and the observable data generated by what happened. "The process of going from your prior knowledge before observing the data to your posterior knowledge after observing the data is exactly learning," Ghahramani said. "And what you gain from that is information."

The Mindfulness Machine
Like most IT conferences, the O'Reilly show also had a marketplace of vendors showing off their wares. But enterprise tech wasn't the only thing on display. The Mindfulness Machine, an art installation by Seb Lee-Delisle, was also there. Originally commissioned by the Science Gallery Dublin for its "Humans Need Not Apply" exhibition, it sat quietly against a wall across from conference rooms where sessions were being held. And it colored. Indeed, the Mindfulness Machine is a robot equipped with sensors that track the surrounding environment, including weather, ambient noise and how many people are watching it. The data establishes a "mood," which then influences the colors the machine uses in the moment. When I walked by on my way to lunch, the Mindfulness Machine was using an earthy brown to fill in a series of wavy lines. Its mood? Melancholy.
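Returning to the prior-to-posterior step Ghahramani describes: as a small, self-contained illustration (the prior and the observed counts are invented, and SciPy is an assumed dependency), a Beta-Binomial update makes Bayes' Rule concrete.

```python
# Minimal Bayesian updating example: start from a prior belief about an unknown
# success probability, observe data, and apply Bayes' rule to get the posterior.
# With a Beta prior and Binomial observations the update has a closed form.
from scipy.stats import beta

prior_a, prior_b = 2, 2       # Beta(2, 2) prior: weakly centred on 0.5
successes, failures = 7, 3    # hypothetical observations

post_a, post_b = prior_a + successes, prior_b + failures
posterior = beta(post_a, post_b)

print(f"prior mean     = {prior_a / (prior_a + prior_b):.2f}")
print(f"posterior mean = {posterior.mean():.2f}")
print(f"95% credible interval = {posterior.interval(0.95)}")
```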
https://www.techtarget.com/searchcio/news/252440924/With-machine-learning-models-explainability-is-difficult-and-elusive
Within the UK Military Cyber domain, much is discussed about the topics of non-kinetic effects, Cyber security and cryptography. Rightly so, these subjects draw a lot of attention as they raise the immediate questions of the ethics of automating processes within the ‘kill-chain’ and the protecting of our digitally held information. As important as these subjects are, they appear to be shrouding other highly relevant subjects in the field, namely the use of computers to do what they were created to do, that is computation. The fields of Artificial Intelligence (AI) and Machine Learning (ML) are expanding in commercial and academic centres, with what seem like daily breakthroughs being announced in the media, from automated accountants to driverless cars. This article suggests that we as the military should be looking at how to integrate AI into our processes, not only to make our decision making immeasurably better, but also to guard against the risk of falling behind the inevitable wave of technology that is sweeping our world.

Principles
The subjects of AI and ML overlap in many ways but can differ slightly in their approaches to problem solving. Broadly speaking, AI may be defined as solving a problem algorithmically where the method of solution is known, so that a computer can apply the algorithm to solve the problem much more quickly than a human mind could, many times over, without tiring. In the case of ML, the way in which the problem is solved is not necessarily known in the first instance. A program is ‘trained’ by being shown the correct solutions to a set of problems and, by ‘learning’ how to reach those solutions, teaches itself an algorithm that it can then apply to subsequent, similar problems. Additionally, any further solutions found by this method improve future results, just as a human increases their chances of success in an exam by revising more questions.

Examples
The following examples highlight just how powerful computation has become and the benefits that may be gained when it is used to its potential. The most used single-page web application to date is Google Maps. Using powerful point-to-point route-finding algorithms like the Dijkstra algorithm, it is able to tell a user how long it will take to get from one end of a continent to another in a matter of seconds, whether they are travelling by car, bicycle or public transport. A possible application relevant to Air Manoeuvre might be rapid tactical route planning for a C130 low level airdrop mission, where the point of departure and location of an airdrop are known. The program would then select the best possible route to take given any number of factors including distance, time, risk, etc. Another method, called natural language processing, enables an AI agent to read a host of documents and make inferences about their contents, a job that would take a human several orders of magnitude more time than it would the computer. The practicality of this has already been demonstrated by the ‘lawtech’ firm Linklaters, who have created a program called Verifi which can sift through 14 UK and European regulatory registers to check client names for banks and process thousands of names overnight. A junior lawyer would take an average of 12 minutes to search each customer name.
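As a hedged sketch of the point-to-point route finding mentioned above (the waypoints and edge costs are invented, and a real planner would weight edges by distance, time or risk), a minimal Dijkstra implementation in Python:

```python
# Minimal Dijkstra shortest-path search over a weighted graph. Node names and
# edge costs are illustrative only; costs could encode distance, time or risk.
import heapq

def dijkstra(graph, start, goal):
    # graph: dict mapping node -> list of (neighbour, cost) pairs
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical planning graph from a departure point to a drop zone
graph = {
    "depart": [("wp1", 40), ("wp2", 55)],
    "wp1": [("wp3", 30), ("dropzone", 90)],
    "wp2": [("wp3", 20)],
    "wp3": [("dropzone", 35)],
}
print(dijkstra(graph, "depart", "dropzone"))  # (105, ['depart', 'wp1', 'wp3', 'dropzone'])
```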
A possible military application of this natural language processing method would be to ‘feed’ an AI agent a host of administrative JSPs so that a service person could submit a routine request that would usually go to their unit admin cell (an expense claim, a leave request or an RFI on unit numbers or competencies, to name but a few examples) and get a timely, correct answer at any time of day. These two examples just scratch the surface of what the power of computation can provide, but it is important to note that we cannot (yet, if ever) expect these so-called AI agents to act totally autonomously. Just as an Air Force with highly technical equipment requires highly technical people, a military with integrated AI processes requires people who understand the systems they are working with so that those systems may be monitored, tweaked and improved over time. The benefit of these systems, which would require supervision just as a junior technician does, is that we need not concern ourselves with career progression or with how many hours a day a computer may work. The computer will not be posted to another unit and take with it valuable knowledge gained over the course of a tour, as the junior technician might. The computer will stay to do the job that it was created for: to compute. Arguably most important in these times of tight governmental purse strings, the computer will not require a salary. Of course, it will cost money up front to invest in the technology, but there is a crossover point at which the outlay cost of automation becomes worth it.

Summary
The range of tasks that may be solved by a computer is limited only by the imagination of the person sat at the keyboard. In the field of AI, if a process can be written down explicitly, it can be translated into pseudo-code, algorithms can be applied and experimented with, and the result can finally be written into real code and used as a program to solve problems, tweaked as needed to make the problem solving even more efficient. In the field of ML the process need not even be fully known; all that is required is a suitable set of solutions known to be correct, and the program can learn how to solve the problem itself. This is not to say that these techniques are easy to handle. A whole range of background knowledge in Mathematics and Computer Science is required to build such tools, but once wielded, these tools may well provide computational power unknown to any military before now and would allow the creation of maintainable efficiencies across the MOD as a whole.

The views expressed within individual posts and media are those of the author and do not reflect any official position or that of the author's employer. Concerns regarding content should be addressed to the Wavell Room through the contact form.

Peter A
Peter has been a C-130 pilot for a number of years and has a keen interest in the benefits that modern technology can offer Defence.
https://wavellroom.com/2017/07/22/computational-advantage-defences-missed-opportunities-in-artificial-intelligence/
Abstract: Automated Machine Learning (Auto-ML) is the cousin of Hyperheuristics for Machine Learning. It has become widely popular since the term was coined in 2013, when it was first used to build complete machine learning pipelines - a sequence of steps to solve a particular problem that may include preprocessing (e.g., feature selection), classification, and postprocessing (e.g., ensemble-like methods). In recent years, the area has shifted from searching over ML pipelines to searching over the architectures of complex neural networks, a field known as Neural Architecture Search (NAS). In both cases, the most popular search methods are mainly based on Bayesian Optimization or Evolutionary Algorithms, while reinforcement learning is also popular for NAS. However, the search space of AutoML problems, in general, is complex, including categorical, discrete, continuous, and conditional variables. This talk presents work that has been done to better understand these search spaces, looking mainly at how to define neighborhoods and generate measures of fitness correlation and neutrality. This is essential to grasp which methods are more promising in different scenarios and to develop more appropriate search mechanisms that take advantage of the structure of these spaces.

Short Bio: Gisele Pappa is an Associate Professor in the Computer Science Department at UFMG, Brazil. She has served as a GECCO Self-* track co-chair in past editions and has also been responsible for both the tutorials and workshops at GECCO. She is an associate editor of the Genetic Programming and Evolvable Machines journal and has an extensive publication record at the intersection of machine learning and evolutionary computation. She has also been actively researching the use of EAs for automated machine learning (AutoML), and currently looks at the search spaces of these algorithms and how they can be effectively explored. Other research interests are in genetic programming and its applications to both classification and regression tasks, focusing on applications for health data and also fraud detection.

The main objective of this workshop is to discuss hyper-heuristics and algorithm configuration methods for the automated generation and improvement of algorithms, with the goal of producing solutions (algorithms) that are applicable to multiple instances of a problem domain. The areas of application of these methods include optimization, data mining and machine learning [1-18,23]. Automatically generating and improving algorithms by means of other algorithms has been the goal of several research fields, including Artificial Intelligence in the early 1950s, Genetic Programming since the early 1990s, and more recently automated algorithm configuration and hyper-heuristics. The term hyper-heuristics generally describes meta-heuristics applied to a space of algorithms. While Genetic Programming has most famously been used to this end, other evolutionary algorithms and meta-heuristics have successfully been used to automatically design novel (components of) algorithms. Automated algorithm configuration grew from the necessity of tuning the parameter settings of meta-heuristics, and it has produced several powerful (hyper-heuristic) methods capable of designing new algorithms by either selecting components from a flexible algorithmic framework [3,4] or recombining them following a grammar description.
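Tying the abstract's description of mixed search spaces (categorical, discrete, continuous, and conditional variables) to the component-selection view of algorithm configuration just described, the following toy Python sampler is purely illustrative; the components and hyperparameter ranges are invented.

```python
# Toy AutoML-style configuration space with categorical, discrete, continuous,
# and conditional variables: some hyperparameters only exist once a particular
# component (classifier) has been chosen. All names and ranges are made up.
import random

def sample_pipeline(rng):
    config = {
        "scaler": rng.choice(["none", "standard", "minmax"]),       # categorical
        "classifier": rng.choice(["svm", "random_forest", "knn"]),  # categorical
    }
    if config["classifier"] == "svm":
        config["C"] = 10 ** rng.uniform(-3, 3)                      # continuous (log scale)
        config["kernel"] = rng.choice(["linear", "rbf"])            # conditional categorical
    elif config["classifier"] == "random_forest":
        config["n_estimators"] = rng.randint(10, 500)               # discrete
        config["max_depth"] = rng.randint(2, 20)                    # discrete
    else:  # knn
        config["n_neighbors"] = rng.randint(1, 50)                  # discrete
    return config

rng = random.Random(42)
for _ in range(3):
    print(sample_pipeline(rng))
```

Even defining a sensible neighborhood over such a space is non-trivial, which is exactly the issue the talk addresses.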
Although most evolutionary algorithms are designed to generate specific solutions to a given instance of a problem, one of the defining goals of hyper-heuristics is to produce solutions that solve more generic problems. For instance, while there are many examples of evolutionary algorithms for evolving classification models in data mining and machine learning, a genetic programming hyper-heuristic has been employed to create a generic classification algorithm which in turn generates a specific classification model for any given classification dataset, in any given application domain. In other words, the hyper-heuristic is operating at a higher level of abstraction compared to how most search methodologies are currently employed; i.e., it is searching the space of algorithms as opposed to directly searching in the problem solution space, raising the level of generality of the solutions produced by the hyper-heuristic evolutionary algorithm. In contrast to standard Genetic Programming, which attempts to build programs from scratch from a typically small set of atomic functions, generative hyper-heuristic methods specify an appropriate set of primitives (e.g., algorithmic components) and allow evolution to combine them in novel ways as appropriate for the targeted problem class. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of this approach as the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence. As meta-heuristics are themselves a type of algorithm, they too can be automatically designed employing hyper-heuristics. For instance, in 2007, Genetic Programming was used to evolve mate selection in evolutionary algorithms; in 2011, Linear Genetic Programming was used to evolve crossover operators; more recently, Genetic Programming was used to evolve complete black-box search algorithms [13,14,16], SAT solvers, and FuzzyART category functions. Moreover, hyper-heuristics may be applied before deploying an algorithm (offline) or while problems are being solved (online), or even continuously learn by solving new problems (life-long). Offline and life-long hyper-heuristics are particularly useful for real-world problem solving where one can afford a large amount of a priori computational time to subsequently solve many problem instances drawn from a specified problem domain, thus amortizing the a priori computational time over repeated problem solving. Recently, the design of Multi-Objective Evolutionary Algorithm components was automated. Very little is known yet about the foundations of hyper-heuristics, such as the impact of the meta-heuristic exploring algorithm space on the performance of the thus automatically designed algorithm. An initial study compared the performance of algorithms generated by hyper-heuristics powered by five major types of Genetic Programming.
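As a toy illustration of searching over a space of heuristics rather than solutions (the OneMax problem, the two move operators, and the credit-assignment scheme below are invented simplifications, not any method from the workshop's references), a minimal online selection hyper-heuristic in Python:

```python
# Toy online selection hyper-heuristic: instead of applying one fixed operator,
# it chooses among low-level heuristics, giving more credit to the ones that
# have recently improved the incumbent solution. Purely illustrative.
import random

def onemax(bits):
    return sum(bits)  # toy objective: maximise the number of ones

def flip_one(bits, rng):
    out = bits[:]
    out[rng.randrange(len(out))] ^= 1
    return out

def flip_two(bits, rng):
    out = bits[:]
    for i in rng.sample(range(len(out)), 2):
        out[i] ^= 1
    return out

def selection_hyper_heuristic(n_bits=50, iterations=2000, seed=1):
    rng = random.Random(seed)
    heuristics = [flip_one, flip_two]
    credit = [1.0, 1.0]                    # adaptive weight for each heuristic
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(iterations):
        idx = rng.choices(range(len(heuristics)), weights=credit)[0]
        candidate = heuristics[idx](current, rng)
        if onemax(candidate) > onemax(current):
            credit[idx] += 1.0             # reward strictly improving moves
        if onemax(candidate) >= onemax(current):
            current = candidate
    return onemax(current), credit

print(selection_hyper_heuristic())  # final OneMax value and learned credit per heuristic
```

Generative hyper-heuristics replace the fixed list of moves with evolved combinations of primitives, but the division of labour is the same: the search operates over algorithms, not directly over problem solutions.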
Another avenue for research is investigating the potential performance improvements obtained through the use of asynchronous parallel evolution to exploit the typical large variation in fitness evaluation times when executing hyper-heuristics.

E-mail: [email protected]
López-Ibáñez is a Senior Distinguished Researcher at the University of Málaga (Spain) and a Senior Lecturer (Associate Professor) in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 27 journal papers, 9 book chapters and 48 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling and various combinatorial optimization problems. His current research interests are experimental analysis and automatic design of stochastic optimization algorithms, for single and multi-objective optimization. He is the lead developer and current maintainer of the irace software package (http://iridia.ulb.ac.be/irace).

E-mail: [email protected]
Daniel R. Tauritz is an Associate Professor in the Department of Computer Science and Software Engineering at Auburn University (AU), Interim Director and Chief Cyber AI Strategist of the Auburn Cyber Research Center, the founding Head of AU's Biomimetic Artificial Intelligence Research Group (BioAI Group), a cyber consultant for Sandia National Laboratories, a Guest Scientist at Los Alamos National Laboratory (LANL), and founding academic director of the LANL/AU Cyber Security Sciences Institute (CSSI). He received his Ph.D. in 2002 from Leiden University. His research interests include the design of generative hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.

E-mail: [email protected]
R. Woodward is a Lecturer at Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group, and was employed on the DAASE project. Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, which include both theoretical and empirical contributions, and has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN and the RAF, and three UK universities.
https://bonsai.auburn.edu/ecada/GECCO2021/
Machine learning is a scientific discipline that uses many different algorithms to build models. They help to build smart software systems for medical diagnosis, expenditure optimization, and more. The reason why there are so many different algorithms is that they each work best when applied to different problems. This is captured by the No Free Lunch theorem, which states that no single algorithm performs best across all possible problems. In this post, we will have a look at the most popular groups of algorithms and see what problems they help to solve.

Classification and clustering algorithms
Imagine there are many objects, for example, photos of different fruits, that need to be divided into classes. The program is given a finite set of classes and a number of examples for each one. This set is called a training sample. By processing them, the program learns about the different fruits and can recognize and place them in the correct group, for example, distinguishing between an apple and a banana. In machine learning, the classification task belongs to the supervised learning section. Logistic Regression, Naïve Bayes, Stochastic Gradient Descent, k-Nearest Neighbours, decision trees, random forests, and support vector machines are all examples of classification algorithms. There is also unsupervised learning, in which the division of the training sample into classes is not specified and objects must be grouped only on the basis of their similarity to each other. This type of task is called clustering. K-Means, Mean-Shift, and DBSCAN are used for clustering.

Regression
Simple linear regression is used to model the relationship between two variables, usually numerical ones. Don't confuse linear with logistic regression (which is a classification algorithm). For example, linear regression can be used to predict how the number of square meters in a flat affects its price: usually, the bigger the place, the more it costs.

Neural networks
Neural networks are based on a mathematical model that is somewhat reminiscent of the functioning of our nervous system. We have neurons that form the nervous system, and neural networks have a similar structure. Each neuron is a node of an interconnected system that gets some data as input and produces an output. How numerous incoming signals are combined into an outgoing signal is determined by the calculation algorithm. Organized into a large system, neurons are capable of performing very complex tasks of collecting information, analyzing it, and creating new data. These are just some examples of the algorithms used in machine learning; there are many more. The choice of algorithm depends on the problem you're trying to solve, as well as on the resources and skills that you have: building a neural network is much more time- and resource-intensive than building a Naive Bayes classifier.
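As a minimal sketch of the supervised classification workflow described above (scikit-learn and its bundled iris dataset are used purely for illustration; any labelled dataset would do):

```python
# Minimal supervised classification example: train two of the classifier
# families mentioned above on a small labelled dataset and compare accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (LogisticRegression(max_iter=1000), KNeighborsClassifier(n_neighbors=5)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", round(model.score(X_test, y_test), 3))
```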
https://www.teslasautobiography.com/everything-you-need-to-know-about-popular-machine-learning-algorithms.html
Networks of thousands of sensors present a feasible and economic solution to some of our most challenging problems, such as real-time traffic modeling, weather and environmental monitoring, and military sensing and tracking. Recent advances in sensor technology have made possible the development of relatively low-cost and low-energy-consumption micro sensors, which can be integrated in a wireless sensor network. These devices - Wireless Integrated Network Sensors (WINS) - will enable fundamental changes in applications spanning the home, office, clinic, factory, vehicle, metropolitan area, and the global environment. To meet users' needs for knowledge discovery from sensor streams in these application domains, new data warehousing and data mining techniques have to be developed to extract meaningful, useful and understandable patterns that end users can use for data analysis. Many research projects have been conducted by different organizations regarding wireless sensor networks; however, few of them discuss the sensor stream processing infrastructure or the data warehousing and data mining issues that need to be addressed in sensor network application domains. There is a need for new methodologies to extract interesting patterns in a sensor stream application domain. Since the semantics of sensor stream data is application dependent, the extraction of interesting, novel, and useful patterns from stream data applications becomes domain dependent. Some data warehousing and data mining methods have recently been proposed to mine stream data. For example, in (Manku 2002, Chang 2003, Li 2004, Yang 2004, Yu 2004, Dang 2007), the authors proposed algorithms to find frequent patterns over the entire history of data streams. In (Giannella 2003, Chang 2004, Lin 2005, Koh 2006, Mozafari 2008), the authors use different sliding window models to find recently frequent patterns in data streams. These algorithms focus on mining frequent patterns with one scan over the entire data stream. In (Chi, 2004), Chi et al. consider the problem of mining closed frequent itemsets over a data stream sliding window in the Moment algorithm, and in (Li, 2006), the authors proposed the NewMoment algorithm, which uses a bit-sequence representation of items to reduce the time and memory needed. The CFI-Stream algorithm in (Jiang, 2006) directly computes the closed itemsets online and incrementally without the help of any support information. In (Li, 2008), Li et al. proposed to improve the CFI-Stream algorithm with bitmap coding, named CLIMB (Closed Itemset Mining with Bitmap), over the data stream's sliding window to reduce the memory cost. Besides pattern mining in data stream applications, as the number of data streaming applications grows, there is also an increasing need to perform association mining in data streams. One example application is to estimate missing data in sensor networks (Halatchev, 2005). Another example application is to predict the frequency of Internet packet streams (Demaine, 2002). In the MAIDS project (Cai, 2004), an association mining technique is used to find alarming incidents from data streams. Association mining can also be applied to monitor manufacturing flows (Kargupta, 2004) to predict failures or generate reports based on accumulated web log streams. In (Yang, 2004), (Halatchev, 2005), and (Shin, 2007), the authors proposed using two, three, and multiple frequent-pattern-based methods to perform association rule mining.
In general, these approaches have focused on mining patterns and associations in data streams, without considering an application domain. As a consequence, these methods tend to discover general patterns, which for specific applications can be useless and uninteresting. Stream patterns are usually extracted based on the concept of pattern frequency. With no semantic or domain information, the discovered patterns cannot be applied directly to a specific domain. In this book chapter, we present a data warehousing and mining framework where the users give to the data the semantics that is relevant for the application, and therefore the discovered patterns will refer to a specific domain. We will also discuss the issues needed to be considered in the data warehousing and mining components of this framework for sensor stream applications.
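As a drastically simplified sketch of the sliding-window frequency idea the surveyed algorithms build on (it tracks only single items rather than closed itemsets, keeps exact counts, and uses an invented stream and threshold):

```python
# Simplified sliding-window frequency counter over a transaction stream: keep
# exact counts for the last `window` transactions and report items whose support
# exceeds a threshold. Real stream miners such as Moment, CFI-Stream, and CLIMB
# maintain far more compact summaries and track full (closed) itemsets.
from collections import Counter, deque

def frequent_items(stream, window=4, min_support=0.5):
    buffer, counts = deque(), Counter()
    for transaction in stream:
        buffer.append(transaction)
        counts.update(transaction)
        if len(buffer) > window:              # slide: drop the oldest transaction
            counts.subtract(buffer.popleft())
        threshold = min_support * len(buffer)
        yield {item for item, c in counts.items() if c >= threshold}

stream = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}, {"a", "b"}]
for i, frequent in enumerate(frequent_items(stream), 1):
    print(f"after transaction {i}: frequent items = {sorted(frequent)}")
```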
https://www.igi-global.com/chapter/framework-data-warehousing-mining-sensor/38221
This question is a follow-up of sorts to my earlier question on academic social science and machine learning. Machine learning algorithms are used for a wide range of prediction tasks, including binary (yes/no) prediction and prediction of continuous variables. For binary prediction, common models include logistic regression, support vector machines, neural networks, and decision trees and forests. Now, I do know that methods such as linear and logistic regression, and other regression-type techniques, are used extensively in science and social science research. Some of this research looks at the coefficients of such a model and then re-interprets them. I'm interested in examples where knowledge of the insides of other machine learning techniques (i.e., knowledge of the parameters for which the models perform well) has helped provide insights that are of direct human value, or perhaps even directly improved unaided human ability. In my earlier post, I linked to an example (courtesy Sebastian Kwiatkowski) where the results of naive Bayes and SVM classifiers for hotel reviews could be translated into human-understandable terms (namely, reviews that mentioned physical aspects of the hotel, such as "small bedroom", were more likely to be truthful than reviews that talked about the reasons for the visit or the company that sponsored the visit).

PS: Here's a very quick description of how these supervised learning algorithms work. We first postulate a functional form that describes how the output depends on the input. For instance, the functional form in the case of logistic regression outputs the probability as the logistic function applied to a linear combination of the inputs (features). The functional form has a number of unknown parameters. Specific values of the parameters give specific functions that can be used to make predictions. Our goal is to find the parameter values. We use a huge amount of labeled training data, plus a cost function (which itself typically arises from a statistical model for the nature of the error distribution) to find the parameter values. In the crudest form, this is purely a multivariable calculus optimization problem: choose parameters so that the total error function between the predicted function values and the observed function values is as small as possible. There are a few complications that need to be addressed to get to working algorithms.

So what makes machine learning problems hard? There are a few choice points:
- Feature selection: Figuring out the inputs (features) to use in predicting the outputs.
- Selection of the functional form model
- Selection of the cost function (error function)
- Selection of the algorithmic approach used to optimize the cost function, addressing the issue of overfitting through appropriate methods such as regularization and early stopping.

Of these steps, (1) is really the only step that is somewhat customized by domain, but even here, when we have enough data, it's more common to just throw in lots of features and see which ones actually help with prediction (in a regression model, the features that have predictive power will have nonzero coefficients in front of them, and removing them will increase the overall error of the model). (2) and (3) are mostly standardized, with our choice really being between a small number of differently flavored models (logistic regression, neural networks, etc.).
(4) is the part where much of the machine learning research is concentrated: figuring out newer and better algorithms to find (approximate) solutions to the optimization problems for particular mathematical structures of the data.
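As a small sketch of the kind of model-internals inspection asked about above (the data are synthetic and the feature names are invented; scikit-learn is an assumed dependency), fitting a logistic regression and reading its coefficients:

```python
# Fit a logistic regression on synthetic data and inspect the learned
# coefficients: features with larger-magnitude weights contribute more to the
# predicted log-odds, which is the usual starting point for interpretation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

for name, coef in zip(["feat_0", "feat_1", "feat_2", "feat_3"], model.coef_[0]):
    print(f"{name}: coefficient = {coef:+.3f}")
# Near-zero coefficients mark features with little predictive power for this model;
# whether the larger ones yield genuine human insight is exactly the open question here.
```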
https://www.lesswrong.com/posts/kAARsr6BwZtGbMDvA/question-looking-for-insights-from-machine-learning-that
Short description: Algorithm design and analysis, fundamental data structures, problem solving methodologies, modular programming in C under Unix.
Course Level: Basic
Course page: http://-
Overview
When developing software you will have to solve variations of well known problems, which usually have known solutions that are very well implemented. You will also have to solve new problems, and often they can be addressed using well-known problem solving techniques such as recursion or dynamic programming. More often than not, there will be more than one way to solve a particular problem, and several data structures and algorithms that can be leveraged for implementing the solution. Choosing the right combination for a given application can be tricky. So, an important ingredient to practical problem solving is to understand computational complexity in order to evaluate solution candidates in the context of a given application domain. This course will equip you with a repertoire of basic data structures and algorithms to address daily software development tasks. It will also present common problem solving techniques, and you get to practice these techniques in exercises and mini-projects.
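As a brief illustration of how the choice of technique changes computational complexity (Fibonacci is used only as a stand-in problem, and Python rather than the course's C is an assumption here):

```python
# Naive recursion recomputes the same subproblems exponentially often, while the
# memoised (dynamic-programming) version solves each subproblem once, turning
# exponential time into linear time for the same answer.
from functools import lru_cache

def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(25), fib_memo(25))  # same result, very different cost profiles
```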
https://wiki.hh.se/caisr/index.php/Algorithms_Data_Structures_and_Problem_Solving_(7.5_credits)
Teacher retention has gained nationwide attention in an attempt to combat the nation's teacher shortage. Although teacher candidate retention is widely recognized as a focus area for decreasing turnover rates, it is still a topic that requires further research into the complexities and interdependencies that impact candidates’ success and willingness to stay in the profession. Dr. TaQuana Williams, Academics Director at The New Teacher Project, has recently completed research aimed at improving the retention of novice teacher candidates, with consideration of their experiences within their preparation pathways. Dr. Williams’ research follows her study in Urban Education and Leadership at Johns Hopkins University and her time dedicated to teacher educator development in urban school systems. In a recent conversation with Dr. Williams, she described her research and its impact on the education landscape.

Mia O’Suji: Your research on novice teachers is much needed. Tell us about your dissertation topic.

Dr. Williams: My dissertation centers around novice teacher retention. Specifically, I focus on novice teachers in a southern metropolitan city who are enrolled in an alternative certification program and are participants in the United States Teacher Academy (a pseudonym used for the purpose of my study). USTA is a non-profit organization that recruits and trains novice teachers. Members commit to teaching for two years in a low-income community while receiving ongoing professional development in exchange for an education grant. Throughout the literature review and needs assessment for my study, I found research and evidence to indicate that the experience of a novice teacher actually correlates with the role stressors that are associated with occupational stress (Olson & Osborne, 1991). For example, novice teachers search for an understanding of their roles and a pedagogical understanding of the work (Olson & Osborne, 1991). This is an example of role ambiguity, or the lack of a clear set of role expectations, which is a role stressor that over time leads to burnout (Anbazhagan & Rajan, 2013). Additionally, novice teachers feel apprehension about their abilities to accomplish tasks associated with their roles (Olson & Osborne, 1991) because they are expected to reach the same outcomes as their veteran peers (Lortie, 2002). This is another role stressor, called job difficulty, where an individual has difficulty performing their job due to lack of training, knowledge, or experience (Anbazhagan & Rajan, 2013). Understanding that a major cause of novice teacher attrition is occupational stress and burnout, I sought to study the impact of an intervention that targeted decreasing stress. I found a practice called Balint groups. Balint groups are a community of practice originally designed for general practitioners to combat burnout and occupational stress and to improve retention. Research in other countries has shown the effectiveness of Balint groups with teachers and other professions. During a typical Balint group, a Balint group leader, who is typically a psychologist, facilitates the group through a collective inquiry protocol. During the protocol, a participant shares an interpersonal problem they are facing in their role. Then, members of the group pose questions to the participant who shared. The question process helps to establish the question or problem that must be explored.
Next, participants talk through other perspectives the individual may not see or could be ignoring, while the participant who shared is silent. Finally, we end with the sharer summarizing what they heard and sharing a commitment they will make to resolve their issue. As a part of my study, participants engaged in monthly virtual Balint group meetings for six months during the 2020-21 academic year. A certified American Balint Society psychologist facilitated these Balint group meetings, and they have served as a Balint group leader since 1993. A USTA alumnus, who currently works as a general psychiatrist, served as the co-facilitator. The purpose of this study was to evaluate the use of a Balint group model to influence novice teacher self-efficacy, experience with burnout, and intentions to persist in the classroom.

Mia O’Suji: You mentioned a variety of factors related to novice teachers’ experiences. How did your experience shape your approach to this topic?

Dr. Williams: Novice teacher retention is a passion of mine. I taught in an urban charter school. Following that experience, I worked with novice teachers as a coach. In both settings, I witnessed the revolving door of new faces in our urban schools and the impact that has on students. It really inspired me to want to research the factors that influence novice teacher retention and the overall novice teacher experience further. I discovered that it was not just a problem in my own context, but in the field. Approximately 41% of teachers leave the classroom within the first five years of their role (Alliance for Excellent Education, 2014). This attrition problem has been coined the “Revolving Door Effect,” where novice teachers are hired, usually in difficult contexts with limited resources, are pressured to achieve strong results, but ultimately leave the profession only to be replaced with another novice teacher, continuing the cycle (Ingersoll, 2004). The turnover rate is higher in low-income communities than affluent ones, disproportionately affecting low-income communities of color (R. Ingersoll & Merrill, 2012). So the problem became more than a problem of quality for me, as the larger issue was equity for communities like my own.

Mia O’Suji: Knowing the impact that teacher turnover has on low-income communities of color, what were your findings?

Dr. Williams: My study was a mixed methods study, using both qualitative data from focus groups and open-ended questions and quantitative data from a pre- and post-test. I found that participants' trust in one another increased throughout the intervention. They felt that the Balint group space was a non-judgemental climate where they were able to authentically share their experience with others who could understand. Ultimately, they felt a part of a community that could affirm their experiences while also pushing them to see problems beyond their own perspectives. Participants shared that through the Balint group they were able to deepen their understanding of others’ perspectives, and this increased their abilities to build strong relationships with students and colleagues. This was a major benefit of the intervention, as the ability to build strong relationships was a strength of this study. Furthermore, positive relationships correlated with a decrease in occupational stress. Given that we were in a pandemic, where relationships were weakened by the impact of work-from-home orders, this was even more of a highlight.
In addition to participant responsiveness, my study centered on three outcome constructs--change in burnout, change in self-efficacy, and change in classroom persistence intentions. Quantitative data analysis revealed no significant change in any of the three constructs; however, the qualitative data revealed that participants' self-efficacy regarding their instructional strategies and ability to build meaningful relationships increased. Additionally, participants all wanted to persist in the classroom for the following year and all wanted to pursue roles in education as their lifetime careers. One participant shifted from wanting a role outside of the classroom to wanting to be a lifetime teacher. They attributed the change to their participation in the intervention. For the final construct, burnout, participants shared that the pandemic exacerbated administrative and systemic pressures, leading to more occupational stress. Overall, participant perception of the intervention was positive, and many expressed interest in participating in the experience for the 2021-22 academic year. The quantitative data also suggest that longer, more sustained participation would lead to stronger results. Additionally, participants expressed a desire for in-person meetings over the virtual platform.

Mia O’Suji: How might these findings transform the field of education?

Dr. Williams: While the findings had positive outcomes, I think the study sparked additional research questions that should be explored. Specifically, the findings suggest further research on supports that help novice teachers feel their role requirements are sustainable. In focus groups, participants shared that they do not feel like their roles are sustainable, especially during the pandemic. They revealed that schools need more counselors and on-site support like therapists. They also stated that they did not feel valued or recognized by their administration or by society as a whole. Unsurprisingly, there is previous research indicating that both of these factors influence teacher attrition. However, I have not found research on interventions or structures that can shift these perspectives. I’m curious to see studies on how increasing counseling support and wrap-around services in a district affects the district’s staff retention and the staff’s experience with burnout. Likewise, I’m interested in seeing how regular praise from an individual’s administrator influences their experience with stress, self-efficacy, and ultimately persistence.

Mia O’Suji: What advice might you give to educator preparation programs looking to adjust their support of teacher candidates?

Dr. Williams: The study's findings indicate that Balint groups provide opportunities for educators to strengthen their relationships with students and build community with their peers. Thinking about occupational stress, job satisfaction, and even classroom culture, having strong relationships with your students and colleagues is pivotal. The results of the short intervention suggest that PK-12 school districts and educator preparation programs should implement Balint group structures as a part of their novice teacher programming. Participants wanted to expand the intervention beyond novice teachers to include their veteran peers and even administrators. They shared that they lacked the perspectives of these groups, so including individuals in these roles would allow the group to include additional voices and expand the community of practice.
However, if you were to develop heterogeneous Balint groups, I would suggest that individuals not engage in Balint groups with their own administrators or supervisors, since doing so could keep them from being transparent and trusting. Instead, I recommend that administrators or supervisors engage in Balint groups with other leaders occupying the same role across the district.

Mia O'Suji serves as the Director of Content Development and Programming at CTAPP. She leads organizational efforts related to teacher preparation programming and strategic project planning.
https://www.ctapptx.org/post/the-impact-of-teacher-preparation-novice-teacher-candidate-retention
The overall objective of our research program is to understand the dynamics of forest communities using North Carolina Piedmont forests as a model system. The overall objective of this research proposal is to make it possible for us to maintain and, where necessary, expand long-term observations that will help us and other workers to achieve this better understanding of forest dynamics. A basic premise of our work is that much of forest dynamics and succession can best be understood as a consequence of the population dynamics of the dominant tree species, an approach first articulated in Peet and Christensen (1980) and more recently fully elaborated and documented in Peet (1992). The slow growth of forest trees greatly limits opportunities to document tree dynamics over the full period of stand development, and thus greatly limits our ability to investigate the population processes that underlie succession and community dynamics. The present proposal is designed to continue and expand efforts needed to build a database adequate for such population-based studies of forest dynamics. This material is based upon work supported by the National Science Foundation under Grant Nos. BSR-8905926, BSR-9107357, DEB97-07551, & DEB97-07664. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. To see our final 2002 report to NSF, click here. To see our 2011 proposal for further support of this work, click here. We are happy to share data collected for this project and to collaborate in its interpretation. Requests for data access and proposals for collaboration should be directed to Robert Peet ([email protected]) at the University of North Carolina or Dean Urban ([email protected]) at Duke University.
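As a purely illustrative aside (not part of the proposal), the population-dynamics view of stand development described above is often formalized as a stage-structured projection; a minimal sketch follows, in which the stages, transition rates, and abundances are all invented:

```python
# Illustrative sketch only: a toy stage-structured projection of a single tree
# population, the kind of population-dynamics view of stand development the
# proposal describes. All stages and rates below are invented numbers.
import numpy as np

# Transition matrix A[i, j] = per-year contribution of stage j to stage i.
# Stages: 0 = seedling, 1 = sapling, 2 = canopy adult.
A = np.array([
    [0.10, 0.00, 4.00],   # seedlings: 10% persist; each adult adds ~4 seedlings/yr
    [0.05, 0.60, 0.00],   # saplings: 5% of seedlings advance; 60% of saplings persist
    [0.00, 0.10, 0.95],   # adults: 10% of saplings advance; 95% of adults survive
])

n = np.array([500.0, 50.0, 20.0])  # initial abundances per hectare (invented)
for _ in range(50):
    n = A @ n                      # project the stage vector one year forward

lam = np.max(np.abs(np.linalg.eigvals(A)))  # dominant eigenvalue ~ long-run growth rate
print(f"abundances after 50 yr: {np.round(n, 1)}; asymptotic growth rate ~ {lam:.3f}")
```

Realistic transition rates would come from exactly the kind of long-term permanent-plot remeasurements the proposal seeks to maintain.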
http://labs.bio.unc.edu/Peet/PEL/df.htm
For a number of decades now the study of children's memory development, with few exceptions, has been synonymous with the development of processes that lead to the initial encoding and immediate retention of information. Although there is little doubt that the study of such acquisition processes is central to understanding memory development, the long-term retention of previously encoded information represents at least as important a component of children's memory. Indeed, as both students of memory development and educators, our interest is in the maintenance and utilization of knowledge over considerable periods of time, not just in the immediate (e.g., classroom) context. Clearly, then, without an understanding of how recently acquired information is maintained in memory over extended periods of time, our theories of long-term memory development remain incomplete at best. Although children's forgetting and reminiscence was a topic of inquiry early in this century, it is only recently, due in part to the current controversy concerning the reliability of children's eyewitness testimony, that the study of long-term retention has resurfaced in the scientific literature. The purpose of this volume is to draw together some of the principals involved in this resurgence to summarize their recent research programs, present new and previously unpublished findings from their labs, and outline the issues they believe are important in the study of children's long-term retention.
https://rd.springer.com/book/10.1007%2F978-1-4612-2868-4
Developmental delay (DD), with a prevalence of about 15 percent of all children, is one of the most frequent disorders in early childhood. Children with DD may suffer from a variety of impairments that are likely to develop into multiple chronic and life-long conditions such as intellectual disability, speech problems, social-communicative deficits, sensory impairments and behavioral and emotional disorders. To avoid or minimize late effects during school or even later in professional life, early intervention (EI) plays an important role. Long-term studies have shown that children with DD benefit from EI programs [7, 8], but such programs are costly and need to be very well coordinated and widely available in order to also reach the more vulnerable families. Even in high income countries (such as Switzerland), many children who begin life with disadvantages do not receive the care necessary for their optimal development. Although numerous studies around the world have analyzed the structure, the usage and the effectiveness of the care system for children with DD, there is a marked lack of information about the supply, demand and effectiveness of the services for these children in Switzerland. From 2017 to 2021, the SNF funded a study at the University Children's Hospital in Zurich on children with DD between 0-5 years in the context of the National Research Program (NRP) 74 “Smarter Health Care” (http://www.nfp74.ch/en/projects/healthcare-across-sectors/project-jenni). The study showed that the rates of children referred for early intervention for global developmental delay or language acquisition delay were significantly below expectations, that the utilization of the recommended EI measures was suboptimal, and that numerous families refuse to enroll for EI even if it is recommended. On the other hand, the study revealed a very high rating and appreciation of the EI measures by the families that received the support. In Geneva too, clinical experience shows that many children with DD who would qualify for EI support are detected or referred later than clinically recommended. Moreover, when help is requested, it is not always available, as reported in recent newspaper articles on the lack of therapists for EI in Geneva and Zürich: https://www.heidi.news/apprendre-travailler/a-geneve-1700-enfants-coinces-dans-la-jungle-de-la-logopedie, https://www.tagesanzeiger.ch/kinder-warten-bis-zu-einem-jahr-auf-einen-therapieplatz-565237936694. Thus, it is safe to state that in both cantons many children in need of EI do not benefit from early detection and support. In Zürich and Geneva, the formal and structural prerequisites, strategies, and professional backgrounds of those involved in identifying and supporting children with DD early on are very different. In both cantons, early detection and follow-up of children at risk at birth for developmental disorders are ensured by our respective institutions, namely the Service du Développement et de la Croissance of the HUG and the Child Development Center at the Children's University Hospital in Zurich. With respect to initiating EI, different structures and various professionals and organizations are in charge (a list can be provided upon request). In the Canton of Zurich, children with suspected DD are referred for evaluation to a multidisciplinary team (collaborating in the Unit of Special Needs Education, USNE), which decides on the necessary diagnostic procedures and interventions.
After children are assigned to a therapy, responsibility passes to the family and the therapist without any further coordination by the USNE. Also, the USNE is only responsible for evaluations until kindergarten entry. In Geneva, children with suspected DD, if not followed by the university hospital because of known risk factors at birth, are either directed to one of the evaluation centers (at the university hospital or the Fondation Pôle Autisme) for evaluation by a multidisciplinary team or referred directly to a therapist. Responsibility falls to the parents to contact the therapists and coordinate therapies when more than one is needed. Special needs education, speech therapy and psychomotricity have to be approved (on the basis of a written report) by the state service in charge of special education: https://www.ge.ch/organisation/service-pedagogie-specialisee; occupational therapy and physiotherapy do not depend upon this body. A database exists for the children identified at birth, and a separate database exists for school-age children needing special education. Both in Geneva and Zurich, efforts are being made to anticipate and respond to the EI needs of children at risk or with DD, but to date no evaluation of the strengths and weaknesses of the two different systems has been performed. As stated by Maureen Black, “Coordination, monitoring, and evaluation are needed across sectors to ensure that high quality early childhood development services are available throughout early childhood and primary school, up to the age of 8 years.” The general aim of our project is to improve the detection and EI of children at risk or with DD in Geneva, Zürich and nationwide. More specifically, the first objective of our project is to compare the different systems of care and resources available in the cantons of Geneva and Zurich, which aim to identify young children at risk or with DD, assess their needs and, if needed, initiate EI. The second objective is to propose improvements in data collection and communication between services on children in need of EI, either based on what is in place in one of the cantons and has proven to be successful, or based on a gap identified in both cantons. The third objective is to develop a best-practice proposal that can be used as a model, which could be applied in other cantons or at the Swiss level to optimally identify children at risk of DD and to anticipate and meet their needs. “As a society, we cannot wait for young people to reach adulthood or school age to invest in their development; intervention would be too late. Investing in early childhood development means increasing human capital and economic growth.” James J. Heckman, Nobel Laureate in Economic Sciences. Problem and Aim The accessibility and scope of publicly available data resulting from the growing digitalization of society have led to unprecedented opportunities and challenges for public data reuse by researchers. In bioethical and social science research, such data is used to understand public sentiment about national and global issues such as vaccination hesitancy, the use of CRISPR-Cas9, the spread of fake news, public opinion about public health measures, or other political action (1,2).
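As a purely illustrative sketch of the kind of "digital method" referred to here (not taken from the proposal), a minimal lexicon-based sentiment tally over public posts might look as follows; the word lists, function names, and sample posts are all invented:

```python
# Illustrative sketch only: a crude lexicon-based sentiment tally over
# hypothetical public posts. Word lists and sample posts are invented.
from collections import Counter

POSITIVE = {"safe", "effective", "hope", "trust", "benefit"}
NEGATIVE = {"risk", "unsafe", "fear", "distrust", "harm"}

def score_post(text: str) -> int:
    """Return a crude sentiment score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def summarise(posts) -> Counter:
    """Bucket posts into positive / negative / neutral by their crude score."""
    buckets = Counter()
    for p in posts:
        s = score_post(p)
        buckets["positive" if s > 0 else "negative" if s < 0 else "neutral"] += 1
    return buckets

if __name__ == "__main__":
    sample = [
        "The vaccine is safe and effective, I trust the data.",
        "Too much risk and too little benefit, I fear the side effects.",
        "Appointment booked for Tuesday.",
    ]
    print(summarise(sample))  # e.g. Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```

Real studies of this kind would, of course, rely on validated sentiment lexicons or trained classifiers rather than a hand-made word list; the sketch only shows the general shape of the approach.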
Despite the value of digital methods, defined as “the use of online and digital technologies to collect and analyse research data” (3), and the availability of public digital data for the research community, many ethical questions that these new opportunities pose have yet to be adequately addressed: a) citizens providing data in the public domain, for example on Twitter, often do not know that the text they provide might be used for research purposes; b) citizens are not able to consent to this type of research except by accepting the terms and conditions of a webpage or online application (which are often not read); and c) digital methods can disconnect the research community from society, which can strain their relationship. A response to these issues could be the development of a social contract between the research community and the public, to define under which conditions digital research methods enjoy public trust and legitimacy. Public trust in science and public legitimacy, as an effect of public trust, are key criteria for research (4,5). We understand public trust to be a concept that grows in the public sphere from open public discourse and as a result legitimises research action. Public trust is established in anticipation of a net benefit for the public as well as the research community (6). Following this understanding of public trust, we hypothesise that if the public trusts and understands digital methods, such public trust legitimises scientists using digital methods. In response, we aim to develop a public legitimacy framework for digital methods through a participatory and inclusive research method. To this aim the objectives are: The proposed project will build on a funded seed-project at the Digital Society Initiative led by Felix Gille. Since concepts of trust are culture-specific, it is important to expand our efforts to other language regions. The joint UZH-UNIGE call provides an excellent opportunity to leverage the ongoing efforts, to create synergies, and to assess the generalizability of initial findings from the German-speaking area across Switzerland. The funded project runs from January 2022 for 12 months. Funded with 10,000 CHF, the project allows us to run four citizen fora in the Canton of Zurich. With the funding of the UZH-UNIGE call, the aim is to establish a long-term working relationship to continue to engage with the Swiss public on matters of legitimacy, public trust and ethics applied to digital society more broadly. This continuation is important, as we anticipate that with the advancement of digital methods and digital society, we will need to discuss the public-research relationship on a continuous basis. Methods We propose to run 5 one-day in-person public deliberation fora in Switzerland and to initiate ongoing public online fora on healthcare-related issues on the Forum for Global Health Ethics (7). The public deliberation method with the use of citizen fora allows ordinary citizens to have a voice in, for example, policy or other political decision-making processes (8,9). We will recruit about 20 citizen participants for each forum via our own existing networks, sports clubs, primary schools and social clubs. Each forum day will consist of different activities facilitated by four researchers and will last for about six to seven hours, including breaks and catering. If COVID-19 measures do not allow us to meet in person, we will transfer the format to an online deliberation.
As we plan the fora for the summer of 2022 and the spring of 2023, we are hopeful that we can facilitate the workshops in person. A lottery for vouchers will be an incentive for participants to take part in the citizen fora. We will frame the fora with a carefully selected range of case studies (see the list of examples below). The case studies will be used to stimulate participants' discussions (10), as they all employ automated digital research methods and make use of publicly available data. Importantly, they will differ in terms of a) context, e.g. healthcare, cyber security or political science; b) involved actors, e.g. universities, public or private actors; c) data type, e.g. tweets of the general public, tweets of professionals, sensitive data from dating platforms, or policy documents; and d) purpose, e.g. using digital methods for prediction purposes or for better understanding a disease. This spread of variables allows us to understand whether public views differ depending on who is using digital methods, in what context and with what data type. In addition, we will collect descriptive data about the participants. We will record the citizen fora and synthesise the results with the active involvement of citizens. After half of the citizen fora are completed, we aim to re-evaluate the methods and potentially refine the case studies. This will allow us to test and validate first findings. We will seek ethical approval at the University of Zurich/University of Geneva. Examples of case studies:
• CRISPR-Cas9 (Sentiment analysis of Tweets / Hashtag analysis / Thematic probing)
• Visual risk communication of Covid-19 (Qualitative coding of Tweets)
• Vaccine hesitancy (Qualitative coding of Tweets)
• Fake News (Qualitative coding of Tweets)
• Selects Medienstudie 2019 (Sentiment and thematic analysis of Tweets)
• Evolution of currency exchanges in underground networks (Tracking of capital in Darknet)
• Policy analysis (Thematic analysis)
• Dating apps (Thematic analysis)
The COVID-19 pandemic poses a serious challenge to individuals' mental health. In particular, the COVID-19 pandemic bears all the features of stress, which is classically described as a response to something novel, unpredictable and uncontrollable. A significant wave of psychological disturbances and mental disorders triggered by the pandemic is still expected to arise and will confront us in the coming years. In this societal context, a better understanding of the factors that underlie resilience and maintain mental health is urgently needed. Self-efficacy, i.e. the perception of having the capacity to cope with adverse events, is a key factor underlying healthy functioning and emotional well-being. Understanding how self-efficacy relates to maintaining one's mental health in the context of the current pandemic is the main objective of this proposal. The labs of Profs Rimmele (University of Geneva) and Kleim (University of Zürich), who are applying for this grant together, both have longstanding experience in emotion, stress and memory research, both in the lab and in the clinic. For the current proposal, we plan to combine basic research on the psychology and neuroscience of emotion, stress and memory (Prof. Ulrike Rimmele, Emotion and Memory Lab, Faculty of Psychology and Education Sciences, UniGE) with clinical research on resilience (Prof. Birgit Kleim, Experimental Psychopathology and Psychotherapy Lab, Dept. of Psychology/Psychiatry, UZH).
More specifically, this proposal will benefit from combining recent developments in our laboratories, i.e. advanced emotional memory paradigms and emotion regulation strategies [3, 4], with an innovative self-efficacy intervention targeting coping, memory and mental health [5, 6]. We further aim to make our collaboration international by additionally collaborating with Prof. Adam Brown (New School, New York), who brings to the project his research focus on global mental health [7, 8] and self-efficacy [9-11]. Profs. Brown's and Kleim's labs have already established a strong collaboration, which includes a history of co-publishing and international educational exchanges. At the University of Geneva, this project will be embedded within the Center of Affective Sciences (CISA), the Faculty of Psychology and Education Sciences (FPSE) as well as the Center for the Interdisciplinary Study of Gerontology and Vulnerabilities (CIGEV). The CISA, as a world-leading center on emotion, and the CIGEV, as a center studying vulnerability, together with the Psychology Depts. of UniGE and UZH and the Psychiatry Dept. at UZH, present the best possible environment in which to conduct the present proposal. We propose the following three studies: First, we will conduct a large-scale web-based study among Swiss citizens, possibly worldwide, on self-efficacy and self-efficacy (SE) autobiographical memories in the pandemic (Study 1). This study will extend previous research on self-efficacy and coping during the pandemic (e.g. Ritchie et al.) by focusing on the unique content of self-efficacy autobiographical memories. We will thus collect specific information on what self-efficacy memories individuals have experienced during the pandemic and how these are related to individuals' mental health and coping with the pandemic. Second, in a lab study, we will examine how a self-efficacy intervention may affect coping with a stressor and influence memory under stressful conditions in healthy individuals (Study 2). Third, moving to the clinical arena, we will investigate in a clinical feasibility study whether SE autobiographical memory training can be helpful to patients who are most affected by severe trauma and symptoms of posttraumatic stress disorder (Study 3). In the clinical feasibility study, we will build on our own pilot results and investigate the feasibility and effects of a face-to-face intervention combined with app-based training in recalling and vividly imagining self-efficacy memories, and their effects on emotion, stress experience and psychological symptoms. All three studies build on unique work already conducted in both labs and on joint conceptualizations, which will now be effectively combined. At UniGE and UZH, we will include junior investigators and combine our ideas and results with teaching and public science talks in order to disseminate our findings. The studies will be presented on the laboratories' websites, which will also provide useful links to publications and resources relevant to this project. These websites will also be used to disseminate findings to the wider public and research community. We also aim to publish the findings of the studies in international peer-reviewed scientific journals. To further increase the societal transfer of our research, we are planning to give workshops on our research findings by working together with Lifegarden (https://www.life-garden.org/wer-wir-sind-1).
In addition, we aim to make our findings known through national and international initiatives of public health relevance of which we are members, e.g. the Swiss Stressnetwork (www.stressnetwork.ch), the DynaMORE project (Dynamic Modelling of Resilience, https://dynamore-project.eu/), INTRESA (https://intresa.org/what-is-resilience/) and other science-based consortia and strategic developments. This will allow us to bring this project's findings to the public domain and disseminate clinical implications to the public in a prevention and resilience framework. Taken together, this project initiates a new line of collaborative research between the Universities of Zürich and Geneva and the New School in New York, with great potential to be continued in further research projects (e.g. SNF or EU-funded). This will strengthen research collaborations between UZH and UniGE and likely have an important impact on global mental health. References Policymakers turn to science for insights about how to improve citizens' quality of life. Stress-related disorders are the leading cause of disability worldwide and impose an increasingly high socio-economic burden on companies and healthcare systems. While the damage caused by these disorders can be quantified (for example via hospitalization numbers), combating their devastating effects requires early identification and prevention. Unfortunately, a robust and precise method to quantify current and future population stress-related mental-health status does not exist. This project will lay the foundation for a robust quantification and early prediction of the Swiss population's stress-related mental health status. The crucial advancement of the proposal is the use of large-scale neurophysiological and behavioural measures in a representative Swiss population sample of all age groups. The combination of these measures with standardized mental health surveys via a smartphone app will engage citizen participants with scientific research and allow us to build a predictive model of stress-related symptom trajectories for policy making and large-scale interventions. The project will employ a novel neurophysiological index of individual stress resilience which is easy to use at all ages and which will allow objective quantification beyond self-reports. After successful completion at UNIGE and UZH, the project aims to expand to further Swiss cantons and beyond Swiss borders. The deployment of high-impact technologies, such as drones, touches upon a number of ethical and societal issues, so it is of paramount importance to establish a knowledge base on this topic. Currently, however, there is a lack of empirical knowledge on the prevailing perceptions about, and attitudes toward, urban use of drones, both in mainstream public discourse and in the scientific community. This epistemological lacuna suggests a lack of awareness of the normative implications, where issues pertaining to access and equity, benefit sharing, harm and risk, consent, allocation of public resources, job loss, etc. may be overlooked. Further still, directly or indirectly, these issues have profound societal impacts on public policy and individual wellbeing. The increasing demand for, and high potential of, drones used in urban environments hence require a nuanced understanding of the technicalities of the technology, the ethical risks associated with it, the regulatory frameworks within which it operates, and ultimately the societal acceptability of its deployment at scale.
Against this backdrop, interdisciplinary research encompassing expertise from robotics, public health, humanitarian studies, as well as the ethical, legal and social implications of technology, is needed to shed light on the topic. In this project, we aim to connect science with society and politics for a more resilient world through collaborations with domain experts and industry and government stakeholders. The pilot study will be led by Dr. Ning Wang, Research Fellow of the Digital Society Initiative at the University of Zurich (UZH), in collaboration with Prof. Karl Blanchet, Director of the Geneva Centre of Humanitarian Studies at the University of Geneva (UNIGE), over 15 months. This collaboration emerged from, and is a natural extension of, a research exchange between the two researchers supported by the UZH-UNIGE Strategic Partnership Program in September 2021. Given its unique topicality and time criticality, the Canton of Zurich has committed strong support since the inception of the project. Likewise, the Canton of Geneva has conveyed significant interest in the acceptability of the “air ambulance” application. Apart from the core research team, partnerships have also been built with a number of key stakeholders, including academic institutions, industry members, public administration and regulatory authorities, special interest groups and think-tanks. The UZH-UNIGE Strategic Partnership Grant will hence provide seed funding for us to kick-start the research and to continue with subsequent grant applications. As such, the pilot study proposed in this application plays a pivotal role in the preparation for, and successful launch of, the larger research project in 2023.
https://unige-cofunds.ch/university-of-zurich/call-2022
The following reports on multiple studies in a line of research examining the use of emotionally expressive writing as a means of altering the experiences of state anger and negative affect. This line of research has also sought to develop an iterated economic version of the prisoner's dilemma game as a behavioral measure of changes in state anger. Preliminary studies demonstrated evidence that expressive writing about an angry memory does trigger initial activations of state anger and negative affect but that subsequent repeated writing does lead to reductions in activation of state anger and negative affect. The current study sought to expand upon those prior findings and more adequately test whether or not such reductions in the activation of state anger and negative affect can be attributed to habituation as a mechanism of change. The differential effects of different schedules of writing/exposure were also investigated. The current study reports data from 100 student participants. All participants attended three study sessions scheduled two to three days apart. Participants were randomly assigned to one of four conditions: a Spaced Exposure condition, in which participants wrote about an angry memory once on each of the three participation days; a Massed Exposure with Long Retention condition, in which participants wrote twice about an angry memory on the first day, did not write on the second day, and wrote again about an angry memory on the third day; a Massed Exposure with Brief Retention condition, in which participants did not write on the first day, wrote twice about an angry memory on the second day, and wrote once about an angry memory on the final day; and a Neutral Writing Control group, in which participants wrote about different emotionally neutral memories on each of the first two days and an angry memory on the final day. All participants played the economic prisoner's dilemma game on the first and last day of participation to examine differences in competitive behavior that may correlate with the amount of expressive writing and levels of state anger and negative affect. The results showed that expressive writing about an angry memory was consistently effective in triggering an acute increase in state anger and negative affect. There was some evidence of both within-session and between-session reductions of state anger and negative affect following repeated writing about an angry memory; however, these effects were tenuous and could not be dissociated from uncontrolled factors occurring with the passage of time. Therefore, the results did not demonstrate evidence for habituation as a mechanism of change. The results also did not support any differential advantage of spaced or massed exposure sessions. The study does not support the use of the economic version of the prisoner's dilemma game as a behavioral measure of changes in state anger. The limitations of the study and potential future empirical directions are discussed. Patrick, Cory James, "The Therapeutic Expression of Anger: Emotionally Expressive Writing and Exposure" (2013). Theses and Dissertations. 238.
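As a purely illustrative aside, the "economic version of the prisoner's dilemma game" used here as a behavioral measure can be sketched with the textbook payoff structure; the payoffs and strategies below are standard illustrative values, not the parameters reported in the dissertation:

```python
# Illustrative sketch only: a standard iterated prisoner's dilemma payoff loop.
# Textbook payoffs T=5, R=3, P=1, S=0 and two toy strategies; not the
# dissertation's actual game parameters.

PAYOFF = {  # (my_move, opponent_move) -> my_points; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []   # each entry: (own_move, opponent_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): defection exploits round 1, then mutual defection
```

In a study like this, the quantity of interest would presumably be each participant's rate of defection (competitive play) before and after the writing manipulation, rather than the scores of fixed strategies.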
https://dc.uwm.edu/etd/238/
society as a whole, helping students to understand how individuals function within different contexts and how this is influenced by culture, shaping people's values, attitudes and beliefs. Students will be involved in the exploration and analysis of data to illustrate how scientific research methods are used to examine phenomena such as intelligence and personality. The focus for this semester is on understanding ourselves and the way that we interact with others. Students will build on their understanding of personality from Unit 1 and look further to consider how personality testing is used in different settings, from understanding performance to predicting success in the field of employment. This unit has a strong emphasis on our relationships with others, and students will look at the keys to forming strong relationships and build conflict-resolution skills. Unit 3 will also provide students with an opportunity to understand features of the mind to support them in their studies, considering memory and strategies for improving the retrieval of information. This unit really allows students to focus on how to improve and enhance human understanding and learning, a useful tool for many as they are readying to leave high school! This semester students are presented with the opportunity to explore the inner workings of our brain. They will explore case studies that allow them to understand the role of different parts of the brain in how our mind works. They will then have the chance to look at modern methods of understanding the operation of our brains, such as EEGs, CAT scans and fMRIs. Students will delve beyond the brain to explore the other factors that influence our development and how we grow into adults. They will then have the opportunity to consider the roles that they play in society and the things that influence both their behaviour and that of others. Students will have the opportunity to consider some of the big issues in society today and build awareness of both their own role in society and how they can help to combat such issues. This unit allows students to consider the world that they live in and helps to build key skills to make positive choices in their lives after school. This course will not have an external examination; however, students will be required to sit an externally set task (50 minutes' duration). Note - The EST is a compulsory assessment subject to the same attendance requirements as Senior School Exams. This General (non-ATAR) Psychology course is a fantastic subject in which to explore the nature of human behaviour, thinking and relationships. It is extremely useful in gaining insight into how people operate as individuals, within group situations and as a part of society as a whole. This course is great for students studying a non-ATAR pathway who want a challenge in their final year of high school without the pressures of external examinations. Their studies of Psychology will also have an emphasis on the scientific method and how to conduct scientific research, assisting students to build valuable skills that allow them to critically review information and consider evidence presented to them in all aspects of their lives. The classes will develop skills in a discussion-based, engaging and interesting environment.
The study of psychology is highly relevant to further studies in health professions, education, human resources, social sciences, sales, media and marketing, and aims to provide a better understanding of human behaviour and the means to enhance quality of life. Potential excursion costs of up to $90.
http://duncraigshs.wa.edu.au/course-list/year-12/psychology-units-3-4-general
Emory scientists find marker for long-term immunity Scientists at the Emory Vaccine Center and The Scripps Research Institute have found a way to identify which of the T cells generated after a viral infection can persist and confer protective immunity. Because these long-lived cells protect against reinfection by “remembering” the prior pathogen, they are called memory T cells. This discovery about the specific mechanisms of long-term immunity could help scientists develop more effective vaccines against challenging infections. The research, by Susan M. Kaech, PhD, a postdoctoral fellow in microbiology and immunology at Emory University School of Medicine, and principal investigator Rafi Ahmed, PhD, director of the Emory Vaccine Center and a Georgia Research Eminent Scholar, was published online November 16 and will be printed in the December issue of Nature Immunology. Other members of the research team were E. John Wherry and Bogumila T. Konieczny of Emory University School of Medicine, and Joyce T. Tan and Charles D. Surh of The Scripps Research Institute. During an acute viral infection, CD4 and CD8 T cells activated by specific viral antigens dramatically expand in number and become effector T cells. These cells kill the virus-infected cells and also produce cytokines. Most effector cells die within a few weeks, after their initial job is complete. Only about 5 to 10 percent survive to become long-term memory cells, which are capable of mounting a strong and rapid immune response when they come into contact with the original virus, even years later. Scientists have not clearly understood the mechanisms of memory cell production, and a major unanswered question has been how to distinguish the small fraction of cells likely to survive in long-term memory. This team of investigators found that expression of the interleukin 7 (IL-7) receptor, which binds the cytokine IL-7 and is required for T cell survival, is increased in a small subset of CD8 T cells generated during an acute infection, and that expression of this receptor marks those that will survive to become long-lived memory CD8 T cells. In experiments with mice, the Emory scientists found that at the peak of the CD8 T cell immune response during an acute viral infection a small subset of effector cells had a higher expression of the IL-7 receptor, and they hypothesized that these cells would be the ones to survive as memory cells. They transferred a group of cells with and without this distinguishing characteristic into mice that were unexposed to virus, and found that in fact the cells expressing IL-7 receptor survived and differentiated into long-lived memory cells. They also found that IL-7 signals were necessary for the survival of these cells. “We can consider the IL-7 receptor a marker of cellular fitness for long-term survival and functionality,” says Dr. Kaech. “This new knowledge should help us in assessing and predicting the number and quality of memory T cells that will be generated after infection or immunization. It also could lead to the identification of additional markers of memory cells and provide a more comprehensive picture of memory cell development.” “As scientists struggle to create long-term, effective vaccines for difficult diseases, they need a detailed understanding of the mechanisms of long-term memory,” says Dr. Ahmed. “Understanding immune memory is the necessary basis for developing any type of effective vaccine. 
In addition, these findings could help in designing immunotherapies to control chronic viral infections and cancer.” Further information: http://www.emory.edu/
https://www.innovations-report.com/life-sciences/report-23570/
Health and climate have been linked since antiquity. In the fifth century B.C., Hippocrates observed that epidemics were associated with natural phenomena rather than deities or demons. In modern times, our increasing capabilities to detect and predict climate variations such as the El Niño/Southern Oscillation (ENSO) cycle, coupled with mounting evidence for global climate change, have fueled a growing interest in understanding the impacts of climate on human health, particularly the emergence and transmission of infectious disease agents. Simple logic suggests that climate can affect infectious disease patterns because disease agents (viruses, bacteria, and other parasites) and their vectors (such as insects or rodents) are clearly sensitive to temperature, moisture, and other ambient environmental conditions. The best evidence for this sensitivity is the characteristic geographic distribution and seasonal variation of many infectious diseases. Weather and climate affect different diseases in different ways. For example, mosquito-borne diseases such as dengue, malaria, and yellow fever are associated with warm weather; influenza becomes epidemic primarily during cool weather; meningitis is associated with dry environments; and cryptosporidiosis outbreaks are associated with heavy rainfall. Other diseases, particularly those transmitted by direct interpersonal contact such as HIV/AIDS, show no clear relationship to climate. By carefully studying these associations and their underlying mechanisms, we hope to gain insights into the factors that drive the emergence and seasonal/interannual variations in contemporary epidemic diseases and, possibly, to understand the potential future disease impacts of long-term climate change. The U.S. federal agencies entrusted with guarding the nation's health and the environment, along with other concerned institutions, requested the formation of a National Research Council committee to evaluate this issue. Specifically, the committee was asked to undertake the following three tasks: 1. Conduct an in-depth, critical review of the linkages between temporal and spatial variations of climate and the transmission of infectious disease agents; 2. Examine the potential for establishing useful health-oriented climate early-warning and surveillance systems, and for developing effective societal responses to any such early warnings; 3. Identify future research activities that could further clarify and quantify possible connections between climate variability, ecosystems, and the transmission of infectious disease agents, and their consequences for human health. There are many substantial research challenges associated with studying linkages among climate, ecosystems, and infectious diseases. For instance, climate-related impacts must be understood in the context of numerous other forces that drive infectious disease dynamics, such as rapid evolution of drug- and pesticide-resistant pathogens, swift global dissemination of microbes and vectors through expanding transportation networks, and deterioration of public health programs in some regions. Also, the ecology and transmission dynamics of different infectious diseases vary widely from one context to the next, thus making it difficult to draw general conclusions or compare results from individual studies.
Finally, the highly interdisciplinary nature of this issue necessitates sustained collaboration among disciplines that normally share few underlying scientific principles and research methods, and among scientists that may have little understanding of the capabilities and limitations of each other's fields. In light of these challenges, the scientific community is only beginning to develop the solid scientific base needed to answer many important questions, and accordingly, in this report the committee did not attempt to make specific predictions about the likelihood or magnitude of future disease threats. Instead, the focus is on elucidating the current state of our understanding and the factors that, at present, may limit the feasibility of predictive models and effective early warning systems. The following is a summary of the committee's key findings and recommendations: KEY FINDINGS: LINKAGES BETWEEN CLIMATE AND INFECTIOUS DISEASES Weather fluctuations and seasonal-to-interannual climate variability influence many infectious diseases. The characteristic geographic distributions and seasonal variations of many infectious diseases are prima facie evidence of linkages with weather and climate. Studies have shown that factors such as temperature, precipitation, and humidity affect the lifecycle of many disease pathogens and vectors (both directly, and indirectly through ecological changes) and thus can potentially affect the timing and intensity of disease outbreaks. However, disease incidence is also affected by factors such as sanitation and public health services, population density and demographics, land use changes, and travel patterns. The importance of climate relative to these other variables must be evaluated in the context of each situation. Observational and modeling studies must be interpreted cautiously. There have been numerous studies showing an association between climatic variations and disease incidence, but such studies are not able to fully account for the complex web of causation that underlies disease dynamics and thus may not be reliable indicators of future changes. Likewise, a variety of models have been developed to simulate the effects of climatic changes on incidence of diseases such as malaria, dengue, and cholera. These models are useful heuristic tools for testing hypotheses and carrying out sensitivity analyses, but they are not necessarily intended to serve as predictive tools, and often do not include processes such as physical/biological feedbacks and human adaptation. Caution must be exercised then in using these models to create scenarios of future disease incidence, and to provide a basis for early warnings and policy decisions. The potential disease impacts of global climate change remain highly uncertain. Changes in regional climate patterns caused by long-term global warming could affect the potential geographic range of many infectious diseases. However, if the climate of some regions becomes more suitable for transmission of disease agents, human behavioral adaptations and public health interventions could serve to mitigate many adverse impacts. Basic public health protections such as adequate housing and sanitation, as well as new vaccines and drugs, may limit the future distribution and impact of some infectious diseases, regardless of climate-associated changes. These protections, however, depend upon maintaining strong public health programs and assuring vaccine and drug access in the poorer countries of the world.
Climate change may affect the evolution and emergence of infectious diseases. Another important but highly uncertain risk of climate change is the potential impact on the evolution and emergence of infectious disease agents. Ecosystem instabilities brought about by climate change and concurrent stresses such as land use changes, species dislocation, and increasing global travel could potentially influence the genetics of pathogenic microbes through mutation and horizontal gene transfer, and could give rise to new interactions among hosts and disease agents. Such changes may foster the emergence of new infectious disease threats. There are potential pitfalls in extrapolating climate and disease relationships from one spatial/temporal scale to another. The relationships between climate and infectious disease are often highly dependent upon local-scale parameters, and it is not always possible to extrapolate these relationships meaningfully to broader spatial scales. Likewise, disease impacts of seasonal to interannual climate variability may not always provide a useful analog for the impacts of long-term climate change. Ecological responses on the timescale of an El Niño event, for example, may be significantly different from the ecological responses and social adaptations expected under long-term climate change. Also, long-term climate change may influence regional climate variability patterns, hence limiting the predictive power of current observations. Recent technological advances will aid efforts to improve modeling of infectious disease epidemiology. Rapid advances being made in several disparate scientific disciplines may spawn radically new techniques for modeling of infectious disease epidemiology. These include advances in sequencing of microbial genes, satellite-based remote sensing of ecological conditions, the development of Geographic Information System (GIS) analytical techniques, and increases in inexpensive computational power. Such technologies will make it possible to analyze the evolution and distribution of microbes and their relationship to different ecological niches, and may dramatically improve our abilities to quantify the disease impacts of climatic and ecological changes. KEY FINDINGS: THE POTENTIAL FOR DISEASE EARLY WARNING SYSTEMS As our understanding of climate/disease linkages is strengthened, epidemic control strategies should aim towards complementing “surveillance and response” with “prediction and prevention.” Current strategies for controlling infectious disease epidemics depend largely on surveillance for new outbreaks followed by a rapid response to control the epidemic. In some contexts, however, climate forecasts and environmental observations could potentially be used to identify areas at high risk for disease outbreaks and thus aid efforts to limit the extent of epidemics or even prevent them from occurring. Operational disease early warning systems are not yet generally feasible, due to our limited understanding of most climate/disease relationships and limited climate forecasting capabilities. But establishing this goal will help foster the needed analytical, observational, and computational developments. The potential effectiveness of disease early warning systems will depend upon the context in which they are used.
In cases where there are relatively simple, low-cost strategies available for mitigating risk of epidemics, it may be feasible to establish early warning systems based only on a general understanding of climate/disease associations. But in cases where the costs of mitigation actions are significant, a precise and accurate prediction may be necessary, requiring a more thorough mechanistic understanding of underlying climate/disease relationships. Also, the accuracy and value of climate forecasts will vary significantly depending on the disease agent and the locale. For instance, it will be possible to issue sufficiently reliable ENSO-related disease warnings only in regions where there are clear, consistent ENSO-related climate anomalies. Finally, investment in sophisticated warning systems will be an effective use of resources only if a country has the capacity to take meaningful actions in response to such warnings, and if the population is significantly vulnerable to the hazards being forecast. Disease early warning systems cannot be based on climate forecasts alone. Climate forecasts must be complemented by an appropriate suite of indicators from ongoing meteorological, ecological, and epidemiological surveillance systems. Together, this information could be used to issue a “watch” for regions at risk and subsequent “warnings” as surveillance data confirm earlier projections. Development of disease early warning systems should also include vulnerability and risk analysis, feasible response plans, and strategies for effective public communication. Climate-based early warning systems being developed for other applications, such as agricultural planning and famine prevention, provide many useful lessons for the development of disease early warning systems. Development of early warning systems should involve active participation of the system's end users. The input of stakeholders such as public health officials and local policymakers is needed in the development of disease early warning systems, to help ensure that forecast information is provided in a useful manner and that effective response measures are developed. The probabilistic nature of climate forecasts must be clearly explained to the communities using these forecasts, so that response plans can be developed with realistic expectations for the range of possible outcomes. RECOMMENDATIONS FOR FUTURE RESEARCH AND SURVEILLANCE Research on the linkages between climate and infectious diseases must be strengthened. In most cases, these linkages are poorly understood and research to understand the causal relationships is in its infancy. Methodologically rigorous studies and analyses will likely improve our nascent understanding of these linkages and provide a stronger scientific foundation for predicting future changes. This can best be accomplished with investigations that utilize a variety of analytical methods (including analysis of observational data, experimental manipulation studies, and computational modeling), and that examine the consistency of climate/disease relationships in different societal contexts and across a variety of temporal and spatial scales. Progress in defining climate and infectious disease linkages can be greatly aided by focused efforts to apply recent technological advances such as remote sensing of ecological changes, high-speed computational modeling, and molecular techniques to track the geographic distribution and transport of specific pathogens.
Further development of disease transmission models is needed to assess the risks posed by climatic and ecological changes. The most appropriate modeling tools for studying climate/disease linkages depend upon the scientific information available. In cases where there is limited understanding of the ecology and transmission biology of a particular disease, but sufficient historical data on disease incidence and related factors, statistical-empirical models may be most useful. In cases where there are insufficient surveillance data, “first principle” mechanistic models that can integrate existing knowledge about climate/disease linkages may have the most heuristic value. Models that have useful predictive value will likely need to incorporate elements of both these approaches. Integrated assessment models can be especially useful for studying the relationships among the multiple variables that contribute to disease outbreaks, for looking at long-term trends, and for identifying gaps in our understanding. Epidemiological surveillance programs should be strengthened. The lack of high-quality epidemiological data for most diseases is a serious obstacle to improving our understanding of climate and disease linkages. These data are necessary to establish an empirical basis for assessing climate influences, for establishing a baseline against which one can detect anomalous changes, and for developing and validating models. A concerted effort, in the United States and internationally, should be made to collect long-term, spatially resolved disease surveillance data, along with the appropriate suite of meteorological and ecological observations. Centralized, electronic databases should be developed to facilitate rapid, standardized reporting and sharing of epidemiological data among researchers. Observational, experimental, and modeling activities are all highly interdependent and must progress in a coordinated fashion. Experimental and observational studies provide data necessary for the development and testing of models; and in turn, models can provide guidance on what types of data are most needed to further our understanding. The committee encourages the establishment of research centers dedicated to fostering meaningful interaction among the scientists involved in these different research activities through long-term collaborative studies, short-term information-sharing projects, and interdisciplinary training programs. The National Center for Ecological Analysis and Synthesis provides a good model for the type of institution that would be most useful in this context. Research on climate and infectious disease linkages inherently requires interdisciplinary collaboration. Studies that consider the disease host, the disease agent, the environment, and society as an interactive system will require more interdisciplinary collaboration among climate modelers, meteorologists, ecologists, social scientists, and a wide array of medical and public health professionals. Encouraging such efforts requires strengthening the infrastructure within universities and funding agencies for supporting interdisciplinary research and scientific training. In addition, educational programs in the medical and public health fields need to include interdisciplinary programs that explore the environmental and socioeconomic factors underlying the incidence of infectious diseases. Numerous U.S.
federal agencies have important roles to play in furthering our understanding of the linkages among climate, ecosystems, and infectious disease. There have been a few programs established in recent years to foster interdisciplinary work in applying remote-sensing and GIS technologies to epidemiological investigations. The committee applauds these efforts and encourages all of the relevant federal agencies to support interdisciplinary research programs on climate and infectious disease, along with an interagency working group to help ensure effective coordination among these different programs. The U.S. Global Change Research Program (USGCRP) may provide an appropriate forum for this type of coordinating body. This will require, however, that organizations such as the Centers for Disease Control and Prevention, and the National Institute of Allergy and Infectious Diseases become actively involved with the USGCRP. Finally, the committee wishes to emphasize that even if we are able to develop a strong understanding of the linkages among climate, ecosystems, and infectious diseases, and in turn, are able to create effective disease early warning systems, there will always be some element of unpredictability in climate variations and infectious disease outbreaks. Therefore, a prudent strategy is to set a high priority on reducing people's overall vulnerability to infectious disease through strong public health measures such as vector control efforts, water treatment systems, and vaccination programs.
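As a purely illustrative aside on the "statistical-empirical models" mentioned in the recommendations above (this example is not from the report), the simplest form of such a model is a regression of disease counts on a climate covariate; all numbers below are invented:

```python
# Purely illustrative sketch: the simplest "statistical-empirical" climate/disease
# model, fitting log monthly case counts against mean temperature. Data invented.
import numpy as np

# Hypothetical 12 months of data: mean temperature (deg C) and case counts.
temperature = np.array([18, 20, 23, 26, 29, 31, 32, 31, 28, 25, 21, 19], dtype=float)
cases       = np.array([12, 15, 22, 35, 60, 85, 95, 90, 55, 30, 18, 14], dtype=float)

# Fit log(cases) = a * temperature + b by ordinary least squares.
a, b = np.polyfit(temperature, np.log(cases), deg=1)

def predicted_cases(temp_c: float) -> float:
    """Back-transform the linear fit to the expected case count at temp_c."""
    return float(np.exp(a * temp_c + b))

print(f"slope a = {a:.3f} per deg C; predicted cases at 30 C ~ {predicted_cases(30):.0f}")
```

A real analysis would use something closer to a Poisson or negative-binomial regression with lags, seasonality terms, and the long-term surveillance data the report calls for; the sketch only shows the shape of the approach.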
https://www.nap.edu/read/10025/chapter/2
‘Resilience is essential to the good teacher’ is the opening statement of Barnes’s paper that looks at the perseverance and fulfilment of teachers (p74). I think there are very few people who could dispute the validity of this statement, and quite rightly so. I feel that this study is a very important one to consider when discussing resilience and efficacy in individual teachers as well as teacher retention strategy. Barnes’s methodology is a very interesting one; his paper presents factors that have sustained his own commitment and resilience in education, which are compared to the narratives and experiences of nine long-serving and fulfilled teacher friends for correlations and themes. The method of autoethnography is used for data collection, choosing to explore the self-reflections, autobiographies, diaries, conversations, personal accounts and discussions of his sample, which includes Barnes himself, to define good resilience and its affecting factors. What is dynamic about this approach and makes it different from traditional studies is that Barnes’s literature review is an autoethnographic account of his own experiences, how he is a fulfilled teacher, and his sample are his friends. I like how this brings a human essence to the research, rich with qualitative data, which includes unbounded and free conversations and reflections with friends and people of trust, and at the same time is able to define key factors that affect and form teacher resilience. It’s an uncommon method to come across but one that I believe will prove enriching in answering the title question of the paper. The suggestions: To simply state what Barnes’s research suggests, it is that resilience is achieved when the role of being a teacher is aligned with an individual’s personal values, where that individual has a chance to build supportive and friendly relationships with others and can teach and express their personal interests through creative contexts. Barnes proposes that these factors should be central to initial teacher-training programmes to increase the competency of resilience in teachers. It’s an idea that almost invokes the concept that teaching is a vocation and that the ability to be a teacher, a good teacher, is in-built in some people and not in others. Personality and psychology are emerging as prominent considerations when assessing teacher efficacy and resilience, and there have been many studies that have looked at these areas when evaluating teachers. However, how far these considerations are taken when recruiting, training, developing and supporting teachers is not so clear and is a worthy line of enquiry. Another interesting question to reflect on here would be this: are there some of us who are innately more suited to teaching than others and, if so, does that mean teaching is an exclusive profession? After evaluating the data sources provided by his sample, Barnes was able to find correlations and themes including family, friends, love, emotions, religion, work and place as elements considered to be the basis of fulfilment, and that from these themes and their sub-categorisations, values, creativity and friendship did the most to create feelings of resilience. When presenting his findings after an extensive analysis of individual comments and experiences, Barnes categorised his findings under four main headings in order to answer the title question of the paper: values and resilience, creativity and meaning, friendship and community, and teacher education and children.
‘If schools are to help create a happy, present and sustainable future for children, their teachers should be confident, emotionally intelligent, flexible, healthy, optimistic, positive people’. (p74)
Values and Resilience
Barnes’s findings suggest that work-life factors of resilience such as hopefulness, fulfilment and joy were more present in scenarios where professional and personal values matched. He found that all participants evidenced and expressed how they were able to express their personal values through their teaching and believed that this had a positive effect on their students. Although a number of the participants admitted that teaching was not their first choice of profession, they did realise and acknowledge that being a teacher and working in education had taken them further towards their ideals, which brought them great satisfaction. In addition, as Barnes had previously stated, values, creativity and friendship did the most to create and strengthen resilience, alongside the expression of personal values through teaching; the participants also acknowledged that their personal values could be creatively played out in the classroom, where they could be appreciated and shared, bringing satisfaction from having their personal values developed, sustained and appreciated. This could be misunderstood as vanity, egoism or narcissism, but when you look at the values that these teachers held (family, love, friendship and religion amongst others), one could argue that fulfilment is a justified interest for teachers to find in their profession on an acceptable basis. Now of course, values will differ from teacher to teacher (Barnes acknowledges this quite early on in his paper), but there are shared values amongst a teaching staff that can be identified, like the ones Barnes was able to find amongst his sample group. The suggestion therefore arises that it is a worthy investment of time and effort to identify shared morals within a staff group through which resilience can begin to be formed. How to define which values are acceptable to base teacher fulfilment on will need a researched, informative test in order for the process of fulfilment not to become one of vanity, egoism or narcissism, three factors that are often stated to be characteristic of poor teachers and leaders.
‘Investigating resilience involves searching beneath the selves we habitually present to the world.’ (p75)
Creativity and Meaning
Expression is key not only to this study but to life, or so it is asserted in the discussions that Barnes quotes and comments on in his analysis of findings. While fulfilment can come from a personal acknowledgement that one is expressing oneself freely through creativity, Barnes also finds and states that fulfilment is formed and sustained to higher levels if an individual knows that others are benefiting from their creativity as well. In addition, Barnes’s findings affirm that individuals feel competent and fulfilled, and are more capable of growth and progress, extended and progressive learning and sustained relationships, when they feel that they are directly engaging and working with their creative strengths and talents.
Taking this into consideration, when we think about the factors that teachers and educators often cite as burnout factors, such as exhaustion, lack of development opportunities, lack of resources and fraught relationships with colleagues and leaders, Barnes’s finding places creative expression with personal meaning at the centre of the prospect of growth and learning, which leads to fulfilment and results in resilience. It’s the idea that teachers, as professional and educated individuals, should be provided with opportunities to express their creativity and talent, not be stifled, overlooked or assigned CPD that does not suit their creative strengths. It’s the realisation and understanding of the teacher as an individual which then motivates teachers to actualise their potential, build resilience and remain in teaching because they feel recognised by colleagues and leaders, forming a strong sense of identity and self-worth.
‘Reasons given for teacher dissatisfaction include: lack of support, increasing complexity, discipline problems, pension changes and declining resources.’ (p74)
Friendship and Community
Barnes states that ‘enduring relationships are crucial to personal well-being and the sustainability of communities’ and that ‘friendships and communities grow from shared, active situations where people support each other in addressing genuine challenges.’ (p84) When teachers are engaged in this manner, they become more receptive, are open to new relationships as well as to sustaining the ones they already have, and are relaxed and confident in their role. The key conditions here are shared efficacy and camaraderie, a collective effort and support network that you form with others so that you can fulfil your roles and responsibilities as a teacher.
Teacher Education and Children
Teacher education here refers to initial teacher training provision and how it must change to create the conditions found above, which would aid and provide opportunities for teachers to develop fulfilment and resilience in their roles. ‘Contributions from neuroscience and psychology add weight to my argument that teacher education must change if we are positively to affect the resilience of teachers’ (p84) – it’s the idea of looking at personality and psychology, as mentioned earlier, and how values, creativity and friendship, in addition to the sub-categories of efficacy evaluation in this project, are to be emphasised, worked on and considered when recruiting, training and retaining teachers. Without sounding too simplistic, the findings that Barnes presents are elements of an effective formula for a very important and serious question that education leaders should be asking – how do we keep our teachers? The emphasis here returns to knowing teachers not just as employees and within their teacher persona but as whole individuals, understanding their expertise, experiences, values, interests and strengths and allowing them to flourish within and through these whilst providing development and support opportunities. This seems to be the foundational condition for good resilience in teachers, which then allows for creative and value-driven expression and also fosters and sustains good relationships; the growth potential is perpetual.
‘…teacher education courses in England devote very little time to developing positive, sustainable and moral attitudes towards self and teaching, and even less towards helping trainees mature as individuals.’ (p74)
Conclusion
There are clear strengths and limitations to this project.
By delving into personal histories, characters and opinions with a sample of ten participants including himself, Barnes uncovers some very important, valid and useful findings that, even through hypothetical application, can be considered effective in understanding what makes teachers feel fulfilled in their roles. The term ‘fulfilment’ can be defined in a number of ways, and the fact that Barnes does not define the term in the project does not weaken or skew his findings because, true to the discussions of self-efficacy, character, personality and psychology, fulfilment is a subjective term – what fulfilment is to one teacher may not be to another. This being said, if I say here that fulfilment means achieving something or meeting a standard set by an individual, is there really an argument that this is something we do not want our teachers to have? The problem is not that fulfilment is subjective and difficult to define, but that so many teachers do not feel fulfilled; if they did, they wouldn’t be leaving the profession. Leaders need to take Barnes’s declaratives and turn them into interrogatives and ask themselves:
1. Do I know my staff well as individuals and as a group?
2. Do I know what their individual and shared values are and where their strengths lie?
3. Do we provide relevant development and support opportunities for our staff to be able to express their values through creative contexts, and do we help them face the challenges of being a teacher?
4. How strong are our colleague relationships?
5. How strong is the rapport and relationship between leaders and staff?
Although this is not an exhaustive list of the questions that can emerge from this study, positive responses to these questions would be good standards to use to establish a culture and climate where teachers are able to build resilience and find fulfilment. These are very important considerations for school leaders to have when forming performance management, CPD, staff recruitment and retention strategies if they want to achieve a staff body that thrives in its professional life. Surely this is the ultimate goal, isn’t it? In terms of limitations, we have a small sample of male teachers who are friends, from which these findings have been uncovered. It can therefore be argued that pre-existing friendship made it easier for Barnes to draw correlations and identify themes such as family, friends, love, religion, work and place, as friendship is usually formed with people in whom we find similarities, shared values and morals, as well as shared histories and experiences. Additionally, the lack of gender, ethnic or age variation within the sample can also be considered an influential factor in the correlated and thematic findings, rendering the framework limited and the findings skewed and too narrow to be applied to the wider teacher workforce. To verify Barnes’s findings, it would be interesting to apply them to a random, wider and more representative sample of the British teacher workforce so that their validity and impact can be accurately evaluated. Are these conditions for good resilience only applicable to Barnes and his nine friends, or can they be useful for other teachers who do not share the same values, experiences, autobiographies, religion, moral codes and characters as the participant sample?
Looking at the evidence of the effects of Barnes’s findings on the participants in this project, I am led to believe that they can have a wider effective application and relevance, but I wonder what other common themes would come from a wider application that would better inform what leads to fulfilment in teaching. Barnes concludes his paper with a very important statement: ‘If resilience is related to congruence between personal and institutional values, if supportive and flexible environments nurture resilience and if resilience grows by identifying and extending our creative strengths, teacher education should be founded upon these things. Teachers remain the most important resource for the education of children.’ (p85) Can anyone reasonably disagree with this?
References:
Barnes, J. (2013). What Sustains a Fulfilling Life in Education? Journal of Education and Training Studies, 1(2).
https://teachcare.co.uk/review-what-sustains-a-fulfilling-life-in-education/
With the explosion in brain research during the past 10 to 15 years, scientists now know more about the brain than ever before. Advancements in brain training software designed to improve memory, attention and other aspects of cognition offer organizations a new way to think about, develop, and deliver training. These advances are paving the way for a fundamental transformation of the traditional and outdated approach to corporate learning. So how is this research set to rewire the corporate brain, and the world of eLearning? I was fortunate to sit down with Dr. Alice Kim of the Rotman Research Institute at Baycrest Health Sciences in Toronto, a premier international center for the study of the human brain, along with Carol Leaman, CEO of Axonify, to discuss what the latest brain research has to offer in terms of practical, accessible, and scientifically proven alternatives to traditional corporate learning. Our CEO Carol Leaman (Left) and Dr. Alice Kim (Right).
Q: What has led to the explosion in brain/memory research over the last decade? Dr. Kim: There are a number of factors that have contributed to the recent surge in brain research, but certainly, advancements in state-of-the-art imaging technologies and methods of analysis are allowing neuroscientists to investigate the brain with more precision and in greater detail than ever before. Advancements in machinery, including, for example, simultaneous functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) recordings, which provide high structural and temporal resolution of brain activity, are enabling scientists to uncover new insights about cognition and the brain.
Q: Can you summarize some of the key findings? Dr. Kim: Well, we know that there are multiple factors that affect our brain and cognition. For example, there is a lot of research demonstrating the effects of age and lifestyle. We know that as we age, on average, many aspects of cognition become less efficient. This would include the speed at which we can process information, as well as our long-term memory. At the same time, other aspects of cognition remain relatively resistant to cognitive aging, including, for example, our world knowledge. In terms of lifestyle, the science tells us that regular exercise, healthy eating and sleep habits, as well as social activities and interactions, all benefit our cognition. The science also shows us that our brains are not static, and that they can change in form and structure as a result of our lifestyle choices. This ability of the brain to constantly change throughout the lifespan is referred to as neuroplasticity. Whereas a healthy lifestyle will lead to positive changes in the brain and cognition, for example an increase in the number and strength of connections between neurons, an unhealthy lifestyle will lead to the opposite.
Q: Can you discuss in more detail the one aspect of cognition that we’re all consumed with – memory? Dr. Kim: Very generally, there are two types of memory. The first type is called declarative memory. If you can tell me what you did last weekend, you’re tapping into your declarative memory. The second type of memory is called non-declarative or procedural memory, and this type of memory comes into play when you read, tie a shoelace or ride a bicycle. Past research has identified principles of memory that we can all use to enhance our memory in our daily lives. This includes the ‘spacing effect’ and the ‘testing effect’.
Q: Can you elaborate on the ‘spacing effect’ and the ‘testing effect’ and how they enhance memory? Dr. Kim: The ‘spacing effect’ and the ‘testing effect’ are two of the most robust findings in memory research. The spacing effect (also referred to as ‘distributed practice’ or ‘interval reinforcement’) is a well-documented practice of “drip feeding” information over time with specific spacing in between. Long-term retention of the information in question improves as the spacing between repeated study events increases. Basically, spacing is the opposite of cramming where the same information is practiced repeatedly within a short span of time, and it has been proven to benefit long-term retention in both the lab and in real-world settings. And although we know that cramming may work for the very short term, it doesn’t promote long-term retention. So one cognitive strategy that can be used to improve memory is to space out or distribute our study/practice sessions over time, and this will enhance our knowledge retention. The testing effect refers to the finding that once information can be retrieved from memory, repeatedly retrieving this information is more effective for long-term retention compared to repeated study. In light of this finding, tests and quizzes should not only be regarded as a means of assessing what has been learned, but also as an effective learning tool. In our everyday lives, we can make a habit of quizzing ourselves and frequently retrieving any piece of information that we want to remember. When retrieval practice is combined with spacing, it is referred to as spaced retrieval. This combination has been shown to be very effective for long-term retention. Basically, what this means for us is that we should actively retrieve the information we want to remember, and that we should space out our retrieval over time, as opposed to cramming it into a short span of time.
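To make the combination of spacing and retrieval practice concrete, here is a minimal scheduling sketch in Python. The interval values and growth factor are illustrative assumptions, not parameters taken from the research discussed above; real systems tune these empirically.

```python
from datetime import date, timedelta

def next_review(last_interval_days, recalled_correctly,
                growth_factor=2.0, first_interval_days=1):
    """Days to wait before the next retrieval attempt.

    A successful retrieval widens the spacing (the spacing effect);
    a failed retrieval shrinks it so the item comes back soon.
    The growth factor and starting interval are illustrative only.
    """
    if not recalled_correctly or last_interval_days is None:
        return first_interval_days
    return round(last_interval_days * growth_factor)

# Example: an item recalled correctly three times, then missed once.
interval = None
review_day = date.today()
for outcome in [True, True, True, False]:
    interval = next_review(interval, outcome)
    review_day += timedelta(days=interval)
    print(f"next retrieval in {interval} day(s), on {review_day}")
```

Running the loop prints intervals of 1, 2, 4 and then back to 1 day, which is the "space out successful retrievals, revisit failures quickly" pattern the interview describes.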
https://axonify.com/blog/qa-with-a-brain-scientist-the-impact-of-the-latest-research-on-corporate-learning/
ABSTRACT: In this review we detail the impact of climate change on marine productivity, on marine environmental stochasticity and cyclicity, and on the spatio-temporal match–mismatch of seabirds and their prey. We thereby show that global warming has a profound bottom-up impact upon marine top-predators, but that such effects have to be studied in conjunction with the (top-down) impact of human fisheries upon seabird food resources. Further, we propose seabird ecological features, such as memory effects and social constraints, that make them particularly sensitive to rapid environmental change. We provide examples of how seabirds may nonetheless adapt when facing the consequences of climate change. We conclude that our understanding of the spatial ecology of seabirds facing environmental change is still rudimentary, despite its relevance for the conservation of these vulnerable organisms and for the management of marine ecosystems. We define the following research priorities. (1) Determine the factors affecting seabird distribution and movements at sea using biotelemetry, as well as colony dynamics on land. (2) Link seabird distribution patterns to those of their prey. (3) Determine further the role of historical and metapopulation processes in contributing to the dynamics of the spatial distribution of seabirds. (4) Assess phenotypic plasticity and the potential for microevolution within seabird spatial responses to climate change, since both will greatly affect the quality of modelling studies. (5) Adapt existing models to define and predict the impact of climate change onto seabird spatial dynamics. (6) Synthesize all gathered information to define marine protected areas and further conservation schemes, such as capacity reduction of fisheries. This research effort will require maintaining existing long-term monitoring programmes for seabirds, as well as developing new approaches to permit the integration of processes occurring at various scales, in order to be able to fully track the population responses of these long-lived vertebrates to environmental changes.
https://www.int-res.com/abstracts/meps/v391/p121-137/
In Reply: We thank Drs. Boncyk and Hughes for their letter and interest in the article, “Cognitive Decline after Delirium in Patients Undergoing Cardiac Surgery.”1 The research group at Vanderbilt University, Nashville, Tennessee, has conducted seminal work in understanding delirium and long-term cognitive change after hospitalization. We generally agree with the points discussed in the accompanying letter and are glad to see these points emphasized. Our results show a nonlinear trajectory of cognitive status after surgery up to 1 yr postoperatively. We agree that the findings of no difference in cognition by delirium status at 1 yr should be interpreted cautiously, since the 1-yr assessments were not the primary outcome and the study may be underpowered to demonstrate meaningful differences. Although we discussed this limitation and tried to be appropriately cautious in our interpretation of the 1-yr data, this letter highlights an important limitation of our results. We also agree that examining longer-term trajectories is critical, given that Inouye et al.2 showed increased cognitive decline in delirious patients during extended follow-up—from 1 to 3 yr postoperatively. To this end, we are currently examining the feasibility of obtaining cognitive assessments in our study patients at time points greater than 5 yr after surgery. We also agree that patient-reported outcomes are important to consider to evaluate both statistical and clinical significance of our findings and will consider whether patient-related outcomes that we did collect could provide further insight. Finally, the importance of future observational and interventional studies to illuminate mechanisms for delirium cannot be overemphasized. The epidemiology and risk factors for delirium have been well described; a seminal current challenge is to understand mechanisms for the development and consequences of delirium after surgery and critical illness. Competing Interests Dr. Brown has consulted for and received grant funding from Medtronic (Minneapolis, Minnesota). Dr. Hogue is a consultant and provides lectures for Medtronic/Covidien (Boulder, Colorado) and is a consultant to Merck (Kenilworth, New Jersey). Research Support Supported by grants from the National Institutes of Health, Bethesda, Maryland (grant No. K76 AG057020 to Dr. Brown and grant No. RO1 HL092259 to Dr. Hogue).
https://asa2.silverchair.com/anesthesiology/article/130/5/859/18875/Delirium-after-Cardiac-Surgery-and-Cognitive
Type 1 diabetes (T1D) is an insulin-dependent form of diabetes resulting from the autoimmune destruction of pancreatic beta cells. The past few decades have seen tremendous progress in our understanding of the molecular basis of the disease, with the identification of susceptibility genes and autoantigens, the demonstration of several abnormalities affecting various cell types and functions, and the development of improved assays to detect and monitor autoimmunity and beta cell function. New findings about the disease pathology and pathogenesis are emerging from extensive studies of organ donors with T1D promoted by the JDRF nPOD (Network for the Pancreatic Organ Donor with Diabetes). Furthermore, the establishment of extensive collaborative projects including longitudinal follow-up studies in relatives and clinical trials are setting the stage for a greater understanding of the role of environmental factors, the natural history of the disease, and the discovery of novel biomarkers for improved prediction, which will positively impact future clinical trials. Recent studies have highlighted the chronicity of islet autoimmunity and the persistence of some beta cell function for years after diagnosis, which could be exploited to expand therapeutic options and the time window during which a clinical benefit can be achieved.
https://miami.pure.elsevier.com/en/publications/advances-in-the-etiology-and-mechanisms-of-type-1-diabetes
Center for Therapeutic Community Research (CTCR). 2 World Trade Center, 16th Fl. The CMR is an 18-item self-administered questionnaire. It is designed to measure motivation and readiness for treatment and to predict retention in treatment among abusers of illicit drugs. The instrument consists of four factor-derived scales: Circumstances 1 (external influences to enter or remain in treatment), Circumstances 2 (internal influences to leave treatment), Motivation (internal recognition of the need to change) and Readiness (readiness for treatment). Intended population: adult and adolescent illicit drug misusers in treatment. In the event that respondents are non-literate, the instrument can be read to the respondent. Most clients complete the CMR in less than 10 minutes. The CMR consists of 18 Likert-type items. The respondent uses a 5-point scale to rate each statement from strongly disagree to strongly agree. Items may also be scored as Not Applicable. Circumstances 1 consists of items 1-3, Circumstances 2 consists of items 4-6, Motivation consists of items 7-11 and Readiness consists of items 12-18. Scoring involves reversing the score values for items 4, 5, 6 and 12 (scores of 5=1, 4=2, 3=3, 2=4, and 1=5). The individual score values of each scale are then summed to derive the scale values, and the scale values are summed to derive the Total Score. Not Applicable responses are recoded to the client's mean score for that scale. Scoring takes approximately 5 minutes to sum the scores and compare them to reference scores for the agency. No special credentials are necessary for the administration of the CMR. The major functions of the test administrator are to answer any questions concerning the purpose of the testing, explain the instructions and check the completed instrument. Authors: George DeLeon & Gerald Melnick (see address above). Contact: George DeLeon / Gerald Melnick, Center for Therapeutic Community Research (CTCR). Fax: 1 212 845 4698. There are no costs for the use of the CMR. The CMR can be used as an intake device, clinical treatment planning tool, and research instrument. It is a useful instrument for identifying client risk of early drop-out in different treatment modalities, especially residential therapeutic communities. The authors request that users of the CMR keep them informed about findings obtained with the instrument, so they can update their database and improve the range of comparative data available to clinicians and researchers. Users can contact George DeLeon or Gerald Melnick for advice or help in using the instruments or analyzing the results. The CMR must be used as it appears (complete). If modifications are necessary, the user can consult the authors about potential changes to the items. The authors ask that the CMR not be used as an item pool from which to select a more limited number of items. DeLeon, G., Melnick, G., Kressel, D., & Jainchill, N. (1994). Circumstances, motivation, readiness and suitability (the CMRS scales): Predicting retention in therapeutic community treatment. American Journal of Drug and Alcohol Abuse, 20(4), 495-515. Based upon clinical considerations, scales were developed measuring client perceptions across four interrelated domains: circumstances (external pressures), motivation (internal pressures), readiness and suitability (CMRS) for residential TC treatment. The paper reports findings on the reliability of the CMRS and its validity as a predictor of retention in TC treatment in three cohorts of consecutive admissions to a long-term residential TC.
Discriminant and factor analyses confirm the face validity of the original four rationale scales. Scores distribute into four groups, with most scores in the moderately low to moderately high range. Two cross-validation studies confirm the internal consistency of the scales, and a linear relationship between the separate and total CMRS scores and short-term retention in all three cohorts and long-term retention in two cohorts. The study provides impressive support for the reliability and validity of the CMRS scales as predictors of retention in long-term TCs. Although still experimental, awaiting replication studies and firm conclusions concerning generalizability, the CMRS holds considerable promise for research, theory and practice. DeLeon, G., Melnick, G., & Kressel, D. (1997). Motivation and readiness for therapeutic community treatment among cocaine and other drug abusers. American Journal of Drug and Alcohol Abuse, 23(2), 169-189. In the present study, the CMRS scales are used to assess motivation and readiness for treatment in a large sample of primary alcohol, marijuana, heroin, cocaine and crack cocaine abusers admitted to a long-term therapeutic community. Findings show few significant differences in overall retention or initial motivation and readiness. Initial motivation and readiness scores persist as significant predictors of short-term retention in treatment across most groups. The findings are consistent with prior research emphasising the importance of dynamic factors as determinants of retention. DeLeon, G., Melnick, G., Schoket, D., & Jainchill, N. (1993). Is the therapeutic community culturally relevant? Findings on race/ethnic differences in retention in treatment. Journal of Psychoactive Drugs, 25(1), 77-86. This paper briefly reviews pertinent research and presents findings from recent studies on race/ethnic differences in readiness and suitability for, and retention in, TC treatment. The main instrument used for the study was the CMRS scales. Measures of client perceptions (CMRS levels) were strong correlates of 30-day retention in treatment for the different cultural/ethnic groups, but there were some race/ethnic interactions: the relationship between CMRS scores and 30-day retention in treatment was most stable among Blacks and least stable among Hispanics. One-year retention was related less to initial CMRS scores or to race/ethnic differences, although Blacks tended to maintain their higher retention rates and a more stable relationship between CMRS and retention. Among the low and high scores on the S scale, Hispanics yielded the poorest one-year retention. A framework is outlined for the empirical study of cultural relevance issues in TCs. Melnick, G., DeLeon, G., Hawke, J., Jainchill, N., & Kressel, D. (1997). Motivation and readiness for therapeutic community treatment among adolescent and adult substance abusers. American Journal of Drug and Alcohol Abuse, 23(4), 485-507. The study reports findings from a large-scale study of motivation and readiness differences among adolescents and adults treated in residential therapeutic communities for illicit substance misuse. Data were collected with an instrument assessing circumstances, motivation, readiness and suitability for TC treatment (i.e., the CMRS). The CMRS scores were the largest and most consistent predictors of short-term retention across all age groups.
Although confined to TC treatment samples, the findings support clinical observations about the importance of motivation and readiness factors in the treatment process, regardless of age.
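As an illustration of the scoring procedure described in the instrument summary above (reverse-scoring items 4, 5, 6 and 12, recoding Not Applicable responses to the scale mean, summing each scale, then summing the scales into a total), here is a minimal sketch. The function name and data layout are assumptions for illustration; this is not an official CMR scoring script.

```python
# Items are numbered 1-18; responses are 1-5, or None for "Not Applicable".
SCALES = {
    "Circumstances 1": [1, 2, 3],
    "Circumstances 2": [4, 5, 6],
    "Motivation":      [7, 8, 9, 10, 11],
    "Readiness":       [12, 13, 14, 15, 16, 17, 18],
}
REVERSED_ITEMS = {4, 5, 6, 12}   # 5->1, 4->2, 3->3, 2->4, 1->5

def score_cmr(responses):
    """responses: dict mapping item number (1-18) to 1-5, or None for N/A."""
    scale_scores = {}
    for scale, items in SCALES.items():
        values = []
        for item in items:
            r = responses.get(item)
            if r is None:
                continue                      # N/A handled below via scale mean
            values.append(6 - r if item in REVERSED_ITEMS else r)
        mean = sum(values) / len(values) if values else 0
        n_missing = sum(1 for item in items if responses.get(item) is None)
        scale_scores[scale] = sum(values) + mean * n_missing
    return scale_scores, sum(scale_scores.values())

# Hypothetical example: every item answered 4, with one Not Applicable response.
example = {i: 4 for i in range(1, 19)}
example[5] = None
print(score_cmr(example))
```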
http://www.emcdda.europa.eu/html.cfm/index3597EN.html
Attention Deficit Hyperactivity Disorder (ADHD) is the most prevalent, yet controversial, diagnosis affecting children and young people. This study aims to inform educational practice and challenge the negative outcomes associated with ADHD by exploring the lived experience of young people and their teachers. I use Interpretative Phenomenological Analysis (IPA) with a paired design to explore how student-teacher dyads within a mainstream secondary school conceptualise and experience ADHD. Findings suggest participants' conceptualisations of ADHD and associated treatment (e.g. medication) were widely varied and influenced by their personal experiences. Consequently, I advocate a bio-psycho-social understanding of the condition as beneficial for both students and teachers. Students experienced stigma and isolation but benefitted from positive relationships with teachers. Teachers found it difficult to assess the need for a different approach when teaching students with ADHD, but also recognised positive relationships as factors enabling students' success. This study offers a unique contribution to the substantive topic, and an original application of a multi-perspective IPA design. Implications for professional practice are discussed and I invite further research to build upon the current findings by addressing the experience of female students with ADHD, wider samples of secondary school teachers, and further multi-perspective designs.
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.715642
A REVIEW OF THE EVIDENCE OF LEGACY OF MAJOR SPORTING EVENTS
This review considers the evidence for legacy from major sporting events. It looks across the four themes of the Commonwealth Games Evaluation Project (flourishing, sustainable, active and connected).
8. Areas for future research
8.1 There are some areas which would benefit from further research. On the whole, the evidence base would be improved by more research into the factors which make successful legacies more likely, and on the potential negative consequences of major events.
8.2 There remains a need for further long-term research across each of our themes. More research is required in order to understand the factors which make successful legacies more likely over the long term and to ascertain how transferable the findings are from one event to another.
8.3 There is a particular lack of research into the cultural activities associated with major events, and how these can impact on communities. The evidence base would be improved by further research into long-term impacts of major sporting events on cultural engagement, civic pride and enhanced learning.
8.4 The evidence base would also benefit from longitudinal research amongst host populations. To date there have been no long-term studies which track how the local population is affected over time. In order to capture as wide a range of effects as possible, the evidence suggests that it is important to begin these research projects early in the process, and to continue collecting evidence well after the event.
8.5 Moreover, a greater understanding of the unintended consequences of major sporting events would be valuable. There is significant scope for increasing our understanding of the disparate effects of major events that are not intended by event planners.
8.6 Lastly the evidence base requires more consideration of the effects from 'second tier' events, or those smaller than the Olympics or World Cup. More research into the scale of the effects relative to the event size would help our understanding of the likely legacy of slightly smaller events such as the Winter Olympics and Commonwealth Games.
https://www.gov.scot/publications/review-evidence-legacy-major-sporting-events/pages/9/
In this manner, is countless infinite? Innumerable things are infinite. Things that are countless, multitudinous, myriad, numberless, uncounted, or unnumerable are also called innumerable: you couldn't count them if you tried. Correspondingly, what is the difference between many and infinite? Whatever is finite, as finite, will admit of no comparative relation with infinity; for whatever is less than infinite is still infinitely distant from infinity; and lower than infinite distance the lowest or least cannot sink. Boundless, endless, without end or limits; innumerable. ... With plural noun: infinitely many. One may also ask, what is the difference between countable and infinite? Sometimes, we can just use the term "countable" to mean countably infinite. But to stress that we are excluding finite sets, we usually use the term countably infinite. Countably infinite is in contrast to uncountable, which describes a set that is so large, it cannot be counted even if we kept counting forever. What is the difference between infinite and forever? As nouns, the difference between infinity and forever is that infinity is endlessness, unlimitedness, absence of end or limit, while forever is an extremely long time. Does forever mean infinite? Infinity is forever. ... You've probably come across infinity in mathematics — a number, like pi, for instance, that goes on and on, symbolized as ∞. Astronomers talk about the infinity of the universe, and religions describe God as infinity. Which is more, eternity or infinity? What is the difference between eternity and infinity? Eternity is a concept that is temporal in nature and applies to things that are timeless. Infinity is a concept that applies to things that cannot be counted or measured. ... There is neither a beginning nor an end to eternity. Is omega larger than infinity? This is the smallest ordinal number after "omega". Informally we can think of this as infinity plus one. Are multiples of 5 finite or infinite? The set of numbers which are the multiples of 5 is an infinite set. Are multiples of 6 finite? The answer is: infinitely many multiples. Do numbers end? The sequence of natural numbers never ends, and is infinite. ... So, when we see a number like "0.999..." (i.e. a decimal number with an infinite series of 9s), there is no end to the number of 9s. You cannot say "but what happens if it ends in an 8?", because it simply does not end. What does an infinite solution look like? When a problem has infinite solutions, you'll end up with a statement that's true no matter what. For example: 3 = 3. This is true because we know 3 equals 3, and there's no variable in sight. Therefore we can conclude that the problem has infinite solutions. You can solve this as you would any other equation. Does countless mean innumerable? The adjective "countless" is defined as "too many to count; innumerable; myriad." If you want to make the case that you're using it as a synonym for "myriad," please be prepared to prove that you're speaking of an "indefinitely large number." How do you know if it's finite or infinite? - If a set has both a starting and an end point then it is finite, but if it does not have a starting or end point then it is an infinite set.
- If a set has a limited number of elements then it is finite, but if its number of elements is unlimited then it is infinite. Is 0 finite or infinite? Zero is a finite number. When we say that a number is infinite, it means that it is uncountable, limitless, or endless. What is a finite example? The definition of finite is something that has a limit that can't be exceeded. An example of finite is the number of people who can fit in an elevator at the same time. What is the biggest number that we know? We're starting off with the very impressive googol, which is 10^100 (or, if you're writing the actual number out, it's 1 followed by 100 zeros). To illustrate how enormous a googol is, it's actually larger than the number of atoms in your body. Is Google bigger than infinity? It's way bigger than a measly googol! Googolplex may well designate the largest number named with a single word, but of course that doesn't make it the biggest number. ... True enough, but there is nothing as large as infinity either: infinity is not a number. It denotes endlessness. How big is a Googolplexianth? Googolplex: 1000000000000000000000000000000000 etc. Googol: A very large number! A "1" followed by one hundred zeros. What does 'I love you to eternity' mean? Eternity means "time without end, or infinity," like people who promise to love one another for eternity — they aren't planning to ever split up. Who chained Eternity? Left weak after his death and resurrection, Eternity was chained up by the First Firmament, the first universe to ever exist. It had been watching from the void as the Multiverse passed through each renewal cycle in the hopes that one day it would reclaim its place as everything that is. Is there a symbol for Eternity? Formed as a sideways figure-eight, the infinity symbol is also called the eternity or the forever symbol. The two circles forming the eight appear to have no identifiable beginning or end. The symbol has its origins in mathematics, when the mathematician John Wallis chose it to represent the concept of infinity.
https://moviecultists.com/are-countless-and-infinite-the-same
For any set of points P, the minimum area of an enclosing rectangle with sides parallel to the x and y axes is equal to the product of the difference between the largest and smallest x-coordinates in P and the difference between the largest and smallest y-coordinates in P. For a proof sketch, note that the projection of any such rectangle onto the x-axis must contain the interval [min x-coordinate in P, max x-coordinate in P], or it cannot have contained all points in P. Now consider the four largest x-coordinates (allowing repeats) over all points in P. After we remove at most 3 points, the resulting largest x-coordinate must be one of these four values. Similarly, after we remove at most 3 points, there are only four possible values for the smallest x-coordinate, the largest y-coordinate, and the smallest y-coordinate. Since there are just four candidates for each side of the new rectangle, there are at most $4^4 = 256$ possible rectangles that could result from removing 3 points! For each candidate rectangle, we iterate through all points in P to count how many points lie outside of it. If this count is less than or equal to 3, we have a valid rectangle, and should compute its area. The final answer is the minimum of all valid rectangle areas.
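The candidate-enumeration argument above translates directly into a short brute-force routine. The sketch below is an illustrative implementation of that idea, not the official contest solution code; the function name and the (x, y) tuple format are assumptions.

```python
from itertools import product

def min_area_after_removing_3(points):
    """Minimum axis-aligned bounding-rectangle area after discarding
    at most 3 points, using the four-candidates-per-side observation."""
    xs = sorted(x for x, _ in points)
    ys = sorted(y for _, y in points)
    # Four candidates for each side of the shrunken rectangle.
    lo_x, hi_x = xs[:4], xs[-4:]
    lo_y, hi_y = ys[:4], ys[-4:]
    best = None
    for x1, x2, y1, y2 in product(lo_x, hi_x, lo_y, hi_y):   # at most 256 combos
        if x1 > x2 or y1 > y2:
            continue
        # Count points lying outside this candidate rectangle.
        outside = sum(1 for (x, y) in points
                      if x < x1 or x > x2 or y < y1 or y > y2)
        if outside <= 3:
            area = (x2 - x1) * (y2 - y1)
            best = area if best is None else min(best, area)
    return best
```

Each of the 256 candidates is checked against all points, so the whole routine runs in O(256 * n) time, matching the counting argument in the text.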
https://www.usaco.org/current/data/sol_reduce_silver_open16.html
Range, when talking about math, has two related meanings. For a set of numbers, the range is the maximum minus the minimum: you take the lowest number in the set and subtract it from the highest number. Example: for 1, 3, 5, 7, 9, the range is 9 - 1 = 8. Another example: for 12, 18, 34, 24, 64, 53, 24, 25, 64, the number sentence is 64 - 12 = 52, so the range is 52. (In fifth grade math, the 'landmarks' of a data set are the mean, median, mode and range.) For a function or relation, the domain is the set of possible inputs (the x values) while the range is the set of possible outputs (the y values), that is, the set of values a given function can take as its argument varies; for some functions this runs from negative infinity to infinity.
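To make the data-set sense of range concrete, a minimal sketch (the function name is an illustrative choice):

```python
def data_range(values):
    """Range of a data set: maximum minus minimum."""
    return max(values) - min(values)

print(data_range([1, 3, 5, 7, 9]))                       # 8
print(data_range([12, 18, 34, 24, 64, 53, 24, 25, 64]))  # 52
```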
https://math.answers.com/Q/How_do_you_do_the_range_in_math
4. A chain of causes cannot be infinite. To take away the cause is to take away the effect. Therefore, if there be no first cause among efficient causes, there will be no ultimate, nor any intermediate, cause. What is an infinite causal chain? (4.1) Any infinite cause/effect chain would have no first member (no "first cause"). [by definition] (4.2) If a causal chain has no first member, then it will have no later members. [since to take away the cause is to take away the effect] (4.3) But there exists a causal chain with later members. Is an infinite regress of causes possible? The mere existence of an infinite regress by itself is not a proof for anything. So in addition to connecting the theory to a recursive principle paired with a triggering condition, the argument has to show in which way the resulting regress is vicious. Is infinite regress a fallacy? The fallacy of infinite regress occurs when this habit lulls us into accepting an explanation that turns out to be iterative, that is, the mechanism involved depends upon itself for its own explanation. What is infinite regress in the cosmological argument? An infinite regress is an infinite series of entities governed by a recursive principle that determines how each entity in the series depends on or is produced by its predecessor. An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress. Is time finite or infinite? As a universe, a vast collection of animate and inanimate objects, time is infinite. Even if there was a beginning, and there might be a big bang end, it won't really be an end. The energy left behind will become something else; the end will be a beginning. Is infinity a contradiction? The paradoxes of infinity are not exclusive to lines and circles. It's not just that "an infinite circle" is a contradiction. It's that "an infinite X" is a contradiction, regardless of what X is. There is an underlying logical reason why actually-infinite things cannot exist. Can the past be infinite? Each year is separated from any other by a finite number of years (remember that there's no first year). There never was a time when the past became infinite because no set can become infinite by adding any finite number of members. So, if the past is infinite, then it has always been infinite. What causes infinite regression? You are talking about an infinite regress of causes. Every cause must be preceded by another cause ad infinitum. In philosophy, an infinite regress is an indication of absurdity. The necessity for a prime cause (uncaused by a prior cause) to combat this absurdity is an argument for the existence of God. Is infinite regress a contradiction? 1.1 Regress and Contradiction. One such kind of case is when the very same principles of a theory that generate the regress also lead to a contradiction. If this is so then it does not matter what we think about infinite regress in general; we will of course have reason to reject the theory, because it is contradictory. Did Einstein think the universe was infinite? In contrast to this model, Albert Einstein proposed a temporally infinite but spatially finite model as his preferred cosmology in 1917, in his paper Cosmological Considerations in the General Theory of Relativity. Does the past still exist? In short, space-time would contain the entire history of reality, with each past, present or future event occupying a clearly determined place in it, from the very beginning and for ever.
The past would therefore still exist, just as the future already exists, but somewhere other than where we are now present. Can the universe be infinite? If the universe is perfectly geometrically flat, then it can be infinite. If it's curved, like Earth's surface, then it has finite volume. Current observations and measurements of the curvature of the universe indicate that it is almost perfectly flat. Is infinity a paradox? The paradox arises from one of the most mind-bending concepts in math: infinity. Infinity feels like a number, yet it doesn't behave like one. You can add or subtract any finite number to infinity and the result is still the same infinity you started with. But that doesn't mean all infinities are created equal. Do infinities exist in nature? In practice, the supposed existence of actual infinity in nature is questionable. It seems that because we have a symbol (∞) to represent infinity, many physicists believe its appearance in a theory is no big deal: it is part of the natural order. But this is not the case. Could there be infinite limits in real life? Although the concept of infinity has a mathematical basis, we have yet to perform an experiment that yields an infinite result. Even in maths, the idea that something could have no limit is paradoxical. For example, there is no largest counting number, nor is there a biggest odd or even number. Is infinity actually infinite? On Aristotle's potential–actual distinction, actual infinity is completed and definite, and consists of infinitely many elements. Potential infinity is never complete: elements can be always added, but never infinitely many. Who invented infinity? The English mathematician John Wallis. Infinity is the concept of something that is unlimited, endless, without bound. The common symbol for infinity, ∞, was invented by Wallis in 1655. Is there an infinite amount of anything? This means that there is always a limit on the largest value that can be scientifically measured. So the conclusion is: science (that is, physics) cannot establish the existence of infinite quantities. There is nothing physically infinite. Is omega bigger than infinity? This is the smallest ordinal number after "omega". Informally we can think of this as infinity plus one. Is pi bigger than infinity? Pi is finite, whereas its expression is infinite. Pi has a finite value between 3 and 4, precisely, more than 3.1, then 3.15 and so on. Hence, pi is a real number, but since it is irrational, its decimal representation is endless, so we call it infinite.
https://goodmancoaching.nl/can-there-be-an-infinite-chain-of-causes-effects/
Rationale for having separate decimal floating-point data types. There had been a discussion after the Kona meeting about the advantages and disadvantages of having a separate set of data types for decimal floating point. Below is the case for the model used by N1016.
1/ The fact that there are two sets of floating point types in itself does not mean the language would become more complex. The complexity question should be answered from the perspective of the user's program - that is, do the new data types add complexity to the user's code? The answer is probably no, except for the issues surrounding implicit conversions. For a program that uses only binary floating point types, or uses only decimal fp types, the programmer is still working with three fp types. We are not making the program more difficult to write, understand, or maintain.
2/ Implicit conversions can be handled by simply disallowing them (except maybe for cases that involve literals). If we do this, for CUs that have both binary and dec fp types, the code is still clean and easy to understand.
3/ If we only have one set of data types, and if we provide std pragmas to allow programs to use both representations, then in a large source file with a std pragma flipping the meaning of the types back and forth, the code is actually a field of land mines for the maintenance programmer, who might not be immediately aware of the context of the piece of code. Since the effect of a pragma is a lexical region within the program, additional debugger information is needed to keep track of the changing meaning of data types.
4/ Giving two meanings to one data type hurts type safety. A program may bind by mistake to the wrong library, causing runtime errors that are difficult to trace. It is always preferable to detect errors at compile time. Overloading the meaning of a data type makes the language more complicated, not simpler.
5/ A related advantage of using separate types is that it facilitates the use of source checking/scanning utilities (or scripts). They can easily detect which fp type is used in a piece of code with just local processing. If a std pragma can change the representation of a type, the use of grep, for example, as an aid to understanding and searching program text would become very difficult.
6/ Suppose the standard only defines a library for basic arithmetic operations. A C program would have to code an expression by breaking it down into individual function calls. This coding style is error prone, and the resulting code difficult to understand and maintain. A C++ programmer would almost certainly provide his/her own overloaded operators. Rather than having everyone come up with their own, we should define them in the standard. If C++ defines these types as classes, C should provide a set of types matching the behavior. Relatively speaking, this is not a technical issue for the implementation, as it might seem on the surface initially - i.e. it might seem easier to just tag new meaning onto existing types using a compiler option - but is an issue about usability for the programmer. The meaning of a piece of code can become obscure if we reuse the float/double/long double types. Also, we have a chance here to bind the C behavior directly to IEEE, reducing the number of variations among implementations. This would help programmers writing portable code, with one source tree building on multiple platforms. Using a new set of data types is the cleanest way to achieve this.
Below captures the comments received on N1016.
This may serve as the starting point for technical discussion. The comments are grouped under the section numbers of N1016. To facilitate referencing, they are tagged with "KONA-nn", even though not all of them were collected at the Kona meeting. The first part lists the "outstanding comments"; the second part lists those that have been applied to the current draft. But since the committee hasn't actually gone through any of them in a discussion, we do not mean that the second part is already addressed. We will go through them all at the Sydney meeting, but the first part is probably where we will spend most of the time.
KONA-01 "F.4 fully defines floating to integer conversions: the value converts or raises FE_INVALID. Raising FE_INVALID does not interrupt the program, nor is it a performance hit."
KONA-02 "It would be better if it were changed to a Recommended Practice. Also, it would be more consistent if conversions to unsigned did the modulo wrap. As if it were first converted to a 128-bit signed integer type and then converted to the unsigned type."
KONA-03 "Assuming +infinity and -infinity are representable in the floating type, then all values are in the range of representable values. So, change 'quiet NaN' to 'infinity with the appropriate sign'."
KONA-04 "Need to add words about greater precision and/or range."
KONA-05 "I would rephrase the rules as finding the first type that has an adequate range and precision to meet the model numbers in float.h (i.e. ignoring any 'exceptional' numbers such as subnormals). And would specify a constraint error that a type including NaNs or infinities couldn't be converted to one without them without an explicit cast."
KONA-06 "7.6 What is the difference between floor and down? What is the difference between ceiling and up?"
KONA-07 "7.12, why do you only provide a _Decimal32 macro for QNaN? Seems like _Decimal64 and _Decimal128 versions would also be useful?"
KONA-08 "Your support for Signaling NaNs differs from WG14 paper N1011."
KONA-09 "Why not remove note 1 and make the normative text do the correct thing for IEEE: apply sign, then round."
KONA-10 "Need to add casts to list of allowed places. Also, need to add function return."
KONA-11 "Need to add TC1 of C99. It had a major impact on <fenv.h>."
KONA-12 "DEC_EVAL_METHOD: Add: Except for assignment and casts (both remove any extra range and precision)"
KONA-13 "Add: suitable for use in #if preprocessing directives."
KONA-14 "The *_EPSILON values seem wrong. Should be something like 1e-6DF, 1e-15DF, and 1e-33DF."
KONA-15 "You should also add *_DEN macro symbols for the smallest denormalized number. This is something we forgot to do to <float.h>."
KONA-16 "Need to add to the part on HUGE_VAL*, something about appropriate sign. What about the rounding mode being different than round to nearest? In that case, the result is sometimes the largest finite number instead of infinity."
KONA-17 "Need to add to the part on HUGE_VAL*, something about appropriate sign. What about the rounding mode being different than round to nearest? In that case, the result is sometimes the largest finite number instead of infinity."
KONA-18 "Prototypes are now(?) required (implicit function declaration was removed in C99). You might mean varargs. Imaginary float is NOT promoted to imaginary double, so Dec32 should not promote to Dec64. So, remove all of the 5.5 stuff."
KONA-19 "scanf needs to be able to read into _Dec32."
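As an aside on the rationale above, the practical difference between binary and decimal representation is easy to demonstrate outside of C. The sketch below uses Python's decimal module purely as an analogy for the behaviour the proposed decimal types are intended to provide; it is not the C binding under discussion.

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so the classic
# accounting-style sum drifts:
print(0.1 + 0.2 == 0.3)        # False
print(sum([0.1] * 10))         # 0.9999999999999999

# A decimal representation keeps the value the programmer wrote:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
print(sum([Decimal("0.1")] * 10))                          # 1.0
```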
http://std.dkuug.dk/jtc1/sc22/wg14/www/docs/n1058.htm
- null: Null means the variable has no value at all. Do not confuse null with 0 (zero)! 0 is just a number; null means no value at all, or an empty or non-existent reference.
- undefined: A value that is undefined is a value held by a variable right after it has been created and before a value has been assigned to it.
- boolean: A variable of type boolean may hold the special values true or false. If a number value is used where a boolean expression is expected, a zero value will be treated like false and any other value will be treated as true.
- number: This type is a set of values representing integer and floating point numbers. In ECMAScript, the set of values represents the double-precision 64-bit format IEEE 754 values, including the special values Not-a-Number (NaN), positive infinity (Infinity), and negative infinity (-Infinity).
- string: A variable of type string is - formally speaking - a finite ordered sequence of zero or more 16-bit unsigned integer values. Practically it is just a string, like you may know from C++ or Java. In C it is like a character array, but with 16-bit characters instead of 8-bit C chars. Single characters of a string can be accessed just like in any other major C-like programming language using square brackets. The first character has index 0.
https://jsxgraph.uni-bayreuth.de/wiki/index.php?title=Datatypes_and_variables&oldid=2568
numpy.nan_to_num() in Python The numpy.nan_to_num() function is used when we want to replace nan (Not A Number) with zero and inf with finite numbers in an array. It replaces (positive) infinity with a very large number and negative infinity with a very small (or negative) number. Syntax : numpy.nan_to_num(arr, copy=True) Parameters : arr : [array_like] Input data. copy : [bool, optional] Whether to create a copy of arr (True) or to replace values in-place (False). The in-place operation only occurs if casting to an array does not require a copy. Default is True. Return : [ndarray] New array with the same shape as arr and dtype of the element in arr with the greatest precision. If arr is inexact, then NaN is replaced by zero, and infinity (-infinity) is replaced by the largest (smallest or most negative) floating point value that fits in the output dtype. If arr is not inexact, then a copy of arr is returned. Code #1 : Working with a scalar. Output : Input number : nan output number : 0.0 Code #2 : Working with an array containing inf and nan. Output : Input array : [[ 2. inf 2.] [ 2. 2. nan]] output array: [[ 2.00000000e+000 1.79769313e+308 2.00000000e+000] [ 2.00000000e+000 2.00000000e+000 0.00000000e+000]] Code #3 : Working with an integer (non-inexact) array. Output : Input array : [[2 2 2] [2 2 6]] Output array: [[2 2 2] [2 2 6]]
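The code for the three examples above did not survive extraction; the following is a minimal sketch, assuming inputs consistent with the printed outputs, of what the calls could look like:

    import numpy as np

    # Example 1: a scalar NaN is replaced by 0.0
    x = np.nan
    print("Input number :", x)
    print("output number :", np.nan_to_num(x))

    # Example 2: inf becomes the largest finite float, nan becomes 0.0
    arr = np.array([[2.0, np.inf, 2.0],
                    [2.0, 2.0, np.nan]])
    print("Input array :", arr)
    print("output array:", np.nan_to_num(arr))

    # Example 3: an integer (non-inexact) array comes back unchanged, as a copy
    iarr = np.array([[2, 2, 2],
                     [2, 2, 6]])
    print("Input array :", iarr)
    print("Output array:", np.nan_to_num(iarr))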
https://www.geeksforgeeks.org/numpy-nan_to_num-in-python/
A short introduction to microscope objective lenses and the information printed on them. The enlargement is straightforward. 5x (red), 10x (yellow), 20x (green), 40x (blue) and so on shows how much the subject will be enlarged if the objective is used as intended. Using a 20x objective on a 1mm subject will enlarge it to 20mm on the camera sensor. The next number is usually the numerical aperture (NA), for example 0.14. This value can be used to calculate the resolution (r) and the Depth of Field (DOF). Resolution is the smallest object that the objective can resolve and the DOF is the largest step size that can be used for focus stacking. Read more about NA and Depth of Field (DOF), read more about resolution. Infinity (∞) objectives are intended to be used in combination with another lens – a tube lens. This can be a lens made specifically for this purpose, but a camera tele lens with the correct focal length focused at infinity can work just fine. An objective with the marking 5x, ∞ / 0 and f = 200 printed on the barrel means that the objective will enlarge the subject 5x on the sensor if used with a tube lens with focal length 200mm focused at infinity (∞). The zero (0) means that this objective is optimized to be used without a coverslip (coverslip thickness = 0 mm). Some objectives have the working distance (WD) printed. The WD is the distance from the front of the objective to the subject in mm when the objective is used as intended, in this case 17.5mm. If there is an OFN number this shows the intended maximum field of view for the microscope eyepiece, in this case it is 25mm. Usually, but not always, the image quality falls quite quickly outside a circle with the diameter of the OFN. Note: Microscope objectives have a quite narrow good image circle. You have to test and evaluate to find out what is OK for you. Note: It can be necessary to use tube lenses of different focal lengths. Say that we have an objective with OFN 25mm and a corresponding 25mm good quality image circle that we want to use on a full frame (FF) 24mm*36mm camera sensor. To match the FF sensor's 43.3mm diameter with the 25mm image circle the image has to be enlarged at least 1.73x (= 43.3mm/25mm). A tube lens with a focal length of 350mm will enlarge the image circle 1.75x (= 350mm/200mm) and do the job. In a microscope the enlargement of the image circle is done with a projection lens, usually a 2.5x lens. It is also possible to scale down the image with a shorter tube lens. This can be useful when the image circle is larger than the sensor diameter. Finite (non-∞) objectives can be marked with the intended tube length, measured as the distance in mm between the shoulder of the microscope lens and the shoulder of the microscope ocular. In this case it is 210mm. The distance between the microscope shoulder and a camera sensor is shorter – usually but not always 10mm shorter. So 160mm is 150mm to sensor, 210mm is 200mm to sensor and so on. Using a finite objective that is intended to be used without a coverslip, such as the 210/0 lens above, is the same thing as using a camera objective on a bellows. But a high-NA objective such as this 0.80 objective is quite sensitive to the wrong tube length. Microscope photos of a butterfly scale to show the difference between the correct 210mm tube length and 160mm tube length when using a Nikon M Plan apo 40x NA 0.80 on a microscope. A 2.5x projection lens is used – so this is 100x (= 40 × 2.5) on the sensor.
210 mm tube length viewed at 200% and 160mm tube length viewed at 300%. The red arrows show the place in focus. Objectives that are made for a coverslip have the intended coverslip thickness in mm after the infinity (∞) mark or after the tube length number. This is usually the standard 0.17mm. A “-” means that it works with no coverslip, this is usually the case for low NA lenses say up to NA 0.2. Some lenses can be adjusted to be used with coverslips of different thickness. In this case the lens is marked ∞ /0-2 so it works with no coverslip, a normal coverslip and can be used to view through the bottom of a glass petri dish that is up to 2mm thick.
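The tube-lens arithmetic above is easy to check numerically. Here is a small sketch (the function names are mine, not from the article) for an infinity objective rated for a 200 mm tube lens:

    # On-sensor magnification of an infinity objective used with a tube lens,
    # and the tube lens focal length needed to spread an image circle over a sensor.

    def magnification(nominal_mag, tube_lens_f, reference_f=200.0):
        # e.g. a 5x objective rated for f = 200 used with a 350 mm tube lens
        return nominal_mag * tube_lens_f / reference_f

    def tube_lens_for_sensor(image_circle_mm, sensor_diag_mm, reference_f=200.0):
        # focal length that enlarges the image circle to cover the sensor diagonal
        return reference_f * sensor_diag_mm / image_circle_mm

    print(magnification(5, 200))            # 5.0  (nominal use)
    print(magnification(5, 350))            # 8.75 (1.75x extra enlargement)
    print(tube_lens_for_sensor(25, 43.3))   # ~346 mm, close to the 350 mm example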
https://www.hellberg.photo/gear/microscope-lenses/a-short-introduction/
Randomness and regularity are two sides of the same coin, but what connects them? Kolmogorov complexity, related to both, is one of the strangest ideas of all. This is what this extract from Chapter 7 of my recent book is all about. A Programmer's Guide To Theory, now available as a paperback and ebook from Amazon. At the end of the previous chapter we met the idea that if you want to generate an infinite anything then you have to use a finite program, and this seems to imply that whatever it is that is being computed has to have some regularity. This idea is difficult to make precise, but that doesn’t mean it isn’t an important one. It might be the most important idea in the whole of computer science because it explains the relationship between the finite and the infinite and it makes clear what randomness is all about. Algorithmic Complexity Suppose I give you a string like 111111... which goes on for one hundred ones in the same way. The length of the string is 100 characters, but you can write a short program that generates it very easily, something like: repeat 100 times: print "1". Now consider the string "231048322087232.." and so on for one hundred digits. This is supposed to be a random string – it isn't, because I typed it in by hitting number keys as best I could – but even so you would be hard pressed to create a program that could print it that was shorter than it is. In other words, there is no way to specify this random-looking string other than to quote it. This observation of the difference between these two strings is what leads to the idea of Kolmogorov, or algorithmic, complexity. The first string can be generated by a program with roughly 30 characters, and so you could say it has 30 bytes of information, but the second string needs a program of at least a hundred characters to quote the number as a literal and so it has 100 bytes of information. You can already see that this is a nice idea, but it has problems. Clearly the number of bytes needed for a program that prints one hundred ones isn't a well-defined number - it depends on the programming language you use. However, in any programming language we can define the Kolmogorov complexity as the length of the smallest program that generates the string in question. Andrey Kolmogorov was a Russian mathematician credited with developing this theory of information, but it was based on a theorem of Ray Solomonoff, which was later rediscovered by Gregory Chaitin - hence the theory is often called Solomonoff-Kolmogorov-Chaitin complexity theory. Obviously one way around this problem – that the measure of complexity depends on the programming language – is to use the size of a Turing machine that generates the sequence, but even this can result in slightly different answers depending on the exact definition of the Turing machine. However, in practice the Turing machine description is the one preferred. So complexity is defined with respect to a given description language – often a Turing machine. The fact that you cannot get an exact absolute measure of Kolmogorov complexity is irritating but not a real problem as any two measures can be shown to differ by a constant. The Kolmogorov complexity of a string is just the length of the smallest program that generates it. For infinite strings things are a little more interesting because, if you don't have a program that will generate the string, you essentially don't have the string in any practical sense. That is, without a program that generates the digits of an infinite sequence you can't actually define the string. 
This is also the connection between irrational numbers and non-computable numbers. As explained in the previous chapter, an irrational number is an infinite sequence of digits. For example: 2.31048322087232 ... where the ... means carry on forever. Some irrationals have programs that generate them and as such their Kolmogorov complexity is a lot less than infinity. However, as there are only a countable number of programs and there are an uncountable number of irrationals – see the previous chapter - there have to be a lot of irrational numbers that don't have programs that generate them and hence that have an infinite Kolmogorov complexity. Put simply, there aren't enough programs to compute all of the irrationals and hence most irrationals have an infinite Kolmogorov complexity. To be precise, there is an aleph-zero, or a countable infinity, of irrational numbers that have Kolmogorov complexity less than infinity and an aleph-one, or an uncountable infinity of them, that have a Kolmogorov complexity of infinity. A key theorem in algorithmic complexity is that almost all infinite strings have an infinite Kolmogorov complexity. If this were not so, we could generate the aleph-one set of infinite strings using an aleph-zero set of programs. The irrationals that have a smaller than infinite Kolmogorov complexity are very special, but there are an infinity of these too. In a sense these are the "nice" irrationals - numbers like π and e - that have interesting programs that compute them to any desired precision. How would you count the numbers that had a less than infinite Kolmogorov complexity? Simple: just enumerate all of the programs you can create by listing their machine code as a binary number. Not all of these programs would generate a number, indeed most of them wouldn't do anything useful, but among this aleph-zero of programs you would find the aleph-zero of "nice" irrational numbers. Notice that included among the nice irrationals are some transcendentals. A transcendental is a number that isn't the root of any finite polynomial equation. Any number that is the root of a finite polynomial is called algebraic. Clearly, for a number that is the root of a finite polynomial, i.e. not transcendental but algebraic, you can specify it by writing a program that solves the polynomial. For example, √2 is an irrational, but it is algebraic and so it has a program that generates it and hence it’s a nice irrational.
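Kolmogorov complexity itself is not computable, but a general-purpose compressor gives a crude upper-bound proxy for the intuition above: a regular string compresses to far fewer bytes than a keyboard-mashed one. A minimal sketch (mine, not the book's):

    import zlib

    regular = "1" * 100                                  # generated by a tiny rule
    mashed = "2310483220872329184773021958473625109384756177204956382910467581"  # typed "at random"

    for name, s in (("regular", regular), ("mashed", mashed)):
        print(name, "length:", len(s), "compressed:", len(zlib.compress(s.encode())))
    # The regular string shrinks to a handful of bytes; the mashed one shrinks far
    # less, mirroring the "short program" vs. "quote the literal" distinction.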
https://i-programmer.info/programming/theory/13793-programmers-guide-to-theory-kolmogorov-complexity-and-randomness.html
Differences Between null and undefined: The null and undefined data types seem to represent nothing, but they're different. One needs to distinguish between these and know when to use which one to avoid runtime bugs. This article discusses null and undefined and their difference. null is a value on its own. It's a value assigned to variables that are left blank. It can be referred to as an empty value.
Why Do People Confuse null and undefined? null and undefined are the same in the following aspects:
- Both are primitive data types.
- If you compare them using the equality operator ==, it returns true. null == undefined; // returns true
- undefined and null both yield false when used in an expression. !!undefined; // false !!null; // false
Differences Between null and undefined:
- Type Difference: console.log(typeof(null)); // object console.log(typeof(undefined)); // undefined In the example above, we use the typeof operator to check their data types. It returns the null data type as an object and the undefined data type as undefined. Although the equality operator == returns true, the identity operator === will return false since they are equal in value but have a different data type.
- Arithmetic Operations: null behaves like a number with no value when performing arithmetic operations. Performing operations gives results similar to as if null were 0. console.log(4+null); // 4 console.log(4*null); // 0 console.log(4-null); // 4 console.log(4/null); // Infinity It's important to note that although it may be treated as 0 in arithmetic operations, the value of null is not 0. Meanwhile, undefined returns NaN when used in arithmetic operations.
https://www.delftstack.com/howto/javascript/javascript-null-vs-undefined/
program flow – Instructions are always read by the computer one statement at a time, from the top of your program to the bottom.
quick review – conditional statements - if, if/else
Lexical Structure – the set of elementary rules that specifies how you write programs in that language
Literals – Fixed values are called _______
Identifiers are... – ...names used to identify variables, keywords and functions; provide labels for certain loops in JS
Reserved words – Keywords that are reserved for the language itself and cannot be used as variable names
What is the general rule about how JS interprets whitespace at the end of a line? – JS treats the end of a line as a semicolon if it can't parse the second line as a continuation of the first.
Primitive and objects – Two categories for JS types
Types – The kinds of values that can be stored and manipulated in a programming language
What are the five primitive types? – numbers, strings, boolean, null and undefined; everything else is an object
Definition of an object – A collection of properties where each property has a name and a value
Constructor – A function that is written with the new keyword to initialize a newly created object
Class – When you make a new constructor, you are making a _______
Null and undefined – What are the only values that cannot have methods invoked on them?
Mutable vs immutable – A value of a mutable type can change
What data types are immutable? – numbers, strings, booleans, null and undefined are immutable
What does it mean that JS variables are untyped? – You can assign a value of any type to a variable and you can later assign a value of a different type to the same variable
JS returns infinity or negative infinity – When a value becomes larger than the largest representable number, what happens?
Underflow – When the result of a numeric operation is closer to zero than the smallest representable number. In this case, JS returns zero
Infinity or negative infinity – What does division by zero return?
returns NaN – What does zero divided by zero return?
Not equal! – How does NaN compare to any other value including itself?
Why do floating point numbers only give an approximation of the accurate number?
https://www.memrise.com/course/700034/learn-javascript/22/
3: Chapter 2: More on Functions Increasing, Decreasing, & Constant Functions - As you read the graph from left to right the function is: *Increasing: if as the x-coordinates increase the y-coordinates also increase *Decreasing: if as the x-coordinates increase the y-coordinates decrease *Constant: if as the x-coordinates increase the y-coordinates stay the same Relative Max/Min - the relative max is the highest point on a parabola that is open downward while the relative min is the opposite, the lowest point on a parabola open upward.
4: Chapter 3: Quadratic Functions and Equations; Inequalities The Complex Numbers - The Number i: * i = the square root of -1 * i^2 = -1 - Complex Numbers: *a complex number is a number in the form of a + bi where a and b are real numbers. The number a is said to be the real part and b is the imaginary part. - Conjugates: *The conjugate of a + bi is a - bi. The numbers a + bi and a - bi are complex conjugates.
5: The Principle of Zero Products: - If ab = 0 is true, then a = 0 or b = 0, and if a = 0 or b = 0, then ab = 0 The Principle of Square Roots: - If x^2 = k, then x = the square root of k or x = the negative square root of k The Principle of Powers: - For any positive integer n: *If a = b is true, then a^n = b^n is true
6: Chapter 4: Polynomial and Rational Functions Leading Term Test: - if a_n x^n is the leading term of a polynomial function, then the behavior of the graph as x---->infinity or as x---->negative infinity can be described in one of the four following ways
7: The Intermediate Value Theorem: - For any polynomial function P(x) with real coefficients, suppose that for a not equal to b, P(a) and P(b) are of opposite signs. Then the function has a real zero between a and b. The Remainder Theorem - If a number c is substituted for x in the polynomial f(x), then the result of f(c) is the remainder that would be obtained by dividing f(x) by x - c. That is, if f(x) = (x - c) * Q(x) + R, then f(c) = R The Factor Theorem - For a polynomial f(x), if f(c) = 0, then x - c is a factor of f(x). The Fundamental Theorem of Algebra - Every polynomial function of degree n, with n > or = 1, has at least one zero in the set of complex numbers Determining Vertical Asymptotes - For a rational function f(x) = p(x)/q(x), where p(x) and q(x) are polynomials with no common factors other than constants, if a is a zero of the denominator, then the line x = a is a vertical asymptote for the graph of the function.
8: Chapter 9: Systems of Equations and Matrices Row Equivalent Operations - 1. Interchange any two rows 2. Multiply each entry in a row by the same nonzero constant. 3. Add a nonzero multiple of one row to another row. Row-Echelon Form - 1. If a row does not consist entirely of 0's, then the first nonzero element in the row is a 1 (called the leading 1) 2. For any two successive nonzero rows, the leading 1 in the lower row is farther to the right than the leading 1 in the higher row 3. All the rows consisting entirely of 0's are at the bottom of the matrix If a fourth property is also satisfied, a matrix is said to be in reduced row-echelon form 4. Each column that contains a leading 1 has 0's everywhere else
9: Chapter 10.7: Parametric Equations Parametric Equations - If f and g are continuous functions of t on an interval I, then the set of ordered pairs (x, y) such that x = f(t) and y = g(t) is a plane curve. The equations x = f(t) and y = g(t) are parametric equations for the curve. The variable t is the parameter.
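As a small worked example of the Remainder Theorem above (the polynomial is my own choice, not from these notes): take $f(x) = x^3 - 2x + 1$ and divide by $x - 2$. Then $f(x) = (x - 2)(x^2 + 2x + 2) + 5$, and indeed $f(2) = 8 - 4 + 1 = 5$, so the remainder really is $f(2)$; and since $f(2) \neq 0$, the Factor Theorem tells us $x - 2$ is not a factor of $f(x)$.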
https://www.mixbook.com/photo-books/education/a-year-in-review-5376567
Replace nan with zero and inf with finite numbers. Returns an array or scalar replacing Not a Number (NaN) with zero, (positive) infinity with a very large number and negative infinity with a very small (or negative) number. Parameters: x : array_like Input data. Returns: out : ndarray New Array with the same shape as x and dtype of the element in x with the greatest precision. If x is inexact, then NaN is replaced by zero, and infinity (-infinity) is replaced by the largest (smallest or most negative) floating point value that fits in the output dtype. If x is not inexact, then a copy of x is returned. Notes: Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity.
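A brief sketch of the distinction drawn in the Notes (NaN and infinity are different special values, and nan_to_num maps them to different replacements); the array contents here are my own:

    import numpy as np

    a = np.array([np.nan, np.inf, -np.inf, 1.5])
    print(np.isnan(a))               # [ True False False False]
    print(np.isinf(a))               # [False  True  True False]
    print(np.nan_to_num(a))          # [0., largest float64, most negative float64, 1.5]
    print(np.finfo(np.float64).max)  # the "very large number" used to replace +inf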
https://docs.scipy.org/doc/numpy-1.9.1/reference/generated/numpy.nan_to_num.html
In other words, we will always have infinitesimal triangles, and this is why an apparent visual paradox arises. Given that this relation does hold at quadratic order, it is to be expected that the result is true not about the perimeters but about the areas, as Mario Blanco rightly suggested. This is true because removing the squares yields a Riemann sum corresponding to the area of the circle. Note that after the second stage, the squares obtained at the $n$-th stage will not all have the same side length. It is therefore somewhat involved to write a series describing the Riemann sum; however, we know that the sides of the squares tend to 0 as $n$ tends to infinity, and this guarantees that the limit exists and equals $1/4-\pi/4$. Initially I wanted to write out the series, but I would not like to have 20 pages of equations; there are prettier series that describe $\pi$. His comment was not about the result itself, but about the little effort made by mathematicians to clarify this voodoo. This result is very counter-intuitive as it states that if you add all positive integers, you will not only get a negative quantity, but also a fraction! I remember speaking about this result a couple of years ago with a friend who was skeptical about it and he was telling me that it shouldn't be true, since if he grabbed a calculator and started adding 1+2+3+4+..., he wouldn't approach a negative number; on the contrary, it would become bigger and bigger! The trick lies in realizing we are talking about a mathematical object called a series, which is essentially different from any usual finite sum. One thing that can help to see that series are different from just adding a finite number of terms is that sometimes the order in which we add makes a difference! This is called the Riemann rearrangement theorem, and it states that if a series is conditionally convergent, then we can permute its terms to make it converge to any real number, or to make it diverge. There are several concepts that play important roles here. One of them is the idea of dealing with an infinite number of things. It is not natural to operate with an infinite number of objects and a major problem is that we cannot directly apply an algorithm to compute this, as one of the key things about algorithms is that they must end; in other words, they must have a finite number of steps. The right question to ask about such a sum is not how much is it? but rather, what is it? The traditional approach for computing infinite series is by means of sequences of partial sums. This means that since we don't know what it means to add an infinite number of things, we approach infinity in the potential infinity sense, which establishes that infinity is the ability to take an increasing sequence of big numbers forever. In other words, we take a big number of terms, we add them, and we assume that this result should somehow be close to the value of the series. Then we take a bigger number of terms, and a bigger one still, and we keep doing this. Eventually, if the results obtained from the finite sums tend to cluster around a value, we say that that value is the result of the infinite series. Of course writing $\sum 1/2^n=1$ is shorter and conveys the same idea, or at least that's what lazy mathematicians think. Again, the equal sign used here should not be thought of as a comparison between two objects but rather as assigning a value to the series. 
This assignment is very intuitive and we could say it is very natural to think that this is a reasonable way to approach defining the meaning of an infinite series. But this is not the only option. Deeper questions arise when this method of partial sums doesn't provide an answer, as is the case for $\sum n$. The easy way out is simply to say there is no answer and the series diverges. But just as happened with the equation $x^2=-1$, mathematicians saw an opportunity to extend the theory and also assign values to series for which the partial sums method is not enough. It is possible to use the idea of actual infinity instead of potential infinity in order to approach these objects. An actual infinity approach considers having all infinite terms at the same time, as opposed to just a never-ending source of terms. Euler and Leibniz first started to develop ideas around divergent series and one of their key insights was to look at the meaning of a sum rather than its value. Sometimes we abuse the notation and prefer to write it down as $\sum(-1)^n=1/2$, but we have to stress the fact that "=" does not represent a comparison between objects but rather a correspondence that assigns a value to our series. Assign, not evaluate.
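A minimal sketch of the partial-sums procedure described above: the partial sums of $\sum 1/2^n$ cluster around 1, while the partial sums of $\sum n$ just keep growing, which is why the classical definition assigns no value to the latter.

    # Partial sums: sum_{n=1}^{N} 1/2^n settles near 1; sum_{n=1}^{N} n grows without bound.
    for N in (10, 20, 40):
        geometric = sum(1 / 2**n for n in range(1, N + 1))
        naturals = sum(range(1, N + 1))
        print(N, geometric, naturals)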
http://towardsthelimitedge.pedromoralesalmazan.com/2017/
This type implements IComparable, IFormattable, System.IComparable<System.Single>, and System.IEquatable<System.Single>. Represents a 32-bit single-precision floating-point number. Single is a 32-bit single precision floating-point type that represents values ranging from approximately 1.5E-45 to 3.4E+38 and from approximately -1.5E-45 to -3.4E+38 with a precision of 7 decimal digits. The Single type conforms to standard IEC 60559:1989, Binary Floating-point Arithmetic for Microprocessor Systems. The finite set of non-zero values of the form s * m * 2^e, where s is 1 or -1, and 0 < m < 2^24 and -149 <= e <= 104. Positive infinity and negative infinity. Infinities are produced by operations that produce results with a magnitude greater than that which can be represented by a Single, such as dividing a non-zero number by zero. For example, using Single operands, 1.0 / 0.0 yields positive infinity, and -1.0 / 0.0 yields negative infinity. Operations include passing parameters and returning values. The Not-a-Number value (NaN). NaN values are produced by invalid floating-point operations, such as dividing zero by zero. If either of the operands is of type Double, the other operand is converted to Double, and the operation is performed using at least the range and precision of the Double type. For numeric operations, the type of the result is Double. If the magnitude of the result of a floating-point operation is too large for the destination format, the result of the operation is positive infinity or negative infinity. Conforming implementations of the CLI are permitted to perform floating-point operations using a precision that is higher than that required by the Single type. For example, hardware architectures that support an "extended" or "long double" floating-point type with greater range and precision than the Single type could implicitly perform all floating-point operations using this higher precision type. Expressions evaluated using a higher precision might cause a finite result to be produced instead of an infinity. Returns the sort order of the current instance compared to the specified Single . The Single to compare to the current instance. Any negative number Current instance < value. Current instance is a NaN and value is not a NaN. Zero Current instance == value . Current instance and value are both NaN, positive infinity, or negative infinity. Current instance is not a NaN and value is a NaN. Current instance is a NaN and value is not a NaN and is not a null reference. value is a null reference. ArgumentException value is not a null reference and is not of type Single. true if obj represents the same type and value as the current instance, otherwise false . If obj is a null reference or is not an instance of Single, returns false . If either obj or the current instance is a NaN and the other is not, returns false . If obj and the current instance are both NaN, positive infinity, or negative infinity, returns true . Determines whether the current instance and the specified Single represent the same value. true if obj represents the same value as the current instance, otherwise false . If either obj or the current instance is a NaN and the other is not, returns false . If obj and the current instance are both NaN, positive infinity, or negative infinity, returns true . Determines whether the specified Single represents an infinity, which can be either positive or negative. The Single to be checked. true if f represents a positive or negative infinity value; otherwise false . 
Determines whether the value of the specified Single is undefined (Not-a-Number). true if f represents a NaN value; otherwise false . Determines whether the specified Single represents a negative infinity value. true if f represents a negative infinity value; otherwise false . Determines whether the specified Single represents a positive infinity value. true if f represents a positive infinity value; otherwise false . Returns the specified String converted to a Single value. A String containing the value to convert. The string is interpreted using the System.Globalization.NumberStyles.Float and/or System.Globalization.NumberStyles.AllowThousands style. The Single value obtained from s. If s equals System.Globalization.NumberFormatInfo.NaNSymbol, this method returns System.Single.NaN . OverflowException s represents a value that is less than System.Single.MinValue or greater than System.Single.MaxValue. This version of System.Single.Parse(System.String) is equivalent to System.Single.Parse(System.String)(s, System.Globalization.NumberStyles.Float| System.Globalization.NumberStyles.AllowThousands, null ). A String containing the value to convert. The string is interpreted using the style specified by style . Zero or more NumberStyles values that specify the style of s. Specify multiple values for style using the bitwise OR operator. If style is a null reference, the string is interpreted using the System.Globalization.NumberStyles.Float and System.Globalization.NumberStyles.AllowThousands styles. This version of System.Single.Parse(System.String) is equivalent to System.Single.Parse(System.String) (s, style, null). This version of System.Single.Parse(System.String) is equivalent to System.Single.Parse(System.String) (s, System.Globalization.NumberStyles.Float | System.Globalization.NumberStyles.AllowThousands , provider). The Single value obtained from s. If s equals System.Globalization.NumberFormatInfo.NaNSymbol, this method returns NaN. The string s is parsed using the culture-specific formatting information from the NumberFormatInfo instance supplied by provider. If provider is null or a NumberFormatInfo cannot be obtained from provider , the formatting information for the current system culture is used. A String containing a character that specifies the format of the returned string, optionally followed by a non-negative integer that specifies the precision of the number in the returned String. The following table lists the format characters that are valid for the Single type. [Note: For a detailed description of the format strings, see the IFormattable interface. This version of System.Single.ToString is equivalent to System.Single.ToString (null , provider ). This version of System.Single.ToString is equivalent to System.Single.ToString (null , null ). [Note: The general format specifier formats the number in either fixed-point or exponential notation form. For a detailed description of the general format, see the IFormattable interface. A String representation of the current instance formatted as specified by format . The string takes into account the current system culture. This version of System.Single.ToString is equivalent to System.Single.ToString (format, null ). The following example shows the effects of various formats on the string returned by System.Single.ToString . Represents the smallest positive Single value greater than zero. The value of this constant is 1.401298E-45. Contains the maximum positive value for the Single type. 
The value of this constant is 3.40282346638528859E+38 converted to Single . Contains the minimum (most negative) value for the Single type. The value of this constant is -3.40282346638528859E+38 converted to Single . Represents an undefined result of operations involving Single . Not-a-Number (NaN) values are returned when the result of a Single operation is undefined. A NaN value is not equal to any other value, including another NaN value. The value of this field is obtained by dividing Single zero by zero. Represents a negative infinity of type Single . The value of this constant can be obtained by dividing a negative Single by zero. Represents a positive infinity of type Single. The value of this constant can be obtained by dividing a positive Single by zero.
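The reference above describes the CLI Single type; as a rough cross-check in a different environment, numpy's float32 implements the same IEEE 754 binary32 format, so its limits and special values line up with the constants documented here (this sketch is an illustration, not part of the original documentation):

    import numpy as np

    info = np.finfo(np.float32)
    print(info.max)                            # ~3.4028235e+38, cf. Single.MaxValue
    print(info.min)                            # ~-3.4028235e+38, cf. Single.MinValue
    print(np.float32(1e38) * np.float32(10))   # magnitude too large: inf (numpy warns about overflow)

    nan = np.float32("nan")
    print(nan == nan)                          # False: NaN is not equal to any value, itself included
    print(np.float32("inf") > info.max)        # True: infinity exceeds every finite value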
http://dotgnu.org/pnetlib-doc/System/Single.html
If there is one thing our basically empirical culture does not do well, it is the concept of Infinity. Mention it and you’ll probably encounter quite a few raised eyebrows. It is, after all, largely associated with religion – or at least with the idea of an infinite God. But it seems to me that the idea of God is something we human beings have given shape to, while the infinite is without any shape or form whatsoever. In other words, our collective imagination has domesticated the concept of infinity through its picture of God – the Loving Father, The Compassionate and Merciful Allah, the Vengeful Jahweh, or my own Divine Joker. Infinity itself has never been given a space of its own to expand in without being yoked to someone’s idea of an ‘infinite God’. I have suggested elsewhere that the number 5 – or any number you can name – implies a series that stretches from zero to infinity as its background. Yet that infinity is an impossibility, a paradox – a point which Kant made in the Antinomies section of his Critique Of Pure Reason. What is paradoxical about it is that from any empirical perspective, infinity cannot exist, because, however far you take your series of numbers, you can always go further and never reach the end and complete it. Empirical thought loves things with limits, and with infinity you encounter something without limits. On the other hand, Logic demands that infinity must exist for precisely that reason. Cannot and Must are therefore in conflict. What is scandalous about Infinity is that it constitutes an absence of closure – a negation, if you like, of everything finite and familiar. It does to thought, therefore, what Pi – or any irrational number – does to an algorithmic equation. In other words, it is the dark shadow of everything finite, like some kind of trace it will never get rid of. Imagine being shadowed by that which negates you by surpassing you towards an always receding horizon. It is bound to make you feel somewhat superfluous. But so far, we have only dealt with the Infinite as a mathematical concept, which is purely skeletal and lifeless as far as it goes. Surely, there must be more to the concept than this, something which gives flesh to it and also breathes life into it, filling it out, connecting it to ourselves as living beings and the whole of the cosmos of which we are a part – if by the term “cosmos” we imply something more than the physical universe(s) that scientists study. (I say nothing of String-Theory, Brane-Theory, ten, eleven or more dimensions, or the infinite number of parallel universes which split into two or come into being whenever decisions are made involving an either/or choice.) It is on this level of putting flesh onto our mathematical skeleton of infinity that we might be tempted to bring back the hypothetical notion of “God”. I have no objection to this, as long as it is the “God” of Spinoza (or the Spanish Sufi, Ibn al Arabi) – that is to say the “God” which equates with everything else that is normally considered not to be “God”. Spinoza put it this way. “God” and the universe – or, in our case, the cosmos – are one and the same, and they logically must be, because an infinite “God” could not co-exist with anything else. If a separate infinite “God” existed, the universe itself would be pushed out of existence – unless, of course, the universe was an integral part of that infinite “God” – another mode of ‘his’ being as it were. 
A very logical chap was Spinoza, not really cut out to believe in the mumbo-jumbo most religious people believe in – in fact he said as much himself, “Religion is organized superstition. It is based on the fears of naive ordinary people in the face of unpredictable nature, and clever power-hungry leader-types use those fears to control people.” – and this fact has made him persona non grata to priests and rabbis ever since. (He was excommunicated from the Amsterdam Synagogue.) It is through “God”, by way of Spinoza, that we might come back to the idea of more a rounded infinity permeating every finite part of itself, myself included, along with the pen I hold in my hand. It makes sense to me to think of everything as part of a continuum which absorbs everything else. What I have never found convincing in the natural and physical sciences is the way everything seems to be neatly divided from everything else – dogs from cats, atoms from other atoms, plants from animals, rocks from water, water from air, me from you and ultimately what people tend to call ‘mind’ from something else they tend to call ‘matter’ and place them all in discrete categories, thereby putting them into neat little boxes. The only relations such discrete things can have with each other in such a cosmos could be ‘interactions’ with other discrete things – that is to say relations that come from outside. This paradigm appears to be breaking down in Quantum Mechanics with ideas like Quantum Entanglement, but it is still very entrenched elsewhere in science. But back to infinity. I am far from believing in any kind of God. Would ‘God’ not also be another discrete entity, separate from everything else? The point that it is important to establish is that if “the infinite permeates every finite part of itself,” the finite and the infinite must be of the same basic nature with the same characteristics. The only difference is that one has been raised to an infinite power, while the other’s powers are finite. We are assuming, of course, that the cosmos was not created by a creator ‘God’ who pre-existed ‘his’ creation and continues to exist outside of it, having framed its laws and set its co-ordinates to ‘his’ satisfaction and then, to quote Antonin Artaud, “gets the fuck out and leaves the cops to keep an eye on things.” We cannot, of course, reduce even the finite parts of this infinite whole to our own perspective. What we might perceive and what actually is can never be identical. We see through a glass darkly as it were. The thing-in-itself, to borrow from Kant, is inaccessible to us. Nevertheless, I believe that we can infer from the fact that the infinite permeates everything finite that the cosmos in both its finite and infinite modes remains the same cosmos and shares the same nature – one in a finite mode and the other in an infinite mode. Reality is everywhere the same, in other words, and this everywhere extends beyond finite horizons. If this is true, everything in the cosmos – including whatever it is that underlies our own consciousness – is shared by everything else. All ‘matter’ in other words has a ‘mental’ or ‘proto-mental’ dimension. The difference is only one of degree, not kind. For example, what organises matter on an atomic plane is of the same nature as what organises matter in the human brain, giving rise to human consciousness. Atoms are simple, though they may well be complex compared to the sub-atomic particles of which they are composed. 
The human brain, on the other hand, is much more complex – as befits the tasks it has to perform and functions it has to fulfil. However, brains and atoms share in the same underlying nature, and embody the same impulse towards self-organisation. And that is probably true throughout the whole infinite cosmos and not just our finite section of it. The whole of being, therefore, is, to use Sartrean-Hegelian terms, in some way being-for-itself when viewed from within and only being-in-itself when it is viewed from the outside. Hegel distinguished between Good Infinity and Bad Infinity. Good Infinity was apparently circular, doubling back on itself, while Bad Infinity was linear and just went on and on and on forever – as a kind of interminable extension of the finite. I am not sure, but in suggesting that the infinite permeates every finite part of itself, I basically agree with Hegel's Good Infinity rather than his Bad Infinity, and that’s certainly a turn up for the books because I never thought I’d ever agree with Hegel on anything.
https://www.chanticleer-press.com/blog/chasing-infinity
The Basics of Infinity Mathematics The fear of impending doom associated with infinity mathematics has largely been removed by detailed foundational work, and most mathematicians are satisfied with the completed infinities. Still, some intuitionists and finitists persist. Let's examine the basics of infinity mathematics and why it's important to understand it. Afterward, we will look at Quantitative notions of infinity and their relationship to the continuum hypothesis. Negative infinity Infinity is a nebulous concept. It can exist when the limits of certain curves extend beyond a limit. This concept is discussed in the field of mathematics. The following are some common misconceptions about negative infinity. Learn about these misconceptions to avoid making the mistake that others have made. You can also find helpful examples of negative infinity in the world of mathematics. Read on to learn more! But, first, let's clarify what infinity really is. The term "non-negative" is the opposite of "positive infinity". It is the result of performing arithmetic operations on a positive and negative integer. Using the "=" operator to test whether the two values are equal is an easy way to determine the difference between the two. Also, it's possible to use the NaN macro to set a variable to infinity and NaN. But if you don't have access to these two types of numbers, you should use the isnan function instead. Positive infinity is an extreme case of a mathematical concept. In this case, the value is positive and unimaginably large. But, there's a catch: a negative infinity exists in reality! If a positive number is greater than a negative one, it's an infinite amount. However, it's not the same as a positive one. The real number line (along with the y-axis) has no "end" or "zero" on it. This means that any number in the universe will exist forever. The concept of negative infinity emerged from a flash experience in Italy in October 2018. It happened while Aspero was on vacation in Italy. He was working on a proof for sizes of infinity when he experienced the phantasm. After he returned to his lab, he contacted his collaborator Ralf Schindler to discuss the new insight. Both men were incredulous at first, but they worked together to turn the insight into logic. Modern mathematics accepts negative infinity as real and accepts that it exists. To make this concept more concrete, it uses a mathematical concept called hyper-real numbers. These are numbers that contain both ordinary (finite) numbers and infinite numbers of various sizes. That way, we can manipulate and study infinite objects. If you're unsure about what infinity actually is, don't hesitate to contact us. Our expert team will be glad to help! Mobius strips The Mobius strip is a non-orientable surface that can be formed by rotating a developable surface. After stretching, the Mobius strip tries to return to the minimum amount of elastic energy. When the Mobius strip is stretched, its shape changes depending on its size. The following diagram shows the Mobius strip in its most basic form. However, it can be expanded to any shape by adding a half-twist to one side. This Mobius strip is often used in jewelry designs, such as engraved pieces. It is also a popular motif for scarves, tattoos, and necklaces. Mobius strips have been used in science fiction and in artwork, including in the recycling symbol and M. C. Escher. Many architectural concepts have been based on Mobius strips. 
If you want to learn more about this amazing strip, read this article. The Mobius strip is a single-sided surface with no boundaries. It is the most famous mathematical conundrum and is an artist's reverie. To understand what this 'strip' is all about, consider an ant on an adventure in space and time. At halfway through its full circuit, it would be upside down. After two loops, it would be back at the beginning. The ant would eventually reach the end of the line, and it would be back where he started. One way to understand the concept of Mobius strips is to make your own. You can do this by cutting paper into strips and then connecting the ends with tape. The Mobius strip is a great tool for learning about different mathematical principles. Whether you are studying in a college or an advanced math course, the Mobius strip is a great tool for learning. And, it can be created by you or a student. To create your own Mobius strip, you can use a square piece of paper or even a small piece of cardboard. The Mobius strip is so remarkably versatile that it's used in many different products, from typewriter ribbons to computer print cartridges. Even Sandia Laboratories used Mobius bands in the design of adaptable electronic resistors. You can also make one at home with a few simple materials, including a pencil and tape. The continuum hypothesis Infinite mathematics is a complex area of mathematical theory, involving infinity of all variables. The continuum hypothesis, as formulated by David Hilbert, states that a set of real numbers has a cardinality of A1. Likewise, a set of natural numbers must be A2-cardinal, or A1-bijective with itself. A set of real numbers can be countably infinite, but it may also be uncountable. In 1938, Kurt Godel proved that the continuum hypothesis does not contradict the ZFC-axioms used by mathematicians in their everyday reasoning. The proof isn't an actual proof that Godel's theory is true. Rather, it describes the continuum hypothesis as true if it were proven true. It is also possible to prove that the continuum hypothesis is untrue. The continuum hypothesis was first proposed in 1878. A team of mathematicians proved the continuum hypothesis in a specific category of sets, known as Borel sets, in which most mathematical sets are contained. After the discovery of these concrete sets, mathematicians speculated that the continuum hypothesis is not true in general. Hilbert believed that the ultimate glory of mankind lay in solving the continuum hypothesis. As far as he was concerned, "we must know and we will know" in 1930. Hilbert's original question was "is there any subset of the real line of n subsets?" Then, he claimed that Hilbert was asking the wrong question and that the solution is simple: the universe is an infinite set of subsets. This sparked the development of the pcf-theory, which reverted Hilbert's independence results in cardinal arithmetic. This theory also allows for provable bounds for the exponential function. The continuum hypothesis is related to many statements in analysis, such as axiomatics, topology, and measure theory. Furthermore, it is independent of the ZFC. Therefore, it is impossible to prove the continuum hypothesis if the ZFC contains intermediate-sized sets. While Hilbert's problem is not yet fully resolved, its negative result has continued to stimulate further research. This paradox is not yet considered closed to the mathematical community, and there is still no consensus about whether it will ever be solved. 
Quantitative notions of infinity The term infinity is used to compare the size of an infinite set. This infinite set can be any number or set of points on a line. When mathematicians first encounter this concept, they are struck by the way it contradicts their intuitive notions of numbers. For instance, medieval thinkers were aware of the paradox of two concentric circles, one with a radius twice as large as the other. In many ways, these two concepts can be used to define infinity. According to the Parmenidian theory, infinity is the limit of a domain of variation. Hence, infinity cannot exist in two places at the same time. On the other hand, Platonic theory states that the domain of variation must be determined completely. Therefore, the notion of infinity must be defined in both theory and practice. The idea of infinity has a deep philosophical background. It is difficult to imagine a philosophical debate about infinity without considering its relationship with mathematics. Aristotle, for instance, conceived infinity in a different way. His view of infinity is still influential in contemporary disputes. Infinity is a privation. In other words, it is not a limitless state of perfection. So, what is the difference between perfection and infinity? Aristotle's concept of infinity was criticized by Cantor in the eighteenth century, a reaction against the anti-metaphysical program. Cantor and others used methods of set theory and theorems to confront the difference between the two types of infinity. But, both views were ultimately in conflict with each other. This is one of the main reasons why modern mathematicians are struggling with infinity. In this sense, the concept of infinity in mathematics can be conceptualized as two-dimensional, or as a three-dimensional space. In both cases, an infinite object can be a closed totality or a set of infinite objects. The latter view also allows the human mind to recognize groups of objects. Thus, the concept of infinite objects is inherently difficult to grasp. Thus, it is crucial to be able to comprehend it properly in terms of the universe.
https://eu-artech.org/infinity-mathematics_4639/
Previous installment: Introduction If you’re ready to explore, then, let’s play a little numbers game, shall we? No bookies, no mob bosses, and no cops to pay off –just you and me and God makes three. OK, first question: How many numbers are there between 1 and 2? If you’re thinking the answer is “none,” think again. You are thinking of integers, also known as counting numbers. Integers are the non-fractional units used to determine the quantity of elements in a finite set. “How many jelly beans in this jar? How many air miles from here to Timbuktu? How many seconds do scientists estimate have elapsed since the Big Bang?” Stuff like that. Fractional or decimal numbers, on the other hand, are typically used in counting only when parents don’t want to deal with the consequences of prescribed disciplinary measures (“1….2….don’t make me count to 3….2 and a half….2 and three-quarters….”) But they are vitally important to all other measurements, and they definitely “count” as numbers. For example, 1.3 is a number between 1 and 2, as is 1.34, and 1.3485796468…. So, how many numbers between 1 and 2 again? If you’re starting to think the answer is “unlimited,” you’ve arrived at a fascinating insight about numbers. The set of all possible integers is what the philosophy of mathematics calls a countable infinity — and for reasons that will become clear, we will also call it an immanent infinity. Picture a horizontal line with 0 at the center, with positive integers spaced evenly apart in progressive fashion on the right, and corresponding negative integers on the left. The line will continue infinitely in both directions, because no matter what integer it reaches, it can always be elongated by adding 1 if it’s a positive number or subtracting 1 if negative. Though the theoretical linear progression never ends, the pattern is established by the integers closest to 0 and it does not deviate, and that makes it a measurable spectrum, similar to linear time. Decimals are a different story, and this is where it gets interesting. For just as any positive integer can be increased by one, the number of decimal places can be increased. So the set of possible decimal places that can be affixed to any integer is also a countable infinity. But the set of possible numbers created by any integer’s set of infinite decimal places is itself not countable. There is, for example, no orderly way to progress horizontally from 3.1 to 3.2 if including all possible numeric values between them. You could say that the potential numbers keep increasing vertically on the horizontal spectrum of integers as decimal places are added. The fact that most such decimal numbers cannot be the result of an algebraic formula makes them “irrational” in mathematical terms, but it doesn’t make them unreal, and they are still potentially useful. (Among these irrational numbers is another uncountable infinite set called transcendent numbers, which have no discernible end to the quantity of decimal places. The most well-known transcendent number is 𝞹, or “pi,” which is the ratio of a circle’s circumference to its diameter. The golden ratio, observed in many patterns in nature, most commonly in spirals, is also a transcendent number. So, in case you are tempted to find the use of such numbers without practical meaning, imagine a world without perfect circles or nautilus shells.) Since “almost all” of the numbers in the vertically infinite set of values between two numbers are transcendent, we could call this type of set a transcendent infinity. 
Let’s be very clear about something before moving on: there is a transcendent infinity between ANY numbers. Not just integers, but decimal numbers as well. By adding one decimal place to 3.1, for instance, we can make nine possible numbers that are less than 3.2, starting with 3.11. But when we add another decimal place to 3.11, there are nine more numbers smaller than 3.12…and so on. Posit any two numbers, no matter how relatively close in proximity on any practical spectrum of measurement, and you can use runaway decimal place explosion to create another transcendent infinity, each with its own infinite set of infinities within. Any. Two. Numbers. Mind blown yet? Notice that we have described two different types of infinity, immanent and transcendent, while having yet touched upon what absolute infinity would mean. Obviously an immanent infinity excludes an infinite set of numbers because it only deals in discrete finite values with repeatable patterns. But a transcendent infinity has limitations as well. After all, the infinite numbers between 1 and 2 exclude all numbers below 1 and above 2! To arrive at a conceptualization of absolute Infinity, we must consider an infinite set of all infinite sets, both countable and transcendent, exclusive of no possible numbers. Absolute Infinity, therefore, is Number itself, or Numerality —the very potential for a numeric value to exist. If you are following me to this point, 1) you deserve some kind of medal, and 2) you now have all the tools you need to understand God and your relationship to God as an individual, on three different levels. A Holy Trinity, one might say. FOOTNOTE : “Almost all” is a fancy mathematical term for the peculiar ratio that results when an infinite set is compared to a finite one. No matter how large the finite set is, it is considered negligible by comparison, while the infinite set never loses its infinitude by being subtracted from or divided.
https://nondualmedia.org/2019/01/09/god-between-the-numbers-pt-1/
Only in set theory is it true that "the last will be first, and the first last," as stated by the conclusion of the parable. Outside of set theory that statement ostensibly appears to be false. Only if infinity exists can a master pay everyone the same wage, no matter how hard or little they work. Infinity is synonymous with God. The existence of infinity in turn implies the existence of zero, based on the inability to diminish infinity by anything more than zero. Zero -- the zero difference between the "wages" paid, and the jealousy that results -- is synonymous with the lack of God in this story. A silver lining to the parable is how it demonstrates that communism can create more jealousy than capitalism does, through the jealousy of those who work less and yet get paid as much.
https://fstdt.com/D5Z5
Toward the end of the 20th century, the standard cosmological model seemed complete. Full of mysteries, yes. Brimming with fertile areas for further research, definitely. But on the whole it held together: the universe consisted of approximately two-thirds dark energy (a mysterious something that is accelerating the expansion of the universe), maybe a quarter dark matter (a mysterious something that determines the evolution of structure in the universe), and 4 or 5 percent “ordinary” matter (the stuff of us—and of planets, stars, galaxies and everything else we had always thought, until the past few decades, constituted the universe in its entirety). It added up. Not so fast. Or, more accurately, too fast. In recent years a discrepancy has emerged between two ways of measuring the rate of the universe’s expansion, a value called the Hubble constant (H0). Measurements beginning in today’s universe and working backward to earlier and earlier stages have consistently revealed one value for H0. Measurements beginning at the earliest stages of the universe and working forward, however, have consistently predicted another value—one that suggests the universe is expanding faster than we had thought. The discrepancy is mathematically subtle but—as subtle mathematical discrepancies magnified to the spacetime scale of the universe often are—cosmically significant. Knowing the current expansion rate of the universe helps cosmologists extrapolate backward in time to determine the age of the universe. It also allows them to extrapolate forward in time to figure out when, according to current theory, the space between galaxies will have grown so vast that the cosmos will look like an empty expanse beyond our own immediate surroundings. A correct value of H0 might even help elucidate the nature of the dark energy driving the acceleration. So far measurements of the early universe looking forward predict one value for H0, and measurements from the recent universe looking backward reveal another. This sort of situation is not rare in science. Usually it disappears under closer scrutiny—and the assumption that it would disappear has reassured cosmologists for the past decade. But the disagreement has, if anything, hardened year after year, each set of measurements growing more and more intractable. And now a consensus on the problem has emerged. Nobody is suggesting that the entire standard cosmological model is wrong. But something is wrong—maybe with the observations or maybe with the interpretation of the observations, although each scenario is unlikely. This leaves one last option—equally unlikely but also less and less unthinkable: something is wrong with the cosmological model itself. For most of human history the “study” of our cosmic origins was a matter of myth—variations on the theme of “in the beginning.” In 1925 American astronomer Edwin Hubble edged it toward empiricism when he announced that he had solved a centuries-long mystery about the identity of smudges in the heavens—what astronomers called “nebulae.” Were nebulae gaseous formations that resided in the canopy of stars? If so, then maybe that canopy of stars, stretching as far as the most powerful telescopes could see, was the universe in its entirety. Or were nebulae “island universes” all their own? At least one nebula is, Hubble discovered: what we today call the Andromeda galaxy. 
Furthermore, when Hubble looked at the light from other nebulae, he found that the wavelengths had stretched toward the red end of the visible spectrum, suggesting that each source was moving away from Earth. (The speed of light remains constant. What changes is the length between waves, and that length determines color.) In 1927 Belgian physicist and priest Georges Lemaître noticed a pattern: The more distant the galaxy, the greater its redshift. The farther away it was, the faster it receded. In 1929 Hubble independently reached the same conclusion: the universe is expanding. Expanding from what? Reverse the outward expansion of the universe, and you eventually wind up at a starting point, a birth event of sorts. Almost immediately a few theorists suggested a kind of explosion of space and time, a phenomenon that later acquired the (initially derogatory) moniker “big bang.” The idea sounded fantastical, and for several decades, in the absence of empirical evidence, most astronomers could afford to ignore it. That changed in 1965, when two papers were published simultaneously in the Astrophysical Journal. The first, by four Princeton University physicists, predicted the current temperature of a universe that had emerged out of a primordial fireball. The second, by two Bell Labs astronomers, reported the measurement of that temperature. The Bell Labs radio antenna recorded a layer of radiation from every direction in the sky—something that came to be known as the cosmic microwave background (CMB). The temperature the scientists derived from it of three degrees above absolute zero did not exactly match the Princeton collaboration’s prediction, but for a first try, it was close enough to quickly bring about a consensus on the big bang interpretation. In 1970 one-time Hubble protégé Allan R. Sandage published a highly influential essay in Physics Today that in effect established the new science’s research program for decades to come: “Cosmology: A Search for Two Numbers.” One number, Sandage said, was the current rate of the expansion of the universe—the Hubble constant. The second number was the rate at which that expansion was slowing down—the deceleration parameter. Scientists settled on a value for the second number first. Beginning in the late 1980s, two teams of scientists set out to measure the deceleration by working with a common assumption and a common tool. The assumption was that in an expanding universe full of matter interacting gravitationally with all other matter—everything tugging on everything else—the expansion must be slowing. The tool was type Ia supernovae, exploding stars that astronomers believed could serve as standard candles—sources of light that do not vary from one example to another and whose brightness tells you its relative distance. (A 60-watt light bulb will appear dimmer and dimmer as you move farther away from it, but if you know it is a 60-watt bulb, you can deduce its separation from you.) If expansion is slowing, the astronomers assumed, at some great length away from Earth a supernova would be closer, and therefore brighter, than if the universe were growing at a constant rate. What both teams independently discovered, however, was that the most distant supernovae were dimmer than expected and therefore farther away. In 1998 they announced their conclusion: The expansion of the universe is not slowing down. It is speeding up. 
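The 60-watt-bulb analogy above is usually written as the distance modulus, m − M = 5 log₁₀(d / 10 pc), where m is the apparent and M the absolute magnitude. The sketch below is not from the article; the magnitudes are illustrative placeholders, and the point is only how a standard candle turns observed brightness into distance.

```python
def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag) / 5.0 + 1.0)

# Illustrative numbers: a type Ia supernova peaks near absolute magnitude -19.3;
# suppose one is observed at apparent magnitude 24.
d_pc = distance_parsecs(24.0, -19.3)
print(f"{d_pc / 1e6:.0f} Mpc")  # roughly 4600 Mpc
```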
The cause of this acceleration came to be known as “dark energy”—a name to be used as a placeholder until someone figures out what it actually is. A value for Sandage’s first number—the Hubble constant—soon followed. For several decades the number had been a source of contention among astronomers. Sandage himself had claimed H0 would be around 50 (the expansion rate expressed in kilometers per second per 3.26 million light-years), a value that would put the age of the universe at about 20 billion years. Other astronomers favored an H0 near 100, or an age of roughly 10 billion years. The discrepancy was embarrassing: even a brand-new science should be able to constrain a fundamental number within a factor of two. In 2001 the Hubble Space Telescope Key Project completed the first reliable measurement of the Hubble constant. In this case, the standard candles were Cepheid variables, stars that brighten and dim with a regularity that corresponds to their absolute luminosity (their 60-watt-ness, so to speak). The Key Project wound up essentially splitting the difference between the two earlier values: 72 ± 8. The next purely astronomical search for the constant was carried out by SH0ES (Supernovae, H0, for the Equation of State of Dark Energy), a team led by Adam G. Riess, who in 2011 shared the Nobel Prize in Physics for his role in the 1998 discovery of acceleration. This time the standard candles were both Cepheids and type Ia supernovae, and the latter included some of the most distant supernovae ever observed. The initial result, in 2005, was 73 ± 4, nearly identical to the Key Project’s but with a narrower margin of error. Since then, SH0ES has provided regular updates, all of them falling within the same range of ever narrowing error. The most recent, in 2019, was 74.03 ± 1.42. All these determinations of H0 involve the traditional approach of astronomy: starting in the here and now, the realm that cosmologists call the late universe, and peering farther and farther across space, which is to say (because the velocity of light is finite) further and further back in time, as far as they can see. In the past couple of decades, however, researchers have also begun using the opposite approach. They begin at a point as far away as they can see and work their way forward to the present. The cutoff point—the curtain between what we can and cannot see, between the “early” and the “late” universe—is the same CMB that the astronomers using the Bell Labs radio antenna first observed in the 1960s. The CMB is relic radiation from the period when the universe, at the young age of 379,000 years old, had cooled enough for hydrogen atoms to form, dissipating the dense fog of free protons and electrons and making enough room for photons of light to travel through the universe. Although the first Bell Labs image of the CMB was a smooth expanse, theorists assumed that at a higher resolution, the background radiation would reveal variations in temperature representing the seeds of density that would evolve into the structure of the universe as we know it—galaxies, clusters of galaxies and superclusters of galaxies. In 1992 the first space probe of the CMB, the Cosmic Background Explorer, found those signature variations; in 2003 a follow-up space probe, the Wilkinson Microwave Anisotropy Probe (WMAP), provided far higher resolution—high enough that physicists could identify the size of primitive sound waves made by primitive matter. 
As you might expect from sound waves that have been traveling at nearly the speed of light for 379,000 years, the “spots” in the CMB share a common radius of about 379,000 light-years. And because those spots grew into the universe we study today, cosmologists can use that initial size as a “standard ruler” with which to measure the growth and expansion of the large-scale structure to the present day. Those measures, in turn, reveal the rate of the expansion—the Hubble constant. The first measurement of H0 from WMAP, in 2003, was 72 ± 5. Perfect. The number exactly matched the Key Project’s result, with the additional benefit of a narrower error range. Further results from WMAP were slightly lower: 73 in 2007, 72 in 2009, 70 in 2011. No problem, though: the error for the SH0ES and WMAP measurements still overlapped in the 72-to-73 range. By 2013, however, the two margins were barely kissing. The most recent result from SH0ES at that time showed a Hubble constant of 74 ± 2, and WMAP’s final result showed a Hubble constant of 70 ± 2. Even so, not to worry. The two methods could agree on 72. Surely one method’s results would begin to trend toward the other’s as methodology and technology improved—perhaps as soon as the first data were released from the Planck space observatory, the European Space Agency’s successor to WMAP. That release came in 2014: 67.4 ± 1.4. The error ranges no longer overlapped—not even close. And subsequent data released from Planck have proved just as unyielding as SH0ES’s. The Planck value for the Hubble constant has stayed at 67, and the margin of error shrank to one and then, in 2018, a fraction of one. “Tension” is the scientific term of art for such a situation, as in the title of a conference at the Kavli Institute for Theoretical Physics (KITP) in Santa Barbara, Calif., last summer: “Tensions between the Early and the Late Universe.” The first speaker was Riess, and at the end of his talk he turned to another Nobel laureate in the auditorium, David Gross, a particle physicist and a former director of KITP, and asked him what he thought: Do we have a “tension,” or do we have a “problem”? Gross cautioned that such distinctions are “arbitrary.” Then he said, “But yeah, I think you could call it a problem.” Twenty minutes later, at the close of the Q and A, he amended his assessment. In particle physics, he said, “we wouldn’t call it a tension or a problem but rather a crisis.” “Okay,” Riess said, wrapping up the discussion. “Then we’re in crisis, everybody.” Unlike a tension, which requires a resolution, or a problem, which requires a solution, a crisis requires something more—a wholesale rethink. But of what? The investigators of the Hubble constant see three possibilities. One is that something is wrong in the research into the late universe. A cosmic “distance ladder” stretching farther and farther across the universe is only as sturdy as its rungs—the standard candles. As in any scientific observation, systematic errors are part of the equation. This possibility roiled the KITP conference. A group led by Wendy L. Freedman, an astrophysicist now at the University of Chicago who had been a principal investigator on the Key Project, dropped a paper in the middle of the conference that announced a contrarian result. 
By using yet another kind of standard candle—stars called red giants that, on the verge of extinction, undergo a “helium flash” that reliably indicates their luminosity—Freedman and her colleagues had arrived at a value that, as their paper said, “sits midway in the range defined by the current Hubble tension”: 69.8 ± 0.8—a result that offers no reassuring margin-of-error overlap with that from either SH0ES or Planck. The timing of the paper seemed provocative to at least some of the other late universe researchers in attendance. The SH0ES team in particular had little opportunity to digest the data (which the scientists tried to do over dinner that evening), let alone figure out how to respond. A mere three weeks later, though, they posted a response paper. The method that Freedman’s team used “is a promising standard candle for measuring extragalactic distances,” the authors began, diplomatically, before eviscerating the systematic errors they believed affected the team’s results. Riess and his colleagues’ preferred interpretation of the red giant data restored the Hubble constant to a value well within its previous confines: 72.4 ± 1.9. Freedman vehemently disagrees with that interpretation: “It’s wrong! It’s completely wrong!” she says. “They have misunderstood the method, although we have explained it to them at several meetings.” (In early October 2019, at yet another “tension” meeting, the dispute took a personal turn when Barry Madore—one of Freedman’s collaborators, as well as her spouse—showed a slide that depicted Riess’s head in a guillotine. The image was part of a science-related chopping-block metaphor, and Madore later said that including Riess’s head was a joke. But Riess was in the audience; suffice to say that the next coffee break included, at the insistence of many of the attendees, a discussion about professional codes of conduct.) Such squabbles cannot help but leave particle physicists figuring that, yes, the problem lies with the astronomers and the errors involving the distance ladder method. But CMB observations and the cosmic ruler must come with their own potential for systematic errors, right? In principle, yes. But few (if any) astronomers think the problem lies with the Planck observatory, which physicists believe to have reached the precision threshold for space observations of the CMB. In other words, Planck’s measurements of the CMB are probably as good as they are ever going to get. “The data are spectacular,” says Nicholas Suntzeff, a Texas A&M astronomer who has collaborated with both Freedman and Riess, though not on the Hubble constant. “And independent observations” of the CMB—at the South Pole Telescope and the Atacama Large Millimeter Array—“show there are no errors.” If the source of the Hubble tension is not in the observations of either the late universe or the early universe, then cosmologists have little choice but to pursue option three: “new physics.” For nearly a century now scientists have been talking about new physics—forces or phenomena that would fall outside our current knowledge of the universe. A decade after Albert Einstein introduced his general theory of relativity in 1915, the advent of quantum mechanics compromised its completeness. The universe of the very large (the one operating according to the rules of general relativity) proved to be mathematically incompatible with the universe of the very small (the one operating according to the rules of quantum mechanics). 
For a while physicists could disregard the problem, as the two realms did not intersect on a practical level. But then came the discovery of the CMB, validating the idea that the universe of the very large actually emerged from the universe of the very small—that the large-scale galaxies and clusters we study with the help of general relativity grew out of quantum fluctuations. The Hubble tension arises directly out of an attempt to match those two types of physics. The quantum fluctuations in the CMB predict that the universe will mature with one value of the Hubble constant, whereas the general relativistic observations being made today are revealing another value. Riess likens the discrepancy to a person’s growth. “You’ve got a child, and you can measure their height very precisely when they’re two years old,” he says. “And you can then use your understanding of how people grow, like a growth chart, to predict their final height at the end.” Ideally the prediction and measurement would agree. “In this case,” he says, “they don’t.” Then again, he adds, “We don’t have a growth chart for how universes usually grow.” And so cosmologists have begun entertaining the radical—yet not altogether unpalatable—possibility that the standard cosmological model is not as complete as they have assumed it to be. One possible factor affecting our understanding of the universe’s growth is an uncertainty about the particle census of the universe. Most scientists today are old enough to remember another imbalance between observation and theory: the “solar neutrino problem,” a decades-long dispute about electron neutrinos from the sun. Theorists predicted one amount; neutrino detectors indicated another. Physicists suspected systematic errors in the observations. Astronomers questioned the completeness of the theory. As with the Hubble constant tension, neither side budged—until the end of the millennium, when researchers discovered that neutrinos, unexpectedly, have mass; theorists adjusted the Standard Model of particle physics accordingly. A similar adjustment now—for instance, a new variety of neutrino in the early universe—might alter the distribution of mass and energy just enough to account for the differences in measurement. Another possible explanation is that the influence of dark energy changes over time—a reasonable alternative, considering that cosmologists do not know how dark energy works, let alone what it is. “There is a small correction somewhere needed to bring the numbers into agreement,” Suntzeff says. “That is new physics, and that is what excites cosmologists—a kink in the wall of the Standard Model, something new to work on.” Everybody knows what they have to do next. Observers will await data from Gaia, a European Space Agency observatory that promises, in the next couple of years, unprecedented precision in the measurement of distances to more than a billion stars in our galaxy. If those measurements do not match the values that astronomers have been using as the first rung in the distance ladder, then maybe the problem will have been systematic errors after all. Theorists, meanwhile, will continue to churn out alternative interpretations of the universe. So far, though, they have not found one that withstands community scrutiny. And there, barring any breakthrough, the tension—problem, crisis—will have to reside for now: in a quasi-unscientific universe harboring a predicted Hubble constant of 67 that belies the observation of 74. 
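A rough way to quantify how far apart the two camps are is to divide the gap by the combined uncertainty. The sketch below uses the SH0ES value quoted earlier (74.03 ± 1.42); the Planck uncertainty of 0.5 is an assumption standing in for the text's "a fraction of one."

```python
import math

def tension_in_sigma(value_a: float, error_a: float, value_b: float, error_b: float) -> float:
    """Separation of two measurements in units of their combined standard error."""
    return abs(value_a - value_b) / math.sqrt(error_a ** 2 + error_b ** 2)

# SH0ES 2019: 74.03 +/- 1.42; Planck 2018: ~67.4 with an assumed error of 0.5
print(f"{tension_in_sigma(74.03, 1.42, 67.4, 0.5):.1f} sigma")  # about 4.4 sigma
```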
The standard cosmological model remains one of the great scientific triumphs of the age. In half a century cosmology has matured from speculation to (near) certainty. It might not be as complete as cosmologists believed it to be even a year ago, yet it remains a textbook example of how science works at its best: it raises questions, it provides answers and it hints at mystery.
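As a back-of-the-envelope companion to the ages quoted earlier in the article (H0 near 50 giving roughly 20 billion years, near 100 giving roughly 10 billion years), the naive Hubble time 1/H0 can be computed directly. This sketch deliberately ignores the expansion history, which shifts the true age by tens of percent.

```python
KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

def hubble_time_gyr(h0_km_s_per_mpc: float) -> float:
    """Naive age estimate t = 1/H0, assuming a constant expansion rate."""
    h0_per_second = h0_km_s_per_mpc / KM_PER_MPC
    return 1.0 / h0_per_second / SECONDS_PER_GYR

for h0 in (50, 67, 74, 100):
    print(f"H0 = {h0:>3}: 1/H0 = {hubble_time_gyr(h0):.1f} Gyr")
```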
https://www.scientificamerican.com/article/how-a-dispute-over-a-single-number-became-a-cosmological-crisis/
# Big Crunch
The Big Crunch is a hypothetical scenario for the ultimate fate of the universe, in which the expansion of the universe eventually reverses and the universe recollapses, ultimately causing the cosmic scale factor to reach zero, an event potentially followed by a reformation of the universe starting with another Big Bang. The vast majority of evidence indicates that this hypothesis is not correct. Instead, astronomical observations show that the expansion of the universe is accelerating, rather than being slowed by gravity, suggesting that the universe is far more likely to end in heat death.
## Overview
The Big Crunch scenario hypothesized that the density of matter throughout the universe is sufficiently high that gravitational attraction will overcome the expansion which began with the Big Bang. The FLRW cosmology can predict whether the expansion will eventually stop based on the average energy density, the Hubble parameter, and the cosmological constant. If the metric expansion stopped, contraction would inevitably follow, accelerating as time passes and finishing the universe in a kind of gravitational collapse. A more specific theory called the "Big Bounce" proposes that the universe could collapse to the state where it began and then initiate another Big Bang, so in this way the universe would last forever, but would pass through phases of expansion (Big Bang) and contraction (Big Crunch). Experimental evidence in the late 1990s and early 2000s (namely the observation of distant supernovae as standard candles, and the well-resolved mapping of the cosmic microwave background) led to the conclusion that the expansion of the universe is not being slowed by gravity but is instead accelerating. The 2011 Nobel Prize in Physics was awarded to researchers who contributed to making this discovery. Physicist Roger Penrose advanced a general relativity-based theory called conformal cyclic cosmology, in which the universe expands until all the matter decays and is turned to light. Since nothing in the universe would then have any time or distance scale associated with it, the late universe becomes identical with the Big Bang (resulting in a type of Big Crunch which becomes the next Big Bang, thus starting the next cycle). Penrose and Gurzadyan suggested that signatures of conformal cyclic cosmology could potentially be found in the cosmic microwave background; as of 2020, these have not been detected.
## Empirical scenarios from physical theories
If dark energy is (mainly) explained by a form of quintessence, driven by a scalar field evolving down a monotonically decreasing potential that passes sufficiently far below zero, and if current data (in particular observational constraints on dark energy) hold up as well, the accelerating expansion of the Universe would reverse to contraction within the cosmic near future of the next 100 million years. According to a study by Andrei, Ijjas, and Steinhardt, the scenario fits "naturally with cyclic cosmologies and recent conjectures about quantum gravity". The study suggests that the slow contraction phase would "endure for a period of order 1 billion y before the universe transitions to a new phase of expansion".
## Effects
Paul Davies considered a scenario in which the Big Crunch happens about 100 billion years from the present. In his model, the contracting universe would evolve roughly like the expanding phase in reverse.
First, galaxy clusters, and then galaxies, would merge, and the temperature of the cosmic microwave background (CMB) would begin to rise as CMB photons get blueshifted. Stars would eventually become so close together that they begin to collide with each other. Once the CMB becomes hotter than M-type stars (about 500,000 years before the Big Crunch in Davies' model), they would no longer be able to radiate away their heat and would cook themselves until they evaporate; this continues for successively hotter stars until O-type stars boil away about 100,000 years before the Big Crunch. In the last minutes, the temperature of the universe would be so great that atoms and atomic nuclei would break up and get sucked up into already coalescing black holes. At the time of the Big Crunch, all the matter in the universe would be crushed into an infinitely hot, infinitely dense singularity similar to the Big Bang. The Big Crunch may be followed by another Big Bang, creating a new universe.
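The blueshift stage of Davies' scenario follows from the standard scaling of the CMB temperature with the cosmic scale factor, T ∝ 1/a. The sketch below is not from the article; it simply assumes today's CMB temperature of about 2.725 K and shows how contraction reheats the radiation bath.

```python
T_CMB_TODAY = 2.725  # kelvin

def cmb_temperature(scale_factor: float) -> float:
    """CMB temperature scales inversely with the cosmic scale factor (a = 1 today)."""
    return T_CMB_TODAY / scale_factor

# In a contracting universe the scale factor shrinks and the CMB photons blueshift:
for a in (1.0, 0.1, 0.001):
    print(f"a = {a:g}: T ~ {cmb_temperature(a):,.0f} K")
```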
https://en.wikipedia.org/wiki/Big_Crunch
In the first part on this topic, the essential attributes of dark matter were described. Dark matter was necessary in order to hold the basic fabric of galaxies together; otherwise, billions of stars at the edges of galaxies would experience a weaker gravitational pull and could even fall away from their galactic orbits. So dark matter was invoked to be present all over the galactic system. In this part, the role of dark energy will be considered. Dark matter may keep an individual galactic system intact and maintain higher orbital speeds for outlying stars, but then what is giving the Universe the impetus to expand? The 'Standard Model' of cosmology predicted that the Universe simply could not exist in a quiescent steady state – it has to be dynamic in nature, meaning it either has to expand or contract. Indeed, in 1929 Edwin Hubble made an astronomical observation, which has since become incontrovertible, showing that the Universe was actually expanding. That made Einstein admit that his cosmological constant, Ʌ (lambda), introduced into the general theory of relativity with a particular value chosen to force a steady-state condition for the Universe, was flawed. For the next 70 years, until 1998, cosmologists implicitly took Ʌ to be zero and the Universe was described as per Einstein's field equations. Nobody thought of discarding the cosmological constant that Einstein had introduced, albeit mistakenly. Then in 1998, even more astounding evidence was produced, based on observations using the Hubble telescope, when it was shown that light from very distant supernovae was fainter than expected and showed redshifts indicating the supernovae were receding, and receding at faster rates the farther they were from Earth. In other words, there was an accelerated expansion of the Universe. The Universe's current expansion rate is known as the Hubble constant, H0, which is estimated to be approximately 73.5 km per second per megaparsec. A megaparsec is a distance of 3.26 million light years. As the speed of light is 3×10^8 m/s, a light year is about 9.46×10^12 km, and 1 megaparsec then equals about 3.08×10^19 km. A galaxy 1 megaparsec away (3.08×10^19 km) would recede from Earth at 73.5 km/s, whereas another galaxy 10 megaparsecs from Earth would recede at 10 times 73.5 km/s = 735 km/s. That was a shocking result and cosmologists were taken completely by surprise. What is providing this gigantic Universe enough energy to expand, and to expand at an accelerating rate? Further observations have demonstrated that this accelerated expansion is in fact taking place in the vast extra-galactic spaces. This came to be known as 'metric expansion'. There is no verifiable evidence of expansion within the individual territories of galaxies. It may indeed be argued that if there were any expansion within a galactic system, then stars would move away from each other and even the planets revolving around the stars would recede. For example, Earth would recede from the Sun along a spiral-like trajectory and would eventually escape the heliosphere entirely! This would be a recipe for total disaster for Earth-bound life like ours, and luckily there is no such evidence of recession.
[Figure: Expansion of the Universe as per the Standard Model]
Albert Einstein's cosmological constant, Ʌ, in the general theory of relativity came to the rescue of this paradox of cosmological expansion. Dark energy was invoked to solve this problem.
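The recession-velocity arithmetic above is simply Hubble's law, v = H0 × d. A minimal sketch using the value of H0 quoted in the passage:

```python
H0 = 73.5  # km/s per megaparsec, the value quoted in the text

def recession_velocity_km_s(distance_mpc: float) -> float:
    """Hubble's law: recession velocity grows linearly with distance."""
    return H0 * distance_mpc

print(recession_velocity_km_s(1))   # 73.5 km/s at 1 megaparsec
print(recession_velocity_km_s(10))  # 735 km/s at 10 megaparsecs
```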
Dark energy is perceived to be the intrinsic energy of empty space, or simply the vacuum energy. It may be pointed out that in the general theory of relativity, space is viewed as the product of the gravitational field. As there is limitless empty space on the cosmological scale, dark energy can also be limitless. Although the precise mechanism generating dark energy is unknown, some of its essential characteristics can be drawn. Dark energy is repulsive in character. Thus, dark energy can be viewed as something that acts on the ordinary (baryonic) matter making up the celestial bodies, but in the opposite direction to ordinary gravity. Some scientists speculate that dark energy may even be a new type of force – a fifth force – which is as yet unknown. The four known forces are the electromagnetic force, the weak nuclear force, the strong nuclear force and the gravitational force, and the properties of these forces are well known. If a fifth force does come into play, it would offer a situation where gravity and anti-gravity coexist in the same Universe. It may be that attractive gravity dominates on the scale of galaxies, whereas repulsive gravity dominates in the vast extra-galactic space! Taking a material accounting of galaxies into consideration, it is estimated that on the basis of mass-energy composition, the Universe is only 4.5% ordinary matter, 26.1% dark matter and 69.4% dark energy. However, this distribution of mass-energy between the observable celestial bodies and the unobservable dark components does not remain fixed or invariant. In the early part of the Universe's history, about 380,000 years after the Big Bang (which occurred 13.8 billion years ago), the distribution of mass and energy was quite different: ordinary matter was about 12% and dark matter about 63%, and there was essentially no dark energy, as shown in the table below. The situation is quite different now, and this shows that the Universe is changing, or one can say evolving.
[Table: the Universe's mass-energy composition]
In the Universe, the amount of ordinary (baryonic) matter is fixed, and as the Universe expands, the average density of ordinary matter is continuously diminishing, since density is the amount of material divided by the volume. Similarly, the dark matter density of the Universe is also decreasing as the Universe expands. But the dark energy density has been found to remain constant, no matter how much or how fast the Universe expands. This is due to the fact that vacuum energy is constantly added to the pool of dark energy as the Universe expands (since space has intrinsic vacuum energy), and hence the dark energy density remains constant. In the metric expansion, space, or more appropriately the spacetime fabric, is created in the extra-galactic regions. Space is not something devoid of other things. Space is the gravitational field. Like the electromagnetic field, the gravitational field that generates space is granular in character. The quanta of space are so incredibly small that we cannot sense them, just as we cannot feel the granular atoms that make up solids. Space granules are literally trillions of times smaller than atoms. Space granules, or space quanta, are not within space; the space quanta are the space. A branch of physics called 'loop quantum gravity' describes how space quanta make up space. When the Universe expands, space is produced with its spacetime quanta, and the total intrinsic dark energy increases. Although the evidence for the accelerated expansion of the Universe was baffling, it was not unexpected.
The Universe underwent very rapid expansion in the earliest phase of its existence, some 13.8 billion years ago; after that the expansion slowed down for billions of years, and then the accelerated expansion phase started about four or five billion years ago. When this expansion will stop, or even reverse, nobody knows. But it is definite that the Universe as a whole is not static; it is very much dynamic, vibrant and evolving. If anybody says that the Earth, Sun and Moon, and even the whole Universe, were created by some unknown Creator who then left the whole thing in a quiescent state, then there is every reason to question such unfounded claims and discard them as baseless.
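The earlier claim that the matter density dilutes while the dark energy density stays constant can be made concrete with a short sketch. It uses the present-day fractions quoted above (4.5% ordinary matter, 26.1% dark matter, 69.4% dark energy), treats dark energy as exactly vacuum-like, and ignores radiation, so the numbers are illustrative rather than precise.

```python
OMEGA_MATTER = 0.045 + 0.261   # ordinary + dark matter today, as fractions of the total
OMEGA_DARK_ENERGY = 0.694      # dark energy today

def density_fractions(a: float):
    """Relative energy densities at scale factor a (a = 1 today).
    Matter dilutes as a**-3; a vacuum-like dark energy density stays constant."""
    rho_matter = OMEGA_MATTER * a ** -3
    rho_dark_energy = OMEGA_DARK_ENERGY
    total = rho_matter + rho_dark_energy
    return rho_matter / total, rho_dark_energy / total

for a in (0.5, 1.0, 2.0):
    matter, dark_energy = density_fractions(a)
    print(f"a = {a}: matter {matter:.0%}, dark energy {dark_energy:.0%}")
```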
https://provakar.net/tag/dark-energy/
Planck results: first stars born later than expected
News about Dark Matter, neutrinos, the first stars and the cosmological model: the Planck collaboration, with a leading participation of the Institut d'Astrophysique Spatiale, has just published nearly twenty articles revealing many important results that will allow us to better understand major chapters in the book of the Universe. The history of the Universe is truly a cosmic saga that began some 13.8 billion years ago, one that scientists have been deciphering with increasing precision for decades. One of their principal sources of information is the Cosmic Microwave Background (CMB), a relic light dating back to a time when the Universe was very hot and dense, roughly 380,000 years after the Big Bang. Information collected by the Planck satellite after four years of observation of the CMB, including the finest snapshot of that relic radiation, has therefore been awaited with great anticipation by the scientific community. For the first time, its analysis now provides scientists with not only a static but also a truly dynamical picture of the young Universe, one that enables the exploration of all the cogs and springs of the cosmological model.
The first stars: more recent than expected. Among the new lessons deduced from the Planck data, cosmologists have determined the current expansion rate of the Universe. It has led them to estimate the present age of the Universe at 13.77 billion years. More surprisingly, their refined determination of the epoch of the birth of the first stars places it at about 550 million years after the Big Bang, which is much later than what scientists thought before. Finally, thanks to the high-precision Planck data, astrophysicists were able to pin down the precise content of the Universe, revealing that 4.9% of the energy budget is made of ordinary matter, 25.9% of Dark Matter of persistently elusive nature, and 69.2% of Dark Energy, yet another form of energy, distinct from but even more mysterious than Dark Matter. But the most important novelty has been brought by the information from the polarisation of the CMB light. Thanks to it, scientists are now able to test a number of assumptions they make about the Universe, pertaining both to the physical laws that govern its evolution and to the properties of its constituents (neutrinos and Dark Matter, for instance). Today, these new data provide scientists worldwide with particularly solid foundations for the exploration of the most remote epochs of cosmic history, ever closer to the Big Bang.
Understanding the cycle of matter in the interstellar medium. Another major domain in which Planck observations will allow the scientific community to deepen its knowledge concerns the magnetic field of our Galaxy. This is very important because the magnetic field is a key player in the life cycle of interstellar matter. The discovery of the magnetism of our Galaxy is closely related to that of cosmic rays. Without a magnetic field, these particles, accelerated by supernovae up to velocities approaching the speed of light, would quickly leave the interstellar medium. However, the force due to the presence of the magnetic field confines them to our Galaxy. In addition, the magnetic field itself is tightly bound to interstellar matter. Furthermore, in the presence of magnetic fields, collisions and coupling to the radiation field of stars tend to align interstellar dust grains, and that alignment is at the origin of the polarisation of the radiation emitted by the dust.
For the first time, the Planck satellite has measured this polarisation over the whole sky. It is well known that interstellar matter, magnetic fields and cosmic rays constitute a dynamical mixture: they are tightly coupled to each other, and we cannot fully understand any one of them without taking into account the other two. The importance of the magnetic field in this ménage à trois has been known for some time, but the observations available to scientists were rather scarce until now. The current results of the Planck mission will help in this context, since two unprecedented maps of the polarised sky have been released. Synchrotron polarisation, similarly to dust polarisation, traces the magnetic field lines. Thus, Planck data reveal the global structure of the Galactic magnetic field in exquisite detail, never seen before. Contacts at the IAS: Jean-Loup Puget, François Boulanger, Marc-Antoine Miville-Deschênes, Jonathan Aumont
https://www.ias.universite-paris-saclay.fr/en/content/planck-results-first-stars-born-later-expected
Cosmology I & II: expanding universe – hot early universe – nucleosynthesis – baryogenesis – cosmic microwave background (CMB) – structure formation – dark matter, dark energy – cosmic inflation
UNITS, NOTATION: c = ħ = k_B = 1; energy = mass = GeV; time = length = 1/GeV; Planck mass M_P = 1.22×10^19 GeV; Newton's constant G = 1/M_P^2; 1 eV ≈ 11,000 K; 1 s ~ 1/MeV^2; metric signature (1, −1, −1, −1)
Quantities, observables: Hubble rate = expansion rate of the universe = H; energy density of particle species x: ρ_x = E/V; number density n_x = N/V; relative He abundance Y = ⁴He/(H + ⁴He); baryon number of the universe (n_B − n_B̄)/n_γ; scattering cross section σ ~ [1/energy^2], (decay) rate Γ ~ [energy] ~ n·σ·v
(cont.) CMB temperature T(θ, φ) = T_0 + ΔT(θ, φ) (“CMB power spectrum”); galaxy–galaxy correlators (“large scale structure” = LSS); distant supernova luminosities
The starting point: expansion of the universe is very slow (changes are adiabatic): H << scattering rates → thermal equilibrium (+ some deviations from it: this is where the interesting physics lies). Need: statistical physics, particle physics, some general relativity.
History of cosmology: general theory of relativity 1916 – first mathematical theory of the universe – applied by Einstein in 1917 – problem: he thought that universe = Milky Way → overdense universe → must collapse → to recover a static universe he had to introduce a cosmological constant (did not work)
Theory develops: Willem de Sitter 1917 – solution to the Einstein equations assuming empty space: (exponential) expansion (but it can be expressed in stationary coordinates); Alexander Friedmann 1922 – solution to the Einstein equations with matter: no static solution – the universe is either expanding or collapsing
Observations: Henrietta Leavitt 1912 – Cepheids: luminosity and period related → standard candles; Hubble 1920s – 1923: the Andromeda nebula is a galaxy (the Mount Wilson 100-inch telescope sees Cepheids) – 1929: redshifts of 24 galaxies with independent distance estimates → the Hubble law v = Hd
Georges Lemaître 1927: “primeval atom” – cold beginning, a crumbling supernucleus (like radioactivity); George Gamow 1946–1948 – hot early universe (nuclear physics, as in the Sun) – Alpher, Gamow, Herman 1948: relic photons with a temperature today of 5 K – the idea was all but forgotten in the 1950s
Demise of the steady state: Fred Hoyle 1950s – “steady state theory”: the universe is infinite and looks the same everywhere – new matter created out of the vacuum → expansion (added a source term into the Einstein equations)
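The Planck-scale and natural-unit numbers quoted in these notes follow directly from the fundamental constants. A small verification sketch (SI constants; outputs are approximate):

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J / K
eV = 1.602176634e-19     # J
GeV = 1e9 * eV

# Planck mass M_P = sqrt(hbar * c / G), quoted on the slide as 1.22e19 GeV
m_planck_kg = math.sqrt(hbar * c / G)
m_planck_gev = m_planck_kg * c ** 2 / GeV
print(f"Planck mass ~ {m_planck_kg:.2e} kg ~ {m_planck_gev:.2e} GeV")

# Temperature equivalent of 1 eV (the slide's "1 eV = 11,000 K" shorthand)
print(f"1 eV ~ {eV / k_B:,.0f} K")   # closer to 11,600 K
```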
Cambridge 3C galaxy survey 1959 – radio galaxies do not follow the distribution predicted by steady state theory
Rediscovery of the Big Bang: Penzias & Wilson 1965, Bell Labs – testing the former Echo 6-metre radio antenna for use in radio astronomy (1964) – found 3 K noise that could not be accounted for – Dicke & Peebles in Princeton heard about the result → theoretical explanation: redshifted radiation from the time of matter–radiation decoupling (“recombination”) = CMB – thermal equilibrium → black-body spectrum – isotropic, homogeneous radiation: however, the universe has structure → the CMB must have spatial temperature variations of order 10^-5 K
Precision cosmology: COBE satellite – launch 1989, results in 1992 – scanned the microwave sky with 2 horns and compared the temperature differences – found temperature variations with amplitude 10^-5 K, resolution < 7° – balloon experiments at the end of the 90s (Maxima, Boomerang): first acoustic peak discovered – LSS surveys: 2dF etc. in the 90s; ongoing: Sloan Digital Sky Survey (SDSS)
WMAP 2003 – high-precision spectrum of temperature fluctuations – determination of all essential cosmological parameters with an accuracy of a few % – Big Bang nucleosynthesis 1980s → H, He, Li abundances – Planck Surveyor Mission 2007 (Finland participates)
Surprises/problems: dark matter (easy); dark energy (~ cosmological constant, very hard); cosmic inflation (great, but how?); baryogenesis (how? the Standard Model is not enough)
https://slideplayer.com/slide/6151698/
Has a Mystery Energy Field Hidden Since the Big Bang Been Activated? Was a dormant energy field that has lurked since the big bang now active, causing the expansion of the universe to accelerate? In the late 1990s, observations of supernovae revealed that the universe has been expanding faster and faster over the past few billion years. Christophe Ringeval of the Catholic University of Louvain (UCL) in Belgium and his colleagues theorize that a quintessence field could be linked to a phase in the universe's history called inflation. Quintessence is a hypothetical form of dark energy postulated as an explanation for observations of an accelerating universe. During that phase of inflation, fractions of a second after the big bang, space-time expanded exponentially. Inflation is thought to have occurred because of another scalar field that existed at the time. But what if another, much weaker quintessence field was also around during inflation? According to the UCL team's models, inflation would have induced quantum fluctuations in the quintessence field. When the universe began its more sedate expansion after inflation ended, the field and its fluctuations would have been frozen into the fabric of space-time, so that the energy density of the field did not change with time. "This field would have had no impact on the early universe, which would have been dominated by matter and radiation. But eventually, as the universe grew, its expansion rate slowed down and the influence of matter and radiation diminished, the relative strength of the quintessence field increased, causing the expansion of space-time to accelerate," says Ringeval. "The idea of mixing inflation with the dark energy problem is especially attractive," says Jérôme Martin of the Paris Institute of Astrophysics in France. But he adds that the "scenario needs additional calculations to be confirmed". The first test of the idea could come as early as next year. The European Space Agency's Planck satellite is looking for signs of gravitational waves – fluctuations in the fabric of space-time caused by inflation. These would be imprinted in the cosmic microwave background (CMB), the radiation left over from the big bang, which the satellite will measure. Ringeval and colleagues calculated what the strength of a quintessence field should be today, and worked backwards to estimate when inflation should have occurred. They found that it must have happened when the energy of the universe was in the teraelectronvolt (TeV) range. That would produce gravitational waves too weak to be detected by Planck, so if it does find evidence of them "our model will be destroyed", says Ringeval.
https://dailygalaxy.com/2010/08/has-an-energy-field-hidden-since-the-big-bang-activated/
Can we form a time from c, ħ and G? And a length, mass and temperature?
t_P = (ħG/c^5)^(1/2) ≈ 5×10^-44 s
l_P = c·t_P = (ħG/c^3)^(1/2) ≈ 1.6×10^-35 m
M_P = (ħc/G)^(1/2) ≈ 2×10^-8 kg
T_P = (ħc^5/(G·k_B^2))^(1/2) ≈ 1.4×10^32 K
(CERN's outreach posters) Standard Model of Cosmology: At the instant of the Big Bang, all the matter in the Universe was condensed into a single point*. Other than that, we know nothing about what went on in the first instants of the Universe's existence. But by looking far out into today's Universe and peering deeply into the world of fundamental particles, scientists have managed to piece together the evolution of the universe from the inconceivably short time of just 10^-43 seconds after the Big Bang. The father of the Big Bang was the Belgian Jesuit priest Lemaître, who found in 1927 a solution in which the Universe starts out with a Big Bang, as it was later called. *The Big Bang theory is a cosmological model, developed in the 20th century, based on the 1916 theory of General Relativity applied to the Universe, and motivated by the observation of the expansion of the Universe by Hubble.
The very beginning: the spectral lines of a moving star or galaxy are Doppler-shifted by Δλ = λ_obs − λ_lab. The redshift is z = Δλ/λ_lab and thus the velocity is v = cz. Hubble used the new Mt. Wilson telescope to observe variable stars in the nearby Andromeda nebula. He realized that the fuzzy patches called nebulae were actually distant galaxies, outside of our own Milky Way. This implies a uniform and homogeneous expansion of the Universe with time! Discovery of the expanding universe: from his data, Hubble deduced the relation v = H_0·d = cz, where v is the velocity from spectral line measurement, d the distance to the object, H_0 the Hubble constant in km s^-1 Mpc^-1, and z the redshift. (Andromeda nebula)
At that point in time, things were happening very fast. When the Universe was 10^-43 seconds old, Nature's forces were indistinguishable*. Particles of matter and antimatter (the white circles in the picture) existed in equal portions. They were constantly annihilating to produce radiation, represented by red spirals, and being recreated from that radiation. Matter was compressed so densely that even light could not travel far and the Universe was opaque. Just before this time physicists think that the Universe expanded at a dizzying rate. This period of so-called Cosmic Inflation** is necessary in the Big Bang theory to explain the large-scale uniformity of the Universe today. *This is the (unproven) hypothesis of the existence of a Grand Unified Theory of particle physics. **This (unproven) theory of inflation is due to Alan Guth (1981).
Inflation. During the next phase of the Universe's existence, up to around 10^-34 seconds after the Big Bang, the strong force that binds particles called quarks together into protons and neutrons became distinct from the electromagnetic and weak forces, which remained indistinguishable. Protons and neutrons did not start to form, however, because any groupings of quarks were rapidly broken up by the high-energy radiation that still pervaded the Universe. Matter was a sort of high-density cosmic soup* called the quark-gluon plasma.
The carriers of the weak force, W and Z particles, were as abundant as photons and they behaved in the same way. To understand this phase of the Universe's existence fully, physicists try to recreate the quark-gluon plasma in the laboratory, for instance at Brookhaven's Relativistic Heavy Ion Collider (RHIC). *The strong interaction between quarks & gluons has some very curious properties: at short distances quarks & gluons do not interact (this is the famous asymptotic freedom); quarks and gluons are confined (locked up) in protons, neutrons and other nuclear particles. (Quark-gluon plasma; collision at RHIC)
Also around this time a tiny excess of matter over antimatter, just one extra matter particle for every thousand million particles that annihilated with antimatter, began to develop. It is these survivors that make up our Universe today*. The precise mechanism that has allowed some matter to survive is poorly understood up to now, but it is another phenomenon that will be studied in depth at the LHC and elsewhere. *This profound idea of baryogenesis is due to Andrei Sakharov. (Baryogenesis)
Between 10^-34 seconds and 10^-10 seconds the electromagnetic and weak forces separated*. There was no longer enough energy to produce W and Z particles, and those that had already been made decayed away. The energy of the radiation had also fallen sufficiently to allow protons (red) and neutrons (green) to form**, as well as short-lived particles, called mesons, made of a quark and an antiquark (blue). Antimatter started to disappear because when quarks annihilated with antiquarks there was no longer enough energy in the radiation to recreate them. Particle physics experiments have begun to probe back in time as far as this by crashing particles together with enough energy to recreate the conditions of the early Universe at laboratories like CERN. *According to the unified electroweak theory, formulated by Glashow, Weinberg & Salam, and proven consistent by 't Hooft & Veltman. **The proof of this quark confinement in QCD is still lacking (prize: $1M). (Forces separate)
Up to about 10^-5 seconds, proton and neutron building continued. The remaining antimatter, in the form of positrons, disappeared as the radiation energy density dropped below that necessary to create electron-positron pairs. With no antimatter left in the Universe other than a few particles locked up inside mesons, all that is left is the one-in-a-thousand-million matter particles* resulting from Nature's apparent preference for matter. *The Standard Model of particle physics fails to explain this: it does not have enough breaking of a mirror symmetry called CP-invariance. (Antimatter disappears)
After that, things really started to slow down. Up to around three minutes*, protons and neutrons started to combine to produce light atomic nuclei. Only deuterium (heavy hydrogen), helium and a tiny amount of lithium were made. The Universe was like a giant thermonuclear reactor until, at around three minutes, the reactions stopped, leaving a Universe composed of hydrogen, deuterium, helium, and a little lithium**. Even today, the Universe is made up of about 75% hydrogen and 25% helium, with just traces of heavier elements cooked up in stars to make everything else that we consider to be "ordinary" matter. *cf. S. Weinberg: The First Three Minutes. **This primordial or Big Bang Nucleosynthesis provides very strong support for the Big Bang model.
(Nucleosynthesis) Agreement of the abundances over 10 orders of magnitude is a major success of the Big Bang. Observational concordance: η = n_B/n_γ = (4±1)×10^-10; from the CMB, n_γ = 411 cm^-3. Conclusion of BBN: most matter is not nucleons. (Abundances of the light elements) After about 5 minutes, the elemental composition of the Universe remains unchanged until the first stars form hundreds of millions of years later. (Time evolution of BBN) Most protons remain free; most neutrons end up in ⁴He nuclei; the rest of the neutrons decay away.
During the next 380,000 years the Universe became transparent, as photons no longer interacted* as soon as they were made. Electrons became captured by the hydrogen, deuterium, helium and lithium nuclei to form the first atoms. The CMB anisotropies contain important information about cosmological parameters and will be measured with even higher precision by Planck, launched in 2009. *These photons cooled as the Universe expanded, resulting in today's Cosmic Microwave Background of 2.7 kelvin, a nearly perfect black-body spectrum. The CMB was discovered by accident in 1965 by Penzias & Wilson. Fluctuations in the CMB were detected by the COBE satellite and studied in detail by the WMAP & Planck satellite missions. (Atom formation)
(First observation of the CMB: Bell Labs, Wilson & Penzias, plus Robert Dicke; COBE 1992) Discovered in 1965 as excess noise (Nobel Prize in 1978). George Gamow (1904-1968): Gamow (1946) and Alpher & Herman (1949) predict 5 K relic radiation from the Big Bang. (Cosmic Microwave Background) The microwave light captured in this picture is from 379,000 years after the Big Bang, over 13 billion years ago: the equivalent of taking a picture of an 80-year-old person on the day of their birth. (Isotropy of the universe; Planck results) *Then the dark ages start, which only end when clustering of matter leads to the re-ionization period and the first generation of stars and, later, galaxies. This period will be studied in depth by LOFAR, a digital radio telescope under construction in Drenthe, designed to detect the 21 cm line of hydrogen redshifted to low radio frequencies.
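The conclusion on the nucleosynthesis slide, that most matter is not nucleons, follows directly from the quoted numbers. A rough sketch (the critical density assumes H0 near 70 km/s/Mpc, and the slide's η is an older BBN estimate):

```python
eta = 4e-10             # baryon-to-photon ratio from the slide (modern CMB value is nearer 6e-10)
n_gamma = 411.0         # CMB photons per cubic centimetre
m_proton = 1.6726e-24   # grams

n_baryon = eta * n_gamma           # baryons per cm^3, about 1.6e-7
rho_baryon = n_baryon * m_proton   # g per cm^3, about 2.7e-31

rho_critical = 9.2e-30             # g per cm^3, assuming H0 ~ 70 km/s/Mpc
print(f"Omega_baryon ~ {rho_baryon / rho_critical:.2f}")
# A few percent of the critical density, far below the ~30% total matter density:
# hence "most matter is not nucleons".
```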
https://pdfslide.net/documents/11-cosmology.html
A successor to the standard hot big-bang cosmology is emerging. It greatly extends the highly successful hot big-bang model. A key element of the New Standard Cosmology is dark energy, the causative agent for accelerated expansion. Dark energy is just possibly the most important problem in all of physics. The only laboratory up to the task of studying dark energy is the Universe itself.
1 The New Cosmology
Cosmology is enjoying the most exciting period of discovery ever. Over the past three years a new, improved standard cosmology has been emerging. It incorporates the highly successful standard hot big-bang cosmology and extends our understanding of the Universe to extremely early times, when the largest structures in the Universe were still subatomic quantum fluctuations. This New Standard Cosmology is characterized by
- Flat, accelerating Universe
- Early period of rapid expansion (inflation)
- Density inhomogeneities produced from quantum fluctuations during inflation
- Composition: 2/3rds dark energy; 1/3rd dark matter; 1/200th bright stars
- Matter content: cold dark matter (the dominant component); baryons (a few percent of the total); neutrinos (a small fraction)
The New Standard Cosmology is certainly not as well established as the standard hot big bang. However, the evidence is mounting. With the recent DASI observations, the evidence for flatness is now quite firm: Ω_0 ≈ 1 to within a few percent. As I discuss below, the evidence for accelerated expansion is also very strong. The existence of acoustic peaks in the CMB power spectrum and the evidence for a nearly scale-invariant spectrum of primeval density perturbations (spectral index n ≈ 1) is exactly what inflation predicts (along with a flat Universe). CMB anisotropy measurements (by MAP, Planck and a host of other experiments) as well as precision measurements of large-scale structure coming soon from the SDSS and 2dF will test inflation much more stringently. The striking agreement of the BBN determination of the baryon density (from D/H measurements [3, 4]) with recent CMB anisotropy measurements makes a strong case for a small baryon density compared to the total matter density, Ω_B ≪ Ω_M. The many successes of the cold dark matter scenario – from the sequence of structure formation (galaxies first, clusters of galaxies and larger objects later) and the structure of the intergalactic medium to its ability to reproduce the power spectrum of inhomogeneity measured today – make it clear that CDM holds much, if not all, of the truth in describing the formation of structure in the Universe. Cosmological measurements and observations over the next decade or more will test (and probably refine) the New Standard Cosmology. If we are fortunate, they will also help us to make sense of it all. The most pressing item to make sense of is dark energy. Its deep connections to fundamental physics – a new form of energy with repulsive gravity and possible implications for the divergences of quantum theory and supersymmetry breaking – put it very high on the list of outstanding problems in particle physics.
2 Dark Energy
Dark energy is my term for the causative agent of the current epoch of accelerated expansion. According to the second Friedmann equation,
ä/a = −(4πG/3)(ρ + 3p),    (1)
this stuff must have negative pressure, with magnitude comparable to its energy density, in order to produce accelerated expansion [recall that ρ and p are the total energy density and pressure and that a(t) is the cosmic scale factor]. Further, since this mysterious stuff does not show its presence in galaxies and clusters of galaxies, it must be relatively smoothly distributed.
That being said, dark energy has the following defining properties: (1) it emits no light; (2) it has large, negative pressure, $p_X \sim -\rho_X$; and (3) it is approximately homogeneous (more precisely, does not cluster significantly with matter on scales at least as large as clusters of galaxies). Because its pressure is comparable in magnitude to its energy density, it is more “energy-like” than “matter-like” (matter being characterized by $p \ll \rho$). Dark energy is qualitatively very different from dark matter. It has been said that the sum total of progress in understanding the acceleration of the Universe is naming the causative agent. While not too far from the truth, there has been progress which I summarize below.

3 Dark Energy: Seven Lessons

3.1 Two lines of evidence for an accelerating Universe

Two lines of evidence point to an accelerating Universe. The first is the direct evidence based upon measurements of type Ia supernovae carried out by two groups, the Supernova Cosmology Project and the High-$z$ Supernova Team. These two teams used different analysis techniques and different samples of high-$z$ supernovae and came to the same conclusion: the Universe is speeding up, not slowing down. The recent discovery of a supernova at $z \sim 1.7$ bolsters the case significantly and provides the first evidence for an early epoch of decelerated expansion. SN 1997ff falls right on the accelerating Universe curve on the magnitude–redshift diagram, and is a magnitude brighter than expected in a dusty open Universe or an open Universe in which type Ia supernovae are systematically fainter at high redshift.

The second, independent line of evidence for the accelerating Universe comes from measurements of the composition of the Universe, which point to a missing energy component with negative pressure. The argument goes like this. CMB anisotropy measurements indicate that the Universe is flat, $\Omega_0 \approx 1$. In a flat Universe, the matter density and energy density must sum to the critical density. However, matter only contributes about 1/3rd of the critical density, $\Omega_M \approx 1/3$. (This is based upon measurements of CMB anisotropy, of bulk flows, and of the baryonic fraction in clusters.) Thus, two thirds of the critical density is missing!

In order to have escaped detection this missing energy must be smoothly distributed. In order not to interfere with the formation of structure (by inhibiting the growth of density perturbations) the energy density in this component must change more slowly than matter (so that it was subdominant in the past). For example, if the missing 2/3rds of critical density were smoothly distributed matter ($w = 0$), then linear density perturbations would grow as $a^{1/2}$ rather than as $a$. The shortfall in growth since last scattering ($z_{\rm LS} \simeq 1100$) would be a factor of $\sim 30$, far too little growth to produce the structure seen today.

The pressure associated with the missing energy component determines how it evolves:

$\rho_X \propto a^{-3(1+w)}$,   (2)

where $w \equiv p_X/\rho_X$ is the ratio of the pressure of the missing energy component to its energy density (here assumed to be constant). Note, the more negative $w$, the faster the ratio of missing energy to matter goes to zero in the past. In order to grow the structure observed today from the density perturbations indicated by CMB anisotropy measurements, $w$ must be more negative than about $-\tfrac{1}{2}$. For a flat Universe the deceleration parameter today is $q_0 = \tfrac{1}{2} + \tfrac{3}{2}\,w\,\Omega_X$. Therefore, knowing $\Omega_X \approx 2/3$ and $w \lesssim -\tfrac{1}{2}$ implies $q_0 < 0$ and accelerated expansion.

3.2 Gravity can be repulsive in Einstein’s theory, but …

In Newton’s theory mass is the source of the gravitational field and gravity is always attractive.
In general relativity, both energy and pressure source the gravitational field. This fact is reflected in Eq. 1. Sufficiently large negative pressure leads to repulsive gravity. Thus, accelerated expansion can be accommodated within Einstein’s theory. Of course, that does not preclude that the ultimate explanation for accelerated expansion lies in a fundamental modification of Einstein’s theory.

Repulsive gravity is a stunning new feature of general relativity. It leads to a prediction every bit as revolutionary as black holes – the accelerating Universe. If the explanation for the accelerating Universe fits within the Einsteinian framework, it will be an important new triumph for general relativity.

3.3 The biggest embarrassment in theoretical physics

Einstein introduced the cosmological constant to balance the attractive gravity of matter. He quickly discarded the cosmological constant after the discovery of the expansion of the Universe. Whether or not Einstein appreciated that his theory predicted the possibility of repulsive gravity is unclear.

The advent of quantum field theory made consideration of the cosmological constant obligatory, not optional: the only possible covariant form for the energy of the (quantum) vacuum, $T^{\mu\nu}_{\rm VAC} = \rho_{\rm VAC}\,g^{\mu\nu}$, is mathematically equivalent to the cosmological constant. It takes the form of a perfect fluid with energy density $\rho_{\rm VAC}$ and isotropic pressure $p_{\rm VAC} = -\rho_{\rm VAC}$ (i.e., $w = -1$) and is precisely spatially uniform. Vacuum energy is almost the perfect candidate for dark energy.

Here is the rub: the contributions of well-understood physics (say up to the GeV scale) to the quantum-vacuum energy add up to $\sim 10^{55}$ times the present critical density. (Put another way, if this were so, the Hubble time would be $\sim 10^{-10}$ sec, and the associated event horizon would be 3 cm!) This is the well known cosmological-constant problem [12, 13].

While string theory currently offers the best hope for a theory of everything, it has shed precious little light on the problem, other than to speak to the importance of the problem. Thomas has suggested that using the holographic principle to count the available number of states in our Hubble volume leads to an upper bound on the vacuum energy that is comparable to the energy density in matter + radiation. While this reduces the magnitude of the cosmological-constant problem very significantly, it does not solve the dark energy problem: a vacuum energy that is always comparable to the matter + radiation energy density would strongly suppress the growth of structure. The deSitter space associated with the accelerating Universe poses serious problems for the formulation of string theory. Banks and Dine argue that all explanations for dark energy suggested thus far are incompatible with perturbative string theory. At the very least there is high tension between accelerated expansion and string theory.

The cosmological constant problem leads to a fork in the dark-energy road: one path is to wait for theorists to get the “right answer” (i.e., a quantum-vacuum energy equal to the observed dark-energy density); the other path is to assume that even quantum nothingness weighs nothing and something else with negative pressure must be causing the Universe to speed up. Of course, theorists follow the advice of Yogi Berra: where you see a fork in the road, take it.
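As a rough, hedged illustration of the mismatch described in this section, the sketch below expresses the critical density as an energy scale and compares it with a naive vacuum-energy estimate of E_cutoff^4; the assumed H0 = 70 km/s/Mpc and 1 GeV cutoff are not taken from the paper, and the paper's own bookkeeping may well differ.

```python
# Back-of-envelope version of the cosmological-constant mismatch discussed above:
# compare the critical density (expressed as an energy scale) with a naive
# quantum-vacuum estimate rho_vac ~ E_cutoff^4.  H0 = 70 km/s/Mpc and the 1 GeV
# cutoff are assumptions for illustration; the paper's exact bookkeeping may differ.
import math

G      = 6.674e-11                  # m^3 kg^-1 s^-2
c      = 2.998e8                    # m/s
hbar_c = 1.973e-7                   # eV*m (hbar*c in convenient units)
H0     = 70e3 / 3.086e22            # 70 km/s/Mpc in s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)                    # kg/m^3
rho_crit_eV4 = (rho_crit * c**2 / 1.602e-19) * hbar_c**3    # energy density in eV^4
scale_eV = rho_crit_eV4**0.25                               # ~2.5e-3 eV
print(f"critical density as an energy scale: {scale_eV:.1e} eV")

E_cutoff = 1e9                                              # 1 GeV in eV (assumed cutoff)
print(f"naive vacuum/critical ratio: {(E_cutoff / scale_eV)**4:.1e}")  # tens of orders of magnitude
```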
3.4 Parameterizing dark energy: for now, it’s $w$

Theorists have been very busy suggesting all kinds of interesting possibilities for the dark energy: networks of topological defects, rolling or spinning scalar fields (quintessence and spintessence), influence of “the bulk”, and the breakdown of the Friedmann equations [13, 18]. An intriguing recent paper suggests dark matter and dark energy are connected through axion physics. In the absence of compelling theoretical guidance, there is a simple way to parameterize dark energy, by its equation-of-state $w \equiv p_X/\rho_X$.

The uniformity of the CMB testifies to the near isotropy and homogeneity of the Universe. This implies that the stress-energy tensor for the Universe must take the perfect-fluid form. Since dark energy dominates the energy budget, its stress-energy tensor must, to a good approximation, take the form

$T^{\mu}{}_{\nu} \approx \mathrm{diag}(\rho_X,\,-p_X,\,-p_X,\,-p_X)$,   (3)

where $p_X$ is the isotropic pressure and the desired dark-energy density is $\rho_X \simeq \tfrac{2}{3}\rho_{\rm crit}$ (for $\Omega_X \simeq 2/3$). This corresponds to a tiny energy scale, $\rho_X^{1/4} \sim 10^{-3}$ eV. The pressure can be characterized by its ratio to the energy density (or equation-of-state), $w \equiv p_X/\rho_X$, which need not be constant; e.g., it could be a function of the underlying field or an explicit function of time or redshift. (Note, $w(t)$ can always be rewritten as an implicit function of redshift.) For vacuum energy $w = -1$; for a network of topological defects $w = -n/3$, where $n$ is the dimensionality of the defects (1 for strings, 2 for walls, etc.). For a minimally coupled, rolling scalar field,

$w_\phi = \dfrac{\tfrac{1}{2}\dot{\phi}^2 - V(\phi)}{\tfrac{1}{2}\dot{\phi}^2 + V(\phi)}$,   (4)

which is time dependent and can vary between $-1$ (when potential energy dominates) and $+1$ (when kinetic energy dominates). Here $V(\phi)$ is the potential for the scalar field. I believe that for the foreseeable future getting at the dark energy will mean trying to measure its equation-of-state, $w$.

3.5 The Universe: the lab for studying dark energy

Dark energy by its very nature is diffuse and a low-energy phenomenon. It probably cannot be produced at accelerators; it isn’t found in galaxies or even clusters of galaxies. The Universe itself is the natural lab – perhaps the only lab – in which to study it.

The primary effect of dark energy on the Universe is on the expansion rate. The first Friedmann equation can be written as

$H^2(z) = H_0^2\left[\Omega_M (1+z)^3 + \Omega_X \exp\!\left(3\int_0^z \frac{1+w(z')}{1+z'}\,dz'\right)\right]$,   (5)

where $\Omega_M$ ($\Omega_X$) is the fraction of critical density contributed by matter (dark energy) today, a flat Universe is assumed, and the dark-energy term follows from energy conservation, $d(\rho_X a^3) = -p_X\,d(a^3)$. For constant $w$ the dark-energy term is simply $\Omega_X (1+z)^{3(1+w)}$. Note that for a flat Universe $H(z)$ depends upon only two parameters: $\Omega_M$ and $w(z)$.

While $H(z)$ is probably not directly measurable (however see Ref. ), it does affect two observable quantities: the (comoving) distance $r(z)$ to an object at redshift $z$, and the growth of (linear) density perturbations, governed by

$\ddot{\delta}_k + 2H\dot{\delta}_k - 4\pi G\rho_M\,\delta_k = 0$,

where $\delta_k$ is the Fourier component of comoving wavenumber $k$ and overdot indicates a derivative with respect to time. The comoving distance can be probed by standard candles (e.g., type Ia supernovae) through the classic cosmological observable, luminosity distance $d_L = (1+z)\,r(z)$. It can also be probed by counting objects of a known intrinsic comoving number density, through the comoving volume element, $dV/dz\,d\Omega = r^2(z)/H(z)$. Both galaxies and clusters of galaxies have been suggested as objects to count. For each, their comoving number density evolves (in the case of clusters very significantly). However, it is believed that much, if not all, of the evolution can be modelled through numerical simulations and semi-analytical calculations in the CDM picture.
In the case of clusters, evolution is so significant that the number count test probe is affected by dark energy through both and the growth of perturbations, with the latter being the dominant effect. The various cosmological approaches to ferreting out the nature of the dark energy have been studied extensively (see other articles in this Yellow Book). Based largely upon my work with Dragan Huterer , I summarize what we know about the efficacy of the cosmological probes of dark energy: - Present cosmological observations prefer , with a 95% confidence limit . - Because dark energy was less important in the past, as , and the Hubble flow at low redshift is insensitive to the composition of the Universe, the most sensitive redshift interval for probing dark energy is . - The CMB has limited power to probe (e.g., the projected precision for Planck is ) and no power to probe its time variation . - A high-quality sample of 2000 SNe distributed from to could measure to a precision (assuming an irreducible error of 0.14 mag). If is known independently to better than , improves by a factor of three and the rate of change of can be measured to precision . - Counts of galaxies and of clusters of galaxies may have the same potential to probe as SNe Ia. The critical issue is systematics (including the evolution of the intrinsic comoving number density, and the ability to identify galaxies or clusters of a fixed mass) . - Measuring weak gravitational lensing by large-scale structure over a field of 1000 square degrees (or more) could have comparable sensitivity to as type Ia supernovae. However, weak gravitational lensing does not appear to be a good method to probe the time variation of . The systematics associated with weak gravitational lensing have not yet been studied carefully and could limit its potential. - Some methods do not look promising in their ability to probe because of irreducible systematics (e.g., Alcock – Paczynski test and strong gravitational lensing of QSOs). However, both could provide important independent confirmation of accelerated expansion. 3.6 Why now?: the Nancy Kerrigan problem A critical constraint on dark energy is that it not interfere with the formation of structure in the Universe. This means that dark energy must have been relatively unimportant in the past (at least back to the time of last scattering, ). If dark energy is characterized by constant , not interfering with structure formation can be quantified as: . This means that the dark-energy density evolves more slowly than (compared to for matter) and implies That is, in the past dark energy was unimportant and in the future it will be dominant! We just happen to live at the time when dark matter and dark energy have comparable densities. In the words of Olympic skater Nancy Kerrigan, “Why me? Why now?” Perhaps this fact is an important clue to unraveling the nature of the dark energy. Perhaps not. And God forbid, it could be the basis of an anthropic explanation for the size of the cosmological constant. 3.7 Dark energy and destiny Almost everyone is aware of the connection between the shape of the Universe and its destiny: positively curved recollapses, flat; negatively curved expand forever. The link between geometry and destiny depends upon a critical assumption: that matter dominates the energy budget (more precisely, that all components of matter/energy have equation of state ). Dark energy does not satisfy this condition. 
In a Universe with dark energy the connection between geometry and destiny is severed. A flat Universe (like ours) can continue expanding exponentially forever with the number of visible galaxies diminishing to a few hundred (e.g., if the dark energy is a true cosmological constant); the expansion can slow to that of a matter-dominated model (e.g., if the dark energy dissipates and becomes subdominant); or, it is even possible for the Universe to recollapse (e.g., if the dark energy decays revealing a negative cosmological constant). Because string theory prefers anti-deSitter space, the third possibility should not be forgotten. Dark energy holds the key to understanding our destiny!

4 The Challenge

As a New Standard Cosmology emerges, a new set of questions arises. (Assuming the Universe inflated) What is the physics underlying inflation? What is the dark-matter particle? How was the baryon asymmetry produced? Why is the recipe for our Universe so complicated? What is the nature of the Dark Energy? All of these questions have two things in common: making sense of the New Standard Cosmology and the deep connections they reveal between fundamental physics and cosmology.

Of these new, profound cosmic questions, none is more important or further from resolution than the nature of the dark energy. Dark energy could well be the number one problem in all of physics and astronomy. The big challenge for the New Cosmology is making sense of dark energy. Because of its diffuse character, the Universe is likely the lab where dark energy can best be attacked (though one should not rule out other approaches – e.g., if the dark energy involves a light scalar field, then there should be a new long-range force). While type Ia supernovae look particularly promising – they have a track record and can in principle be used to map out $w(z)$ – there are important open issues. Are they really standardizable candles? Have they evolved? Is the high-redshift population the same as the low-redshift population?

The dark-energy problem is important enough that pursuing complementary approaches is both justified and prudent. Weak-gravitational lensing shows considerable promise. While beset by important issues involving number evolution and the determination of galaxy and cluster masses, counting galaxies and clusters of galaxies should also be pursued. Two realistic goals for the next decade are the determination of $w$ to 5% and looking for time variation. Achieving either has the potential to rule out a cosmological constant: for example, by measuring a significant time variation of $w$ or by pinning $w$ at a value away from $-1$. Such a development would be a remarkable, far reaching result.

After determining the equation-of-state of the dark energy, the next step is measuring its clustering properties. A cosmological constant is spatially constant; a rolling scalar field clusters slightly on very large scales. Measuring its clustering properties will not be easy, but it provides an important, new window on dark energy. We do live at a special time: There is still enough light in the Universe to illuminate its dark side.

Acknowledgments. I thank Eric Linder for useful comments. This work was supported by the DoE (at Chicago and Fermilab) and by the NASA (at Fermilab by grant NAG 5-7092).

References - See e.g., S. Weinberg, Gravitation and Cosmology (Wiley & Sons, NY, 1972); or E.W. Kolb and M.S. Turner, The Early Universe (Addison-Wesley, Redwood City, CA, 1990) - P. de Bernardis et al, Nature 404, 955 (2000); S. Hanany et al, Astrophys. J.
545, L5 (2000); C.B. Netterfield et al, astro-ph/0104460; C. Pryke et al, astro-ph/0104490 - D. Tytler et al, Physica Scripta T85, 12 (2000); J.M. O’Meara et al, Astrophys. J. 552, 718 (2001).
https://www.arxiv-vanity.com/papers/astro-ph/0108103/
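A minimal numerical sketch of how Eq. (5) of the paper above feeds the observables it lists (comoving distance, luminosity distance, and the dark-energy-to-matter ratio), assuming a flat universe with constant w; the parameter values and example redshifts are illustrative assumptions, not values taken from the paper.

```python
# Numerical sketch of Eq. (5) above and the two observables it feeds, for constant w.
# Omega_M = 0.3, H0 = 70 km/s/Mpc and the example redshifts are assumptions chosen
# for illustration.
import math

H0_kms_Mpc = 70.0
Omega_M, w = 0.3, -1.0
c_km_s = 2.998e5

def E(z):                                   # H(z)/H0 for a flat universe, constant w
    return math.sqrt(Omega_M * (1 + z)**3 + (1 - Omega_M) * (1 + z)**(3 * (1 + w)))

def comoving_distance(z, steps=2000):       # r(z) = c * int_0^z dz'/H(z'), in Mpc
    dz = z / steps
    return sum(c_km_s / (H0_kms_Mpc * E((i + 0.5) * dz)) * dz for i in range(steps))

for z in (0.5, 1.0, 1.7):
    r = comoving_distance(z)
    dL = (1 + z) * r                        # luminosity distance for a flat universe
    ratio = (1 - Omega_M) / Omega_M * (1 + z)**(3 * w)   # rho_X / rho_M at that z
    print(f"z={z:3.1f}  r={r:7.0f} Mpc  d_L={dL:7.0f} Mpc  rho_X/rho_M={ratio:.2f}")
```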
Giant 3-D Map of 1.2 Million Galaxies Will Shed Light on ‘Dark Energy’ Scientists have developed the largest-ever three-dimensional map of distant galaxies to measure one of the universe's most mysterious forces. An international team of astronomers from the Sloan Digital Sky Survey III (SDSS-III) worked together to create the 3-D map of 1.2 million galaxies. The map was used to make one of the most accurate measurements yet of "dark energy," which is the force behind the accelerated expansion of the universe. "We have spent a decade collecting measurements of 1.2 million galaxies over one quarter of the sky to map out the structure of the Universe over a volume of 650 cubic billion light years," Dr. Jeremy Tinker of New York University, co-leader of the scientific team, said in a news release. The new measurements were carried out by the Baryon Oscillation Spectroscopic Survey (BOSS) program of SDSS-III. Shaped by a continuous tug-of-war between dark matter and dark energy, the map allows astronomers to measure the expansion rate of the universe and determine the amount of matter and dark energy that make up the present-day universe. The expansion rate is measured by determining the size of the baryonic acoustic oscillations (BAO) in the 3-D model of the galaxies. The original BAO size is determined by pressure waves that traveled through the young universe up until it was only 400,000 years old (the universe is currently 13.8 billion years old), at which point they become frozen in the matter distribution of the universe, scientists said. By measuring the distribution of galaxies using this time frame, astronomers can make precise measurements on how dark matter and dark energy have competed to govern the rate of expansion of the universe. "If dark energy has been driving the expansion of the Universe over that time, our maps tells us that it is evolving very slowly, if at all. The change is at most 20 per cent over the past seven billion years," Dr. Florian Beutler of the University of Portsmouth's Institute of Cosmology and Gravitation who was involved in the study, said in a report in Phys.org. Much of what astronomers know about the relative contributions of dark matter and dark energy comes from the leftover radiation from the Big Bang theory, which is called the cosmic microwave background (CMB). But the new survey allowed scientists to measure dark energy from before the previously defined 5 billion years, starting from 7 billion years ago up to near the present day, 2 billion years ago. The map also reveals the distinctive signature of the coherent movement of galaxies toward regions of the Universe with more matter, due to the attractive force of gravity. The observed amount of infall is explained well by the predictions of general relativity. The agreement supports the idea that the acceleration of the expansion rate is caused by a large-scale phenomenon, such as dark energy, and not a breakdown of our gravitational theory. A detailed description of the results of the study will be published in the Monthly Notices of the Royal Astronomical Society.
https://www.natureworldnews.com/articles/25310/20160715/giant-3d-map-million-galaxies-dark-energy.htm
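The article expresses the BOSS result in lookback time rather than redshift. Assuming a flat LCDM background with Omega_M = 0.3 and H0 = 70 km/s/Mpc (values not taken from the article), a short sketch converts its "7 billion years ago" and "2 billion years ago" into approximate redshifts.

```python
# Convert the article's lookback times to approximate redshifts, assuming a flat
# LCDM model with Omega_M = 0.3 and H0 = 70 km/s/Mpc (illustrative parameters).
import math

H0_Gyr = 70.0 / 978.0          # 70 km/s/Mpc in 1/Gyr (1 km/s/Mpc ~ 1/978 Gyr)
Omega_M = 0.3

def lookback_time(z, steps=4000):           # t_lb(z) = int_0^z dz' / [(1+z') H(z')]
    dz = z / steps
    t = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        Hz = H0_Gyr * math.sqrt(Omega_M * (1 + zp)**3 + (1 - Omega_M))
        t += dz / ((1 + zp) * Hz)
    return t                                # in Gyr

for t_target in (2.0, 7.0):                 # the article's lookback times, in Gyr
    z = 0.0
    while lookback_time(z) < t_target:      # crude scan; fine for a sketch
        z += 0.01
    print(f"lookback {t_target:.0f} Gyr  ->  z ~ {z:.2f}")
```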
Using the latest data from the Planck and WMAP satellites, the laboratory CosmoStat (LCS) of CEA-IRFU just provides the most complete and accurate picture of the diffuse microwave background of the universe considered to be the primary light emitted at the beginning of the expansion. The new map of the diffuse background was built thanks to a new method of separating components called LGMCA particularly well suited to the separation of galactic foregrounds that blur the background image. Unlike previous results, the map restores the details of the diffuse background across the entire sky including the Galactic plane region of the sky where the estimate is particularly difficult. It is also more effective in reducing the defects introduced by the existence of hot gas in galaxy clusters. These results are in press in Astronomy & Astrophysics and were presented at the conference "Horizon of Statistics" on January 21, 2014 at the Institut Henri Poincare (Paris). The cosmic microwave background or CMB (Cosmic Microwave Background) is a very low temperature radiation (2.7 K or -270 ° C) that fills the entire sky and is detectable in the field of millimeter waves or microwaves located between infrared and radio waves. It is interpreted as a "fossil" light emitted about 370,000 years after the beginning of the expansion when the universe became transparent and neutral. At this time called "recombination", the capture of electrons by atomic nuclei has left the field open to the light which thus became decoupled from matter. This fossil radiation has travelled through the whole universe since the recombination and therefore carries information on the early universe as well as the matter and energy content of the universe and the history of expansion since this time. Its analysis with very high precision is critical for the determination of cosmological parameters. In March 2013, the ESA Planck satellite has provided the most accurate measurements of the background radiation, following the data previously collected by the WMAP satellite of NASA. The map of the background radiation (CMB) provided by the scientists of the CosmoStat laboratory was obtained from the joint processing of Planck and WMAP data. Constructing an accurate map requires prior subtraction of all parasites foreground emission, coming largely from our own galaxy. Map of background radiation (CMB) in the direction of the center of our galaxy, reconstructed from the previously published algorithm SEVEM (left) and the new LGMCA method (right). The parasite contribution of the galactic plane is clearly visible on the map SEVEM and totally suppressed by LGMCA method. Credits SAp / CEA. To do this, the researchers used a statistical method of separation of original components, called LGMCA (for "local-generalized morphological component analysis") . This method, based on the parsimonious modeling of data, is particularly well suited to the separation of galactic foregrounds. It has brought significant improvements to the map of the fossil light of the Universe. First, it allows to reconstruct the entire map including the plane of our galaxy, where others methods used so far were forced to rebuild artificially these areas by using "inpainting" methods analogous to photographic digital retouching. It has also been shown that the new CMB map does not contain any detectable residues of the contamination by galaxy clusters called the "SZ-Sunyaev- Zel'dovich effect" . 
These improvements may seem small to non-specialist eyes, but they are very important, because the quality of the CMB map is of paramount importance for determining the overall characteristics of the Universe, such as its matter and dark energy content. In keeping with the philosophy of reproducible research, i.e. the ability of other researchers to reproduce the results, the CosmoStat laboratory also makes fully public the computer codes used to estimate and reconstruct the CMB map.

See also: "The European satellite Planck has completed its first All-Sky Survey" (24 March 2010).

SZ effect: when the light of the background radiation passes through a galaxy cluster, it scatters off electrons of the hot gas, taking part of their energy. This is the SZ effect, for "Sunyaev-Zel'dovich", after the names of the two authors who first predicted this effect in the late 1960s. In the CMB map, this effect shows up as a "hole" at a certain energy, together with a bright spot at a somewhat higher energy, because the light gains energy. Searching for this particular SZ signature in the Planck maps is one of the best ways to detect clusters of galaxies, even when they are normally "invisible" because they emit too little light. But this effect has to be removed very accurately in order to evaluate the cosmic microwave background precisely.
http://irfu.cea.fr/en/Phocea/Vie_des_labos/Ast/ast.php?t=fait_marquant&id_ast=3438
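The press release describes LGMCA only qualitatively. The toy sketch below is not LGMCA (and not any CEA code); it is a generic minimum-variance (ILC-style) linear combination of simulated multi-frequency maps, with made-up mixing coefficients, meant only to illustrate the general idea of separating an achromatic CMB signal from foregrounds that scale with frequency.

```python
# Toy illustration of linear component separation of the kind the article describes.
# This is NOT the LGMCA algorithm: just a minimum-variance weighting (ILC-style) of
# simulated multi-frequency observations, with made-up mixing coefficients.
import numpy as np

rng = np.random.default_rng(0)
npix = 10_000
cmb = rng.normal(0.0, 100.0, npix)            # "CMB" signal, same at all frequencies
dust = rng.normal(0.0, 300.0, npix)           # one foreground template (toy)

freqs = [100, 143, 217, 353]                  # GHz, illustrative channels
a_cmb = np.ones(len(freqs))                   # CMB is achromatic in these units
a_dust = np.array([0.2, 0.5, 1.0, 3.0])       # assumed dust scaling (made up)

maps = np.array([a_cmb[i] * cmb + a_dust[i] * dust + rng.normal(0, 20.0, npix)
                 for i in range(len(freqs))])

# Minimum-variance weights subject to the constraint sum(w * a_cmb) = 1:
C = np.cov(maps)
Cinv_a = np.linalg.solve(C, a_cmb)
w = Cinv_a / (a_cmb @ Cinv_a)
cmb_hat = w @ maps                            # cleaned "CMB" estimate

print("weights:", np.round(w, 3))
print("residual rms vs true CMB:", np.round(np.std(cmb_hat - cmb), 2))
```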
Andrex » 03 Dec 2016, 16:38 wrote:I was talking of considering your picture as being a universe composed exclusively of massless” particles (before inflation). So you reckon all there was is light? My picture actually starts with "light only", since in the LCDM model, radiation dominated the energy density for the first ~50 thousand years after inflation. Before inflation? Nobody knows. I very much like deSitter space with a large cosmological constant before inflation, if inflation is needed at all. https://www.quora.com/General-Relativity-What-is-de-Sitter-space-Why-does-it-matter-for-cosmology wrote:Given these conditions, one can define de Sitter space to be the maximally symmetric solution to the vacuum Einstein equations with positive cosmological constant. Your impressions of what modern cosmology holds is not quite correct. Andrex wrote:Those stipulations (dark matter and dark energy) are needed because, we consider our universe being entirely “matter” (E=Mc2); when, in fact, we cannot observe more than 5% of “matter” in that universe. The exact “fact” is that our universe is 100% space-time, which 5% of it, is occupied by “matter”. This is THE “fact”. Physicists consider the present phase of the universe's energy density to be distributed as ~70% coming from the cosmological constant (Lambda, or something equivalent), ~25% from dark matter particles and ~5% from ordinary (baryonic) matter. These fractions change over time in a predictable fashion - at the time of the CMB (last scattering) we can deduce from observation that radiation made up ~25% of the energy density and total matter (dark and ordinary) ~75%, with the contribution due to Lambda quite negligible at that stage. Andrex wrote:So what is that imaginary problem of “critical mass” based on? Universe was born “flat” and there’s no more questions about it. It is a “fact”. It was “flat” simply because there was no “mass energy” involved at the time. It is a simple “fact” that cannot be refused. No, it is not that simple. Space must have been close to flat, but there is no "critical mass", just a critical density to expansion rate ratio. Both of the parameters have changed over time and is dependent on all forms of energy, including radiation and 'vacuum energy', a.k.a. Lambda. And finally, your discussion of the 'dark matter problem is quite far off the mark. So yes, you are wrong, but I do not quite know where to start to correct it. Have you read some of the recent papers on it? The history is interesting, but not that important. The now and the future are what count.
http://www.sciencechatforum.com/viewtopic.php?f=72&t=23060&start=90
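The reply's statement that the energy budget is roughly 70/25/5 today but about 25% radiation and 75% matter at last scattering can be checked with a few lines, using assumed round values for the present-day densities (not quoted from the post).

```python
# Sketch of how the energy-density fractions quoted in the reply evolve with redshift.
# Present-day values (Omega_Lambda=0.69, Omega_matter=0.31, Omega_radiation~9e-5,
# z_CMB~1090) are assumed round numbers, not taken from the post.
O_L, O_m, O_r = 0.69, 0.31, 9.2e-5      # today; O_r includes photons + light neutrinos

def fractions(z):
    rho_r = O_r * (1 + z)**4            # radiation dilutes as (1+z)^4
    rho_m = O_m * (1 + z)**3            # matter dilutes as (1+z)^3
    rho_L = O_L                         # cosmological constant stays fixed
    tot = rho_r + rho_m + rho_L
    return rho_r / tot, rho_m / tot, rho_L / tot

for z in (0, 1090, 3400):               # today, last scattering, ~matter-radiation equality
    r, m, L = fractions(z)
    print(f"z={z:>5}: radiation {r:6.1%}  matter {m:6.1%}  Lambda {L:8.2%}")
```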
The first assumption by Neu Theory is that the universe as observed is in physical homeostasis. This simply means that there is no definable beginning or end to physical reality. What we observe now is what we will always observe. The universe will appear the same to any observer, at any place, at any historical moment of time. This is also known as The Perfect Cosmological Principle. How can that be? How does the universe do it? Doesn’t the redshift of distant galaxies mean that the universe is expanding? This is a good question, and the answer is that we must first understand the physical nature of space and light before we can explain the cause of cosmological redshift. Space & Redshift in Current Science Until the early 20th century our Milky Way galaxy of stars, gas, and dust was considered the full extent of the universe. There were several fuzzy patches of light that clearly weren’t stars, but they were still considered part of the Milky Way. In the 1920’s Edwin Hubble using 100 inch Hooker Telescope at Mt. Wilson, California conclusively showed that these fuzzy patches were individual galaxies discrete from the Milky Way. Instead of a small universe filled with many stars, the reality was that there existed a much larger universe filled with many galaxies. In 1929 using his data, and the redshift data of these fuzzy patches previously measured by Slipher & Humason, Hubble formulated the Redshift Distance Law. Redshift is the reduction in frequency of photons from the original atomic frequency of emission. Redshift is also a corresponding increase in photon size (wavelength) and a decrease in photon energy. Hubble determined the distances to the galaxies by the use of a “standard candle” based on Henrietta Swan Leavitt’s period-luminosity relationship for Cepheid variable stars. A Cepheid variable star has a direct relationship between its absolute luminosity and its period, hence when we measure the star’s apparent luminosity and its period, we can calculate its distance away from the earth. Hubble’s law states that there is a proportionality between the redshift of galaxies and their distance from Earth. The energy lost by photons (the redshift) is directly proportional to the distance from which they were emitted. Doubling the distance, doubles the energy loss. This proportional redshift from distant galaxies was quickly inferred by scientists as a “Doppler” shift, meaning that the cause of redshift was the radial motion of the galaxies away from each other. The universe was expanding and the galaxies were separating and getting farther apart. The greater the distance between galaxies the greater the relative speed of their separation. This apparent velocity of recession is measured by the Hubble constant and can be expressed as a radial speed per unit distance. The currently accepted value is a velocity of recession roughly 70 kilometers per second (km/s) per megaparsec. One million parsecs (Mpc) is a distance of approximately 3.26 million light years and represents a convenient distance scale between galaxies; for example; the distance to our neighbor the Andromeda Galaxy is approximately 0.78 Mpc (2.5 million light-years) from Earth, the distance to the Virgo Cluster, the nearest large group of galaxies, is approximately 16.5 Mpc (54 million light-years) from Earth. It should be understood the Hubble constant is a measure of the universe on a larger cosmic scale. 
On a smaller galaxy scale the unit value of the constant at 70 km/s is smaller than the 220 km/s rotational velocity of the Sun around the center of the Milky Way approximately 8,000 parsecs (26,000 light years) away. The relative velocity of individual galaxies in groups and clusters of galaxies that are bound together can be larger than 1,000 km/s. Only as the distances increase does the Hubble constant begin to dominate. It is estimated that with distances around 4.5 gigaparsecs (14.7 billion light years) the radial velocity of separation between galaxies exceeds the speed of light. Specifically the Hubble constant is considered as the “Hubble Flow”, the rate at which space is expanding. This is considered a “metric expansion of space”, where the physical distance between galaxies keeps increasing with time. As a consequence the volume of the universe increases with time and the average density of matter on a large enough scale decreases with time. The Big Bang Theory of cosmology is based on universal expansion as a first assumption. It claims that based on this apparent rate of expansion, working backwards, approximately fifteen billion years in the past the universe began out of a singularity without volume that contained all the energy of nature. In the 1990’s astronomical observations of the brightness and redshift of distant galaxies using Type Ia supernova as “standard candles” showed that the light from the exploding stars was dimmer than expected for the distances calculated based on redshift. This is taken to mean that the “metric expansion of space” is accelerating. Despite the attractive force of gravity, space is expanding faster today than it did in the past. The cause of this acceleration is attributed to a unknown form of “dark energy” that permeates all space causing the universe to expand at an accelerating rate. In Current Science the future of the universe is uncertain and open to speculation. In Neu Theory the future of the universe is certain. What you see now, is what you will always see in the future. There is enough time in nature for everything to do what it can. Consider our own biologic existence which has taken billions of years of earth history in the making. That is the meaning of Cosmic Homeostasis. Space & Redshift in Neu Theory The question still remains: if space is expanding and the distance between galaxies is increasing as Current Science believes, how can the universe look the same over time as Neu Theory mantains? How does the universe do it? The Neu Theory answer is simple. Yes, space is expanding but not in the manner Current Science believes, because the absolute movement of space does not separate matter. The large scale physical distance between galaxies is not increasing, there is no “metric expansion of space” that carries galaxies along with it. What we call space, in Neu Theory, is the isotropic physical one-way expansion and diffusion-in-place of 0.87N zomons of free rise movement/energy at the accelerating speed of light. Space maintains a constant volume and density by a perpetual one way cosmic process of renewal. 
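Before the Neu Theory reinterpretation that follows, the conventional numbers quoted above are easy to make concrete; the small difference between the roughly 14.0 billion light years computed here and the text's 14.7 billion simply reflects the assumed value of H0 = 70 km/s/Mpc.

```python
# Quick check of the conventional numbers quoted above: Hubble-law recession speeds
# for the distances given in the text, and the distance at which v = H0 * d reaches
# the speed of light.
H0 = 70.0            # km/s per Mpc, as in the text
c  = 299_792.458     # km/s

for name, d_Mpc in [("Andromeda", 0.78), ("Virgo Cluster", 16.5)]:
    print(f"{name:14s} d = {d_Mpc:6.2f} Mpc  ->  v = H0*d ~ {H0 * d_Mpc:8.1f} km/s")

d_c = c / H0                         # distance where the naive recession speed equals c
print(f"v = c at d ~ {d_c:.0f} Mpc ~ {d_c * 3.262e6 / 1e9:.1f} billion light years")
```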
The large scale spatial distribution of the galaxies that make up the cosmic web, and as observed by us today is perpetually maintained-in-place, through the means of a universal self-regulating cosmic cycle that continually manufactures fresh space to replenish the expansion and diffusion-in-place of existing space thus maintaining a relatively constant universal volume, and that is the reason the universe will always look the same. The phenomena of cosmological “redshift” has a different cause than a “velocity of recession” that increases the physical distance between objects. The truth resides in our understanding the nature of atoms, space & light. The universe is expanding but not in the manner of current scientific theory. Space is accelerating but not in the manner of current scientific theory. Neu Theory provides a model where atoms, space and light – the three principle components of the cosmos – are perpetually accelerating or expanding-in-place yet maintain a consistent relative size and spatial distribution. The nature of the acceleration and expansion and its effect on the universe as a whole is different for each component. The Universal Acceleration of Atoms The universal acceleration of atoms is the acceleration-in-place of the matter and electric charge shells they are made of. The acceleration-in-place of matter is the individual g-spin and g-rise of the three primary matter objects (neutron, proton, electron) and the collective cosmic g-spin and g-rise of all number collections of the universal number N acting as a whole. G-rise is purely a function of number and volume and results in the g-force and the spinfield and hyper-spinfield effects. The acceleration of matter is a non-visual physical acceleration that is experienced as a force by other matter objects that are themselves physically accelerating at their own rate. The relative size between the self-rising matter objects and their large scale spatial distribution remains the same. The acceleration-in-place of electricity is the g-fall compression of the positive and negative charge shells as they surround the spinning cores and membranes. With nuclides the positive charge shell layers act as tension bands that hold the non-spinning neucleons clustered together. The 0.000833u of topologically mirror split spin movement/energy creates a strong attractive or repulsive electric field in the space between the positive and negative charge shells. All the neu numbers (and their collections) of the universe are forever riding the crest of the same wave of universal acceleration in synchronized harmony. The acceleration never stops, slows down or changes direction. Natural acceleration provides the motive power that drives nature. The Universal Accelerating Expansion and Diffusion-in-place of Space Space is the one-way movement (expansion and diffusion-in-place) of free rise energy at the uniformly accelerating speed of light throughout the universe, not the puny 70 km/mpc Hubble flow. The movement of space (zome) is always at an absolute speed of 299 792 458 meters per second perpendicular towards and away from matter, whatever matter’s kinetic motion. The physical distance between matter objects is not changed by the expansion and diffusion-in-place of space. However, the large scale physical distance can increase or decrease by a change in density of space. Distant galaxies remain the same distance apart, more or less, with the same random motion as long as space energy maintains its volume and density. 
The in-place-expansion and diffusion of the zome body is continuously being replenished by the “eternal springs” of fresh zomons during the active galactic nuclei (AGN) phase of the galactic matter cycle. Based on observation, there are an estimated 7 %* of total galaxies at one time in the AGN phase. If this is true, than apparently one in every thirteen galaxies in the universe is currently in the space manufacturing phase of the galactic matter cycle. The AGN emission provides a fresh supply of neutral a-state neutrons which quickly transform into the electric b-state protons and electrons releasing a spectrum of free rise energy pulses (zomons) that are needed to balance the universal one-way in-place-expansion and diffusion of free rise energy (zome). Free rise energy can only expand and diffuse one-way, and must topologically remain within the open-hollow volume made of 0.87 N zomons, meaning in principle there is no space (zome) beyond the openhollow. To maintain cosmic homeostasis, the entire universal industrial complex of an estimated 500 billion galaxy factories must continually manufacture as much space as needed using the galactic matter/energy cycle as the means of production. The zomon production rate needs to match the zome in-place-expansion and diffusion rate. Only then can the universal volume of zome be maintained at a constant volume, pressure, and density. Space is embedded with charge shielded matter in kinetic motion. All matter objects that are not charge shielded, such as the free neutron, or core matter collision fragments, are unstable. The neutron will spontaneously transform into the charge shielded proton and electron objects that are stable; and the collision matter fragments will eventually find nearby charge shielded nuclides to rejoin creating a chain-reaction of instability until some stable charge shielded nuclear forms are made. There are two forms of matter embedded in space, neutral atoms & charged particles: - Neutral matter consists of the cosmic web of atomic matter, bound in individual galaxy hyper-spinfields and multiple galaxy clusters, and including the neutral atoms of inter-galactic dust and gas. - Charged matter consists of bound ions in magnetic fields, and the isotropic shower of free ions (cosmic rays) that traverse the web. It is hypothesized that space interacts in a different manner with the neutral and charged matter embedded within it: - Space decelerates the kinetic motion of unbound neutral matter. Unbound matter is matter that is not orbiting within a spinfield. This may be an explanation for the deceleration relative to the Sun by the two Pioneer Spacecrafts. - Space accelerates the kinetic motion of unbound charged matter such as cosmic rays that travel a long time and distance in the cosmic web. The longer the charged nuclides travel the closer they get to the speed of light. This may be the explanation for ultra-high energy cosmic rays. Space is filled with photons of light that are emitted and absorbed by atoms. The boson photons make a “continuum” of free spin energy bubbles that fill and are carried along with the one-way movement of space. Each photon is a unique physical entity with its own life history. The Expansion of Light The expansion of light is “redshift”. The isotropic free spin movement/energy of the photon does not naturally accelerate. 
As it interacts with the expanding (and accelerating) free rise movement/energy of zome, all cosmic light expands, losing energy that is eventually measured by us on Earth as redshift. The entire radiation spectrum continuously shifts towards longer wavelengths. All photon bubbles are getting larger. Their individual rate of expansion – compared to their moment of birth – increases with age. Cosmic light is the universal isotropic flux of a large number of photon “packages” of all sizes (a wavelength continuum) being bosonically transported from place to place by the “common carrier” Zome – the cosmic “FedEx” for light – at the accelerating speed of space. The speed of space “z” is identical to the speed of light “c.” Cosmic light can be visualized as a warm gas of discrete photons with different colors (energy) that are being individually carried in unique directions by the isotropic expanding-in-place open-hollow of space. Photon bubbles, unlike the three primary matter objects, the neutron, proton, and electron, do not displace or add to the volume of space, they add to its spin energy density. The bubble’s energy remains as a coherent package being bosonically transported by space from its emission matter source in a specific direction (relative to the emitter) until its eventual receipt by another matter object. The matter objects including their charge shells are embedded in space. Space expands and diffuses-in-place at the accelerating speed of light, up to but never through or around the charge shielded matter objects. Space, matter, and charge are discrete, each adding their own form of physical volume to the universe. Photons come from atoms, are carried along by the movement of space until absorption by other atoms. The photon bubble is a free spin energy addition on top of the free rise energy of space. The two energies do not mix, they remain discrete as they interact. The volume of space is not increased by a photon bubble, no matter how dense the photon flux, such as in a laser beam or immediately surrounding an exploding star. Redshift is the continuous decrease in relative energy of all photon spin bubbles in the universe, one quantum spin at a time, from the internal pressure of zome that is expanding and diffusing-in-place within the bubble. The isotropic spin energy of the photon bubble, as long as it exists, is a closure motion at the non-accelerating speed of light, but it is still only physical motion, not physical acceleration like the free rise energy of zome. There are two causes for redshift: - The first cause of redshift is the right angle interaction between the photon’s spin closure energy bubble and the internal pressure on the bubble from radial expanding-in-place rise movement/energy that makes the bubble larger and results in the increase in closure time (a decrease in frequency) of the photon, thereby making the bubble have less energy, or “red shifted” from its starting value. The longer the time of travel by the photon, the more time for zome pressure to act, and the greater the redshift. This uniform decrease in spin energy of photon bubbles is the measured Hubble redshift. Unlike the current scientific view, the measured physical distance between the emitting galaxies and Earth will always remain more or less the same and is not increasing with distance. This is true for all galaxies within the cosmic web. - The second cause of redshift is the additional long term effect of uniform universal acceleration (the aging factor) on individual photons. 
Photons that were born in galaxies billions of light years away, have lived and traveled a long time with expanding and diffusing-in-place Zome. They uniformly lose energy due to constant internal zome pressure, but they also lose energy relative to the long term natural acceleration of Zome, a continuous increase in pressure. The rate of universal acceleration a is a small fraction of the speed of light c, but after a certain historical time period the speed of Zome z will accelerate until it is double the speed at the moment of the photon’s birth and keep on doubling. The photon bubble starts with the historical diameter (wavelength) of its birth, and its free spin energy – which obeys the 2nd law of thermodynamics – and does not accelerate with time. The inverse of the wavelength is the photon’s frequency, and its energy is proportional to the frequency multiplied by Plank’s constant h. The longer a photon lives, its individual rate of aging (which is relative to the speed of space at the time of its birth) keeps getting faster. This accelerated aging effect on a photon is caused by the acceleration of the movement of space, a increasing additional pressure, that adds to the photon’s uniform energy loss from the constant internal pressure from an expanding-in-place zome with a maintained density. The uniform rate of acceleration a of Zome is a small quantity relative to the current physical speed of c at 299 792 458 meters per second. Neu Theory does not provide a value for the rate of uniform universal acceleration, it must be empirically determined. Despite the small value of this perpetual acceleration, after enough time it will double the speed of z, although the measured speed of light will always remain at c. For short distances, while still present, the effect of acceleration on the redshift of a photon is not significant compared to the uniform energy loss from internal zome pressure. However, the longer distance a photon travels, the greater the added effect on redshift caused by acceleration. Based on the observed redshift of distant galaxies, after a photon has traveled approximately 5 billion light years, the increasing effect of acceleration on redshift has become significant. Universal acceleration is the cause of the additional redshift of photons from distant Type Ia supernova. Dark energy is not required. The Life of a Photon Consider the universe from the point of view of a single photon. Just like us, a photon is a “real” thing during the duration of its existence, and similar to the way we consider ourselves on Earth, the photon is and will remain however far it travels, the center of its universe. A photon has measurable physical properties, and a direction of travel. It has a moment of birth from some atomic or nuclear event involving matter, and it has either: - A moment of death when it is absorbed by matter, as its free spin energy becomes the kinetic rise energy of matter. - Or speculatively, should the photon live and expand long enough, an ever-lasting life as it reaches the cosmic microwave background (CMB) spin energy resonance that fills space. At the resonance the photon loses its individual identity, meaning it can no longer be associated with a specific source, and becomes part of the maintained universal isotropic body temperature of the cosmos at ∼ 2.73° K. Photons are being absorbed and emitted by the entire cosmic body in homeostasis. One can only speculate on the physical process that maintains the CMB resonance peak wavelength. 
It is hypothesized by Neu Theory that, unlike the current scientific view, the peak wavelength will stay constant with time, not decrease. Perhaps, similar to its effects on galaxy spinfield orbits, the g-spin of the cosmos provides a natural harmonic accelerating floor to the free spin energy of the photon bubbles at that wavelength. What is being accepted is that the entire radiant spectrum from all the discrete sources in the universe, e.g., “little bang” photons and stellar radiation (including the radio frequencies whose photons are much larger than CMB photons), is redshifting into the spin energy continuum. Even the isotropic CMB photons must redshift as they cluster around a natural peak. All photons follow the same set of rules; there are no exceptions.

Light Comparison Table

| Name | Wavelength (m) | Frequency (Hz) | Closure Time (s) | Photon energy (eV) |
| --- | --- | --- | --- | --- |
| Gamma ray | less than 0.02 nm | more than 15 EHz | less than 21 as | more than 62.1 keV |
| X-ray | 0.01 nm – 10 nm | 30 EHz – 30 PHz | 10 as – 10 fs | 124 keV – 124 eV |
| Ultraviolet | 10 nm – 400 nm | 30 PHz – 750 THz | 10 fs – 4.19 fs | 124 eV – 3 eV |
| Visible | 390 nm – 750 nm | 770 THz – 400 THz | 4.08 fs – 7.85 fs | 3.2 eV – 1.7 eV |
| Infrared | 750 nm – 1 mm | 400 THz – 300 GHz | 7.85 fs – 10.5 ps | 1.7 eV – 1.24 meV |
| Microwave | 1 mm – 1 meter | 300 GHz – 300 MHz | 10.5 ps – 10 ns | 1.24 meV – 1.24 µeV |
| Radio | 1 m – 100,000 km | 300 MHz – 3 Hz | 10 ns – 1 s | 1.24 µeV – 12.4 feV |

The open-hollow is filled with a continuum of radiation that is perpetually expanding-in-place, as it travels, into longer and longer wavelengths. The continuum of radiation is being added to by the atomic and nuclear emission of photons at discrete frequencies from stellar sources, as well as the emission of photons from all the other “kinetic heat” sources in nature. A large number of photons come into the open-hollow from the atomic electro-kinetic bond of hydrogen at the closure of the little bang phase after AGN emission. The hydrogen photons are emitted at different discrete frequencies depending on the energy state difference between electron transitions. These are the well known Lyman, Balmer, and Paschen series of spectral lines that are observed. Galaxies in the AGN phase are more or less uniformly distributed throughout the open-hollow. Over extended periods of time this contributes to the isotropic homogeneity of the CMB. The smallest wavelength photons are “gamma rays” from nuclear events. Photons of most frequencies can be made in the laboratory.

The Cosmic Open Hollow

The Cosmic Open-Hollow is a maintained volume of space filled with light. The rise energy of space and the spin energy of light are two different forms of energy that bosonically coexist together, albeit at a cost in energy for light. The Open-Hollow is also embedded with atoms, which are objects made of matter with the physical movement/synergy forms of magnetism and the g-rise/spinfield. Matter is “jacketed” by electric charge shells [6+][6-] with electric fields [7+][7-], and matter is in motion. These fundamental forms of nature are physically bound together and act as one. Together these five physical quantities – space, light, matter, electricity, motion – act in a manner that maintains perpetual cosmic homeostasis. How do they do it? The key is the continuous creation of space.
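The "Photon energy" column of the Light Comparison Table can be cross-checked against the standard relation E = hc/λ; the sketch below uses the table's own band edges.

```python
# Cross-check of the "Photon energy" column in the Light Comparison Table above,
# using E = h*c / lambda (equivalently E[eV] ~ 1239.84 / lambda[nm]).  The band
# edges below are the table's own values.
hc_eV_nm = 1239.84               # h*c in eV*nm

bands_nm = {"Visible (750 nm)": 750, "Visible (390 nm)": 390,
            "Ultraviolet (10 nm)": 10, "X-ray (0.01 nm)": 0.01}

for name, lam in bands_nm.items():
    print(f"{name:22s} ->  E ~ {hc_eV_nm / lam:9.2f} eV")
```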
Matter and motion recycle electricity through the means of a galactic atomic matter cycle, which periodically releases fresh burst of space into the open-hollow that replaces the expansion and diffusion-in-place of space at the accelerating speed of light. The light bubbles (radio waves to gamma rays) that continually traverse space are products of atomic electrical interaction, and nuclear matter transformations. The space expansion and diffusion-in-place rate can be graphically calculated as the time at the speed of light for the open-hollow to double in volume. (Figure 5.4). The time is calculated as the difference between the open-hollow boundary radius (R1) and the imaginary doubled volume radius (R2) divided by the speed of light. Figure 5.4 – The Cosmic Whole From the “outside”, an imaginary, not a physical perspective, the open-hollow boundary is a topological and physical extent that can be schematically represented as a surface of a spherical bubble that contains all the atoms, space, and light of the universe. The “outside looking in” is a purely fictitious perspective, as in principle one can never be physically outside the universe. The only true physical perspective is an “inside looking out.” The bubble is imagined as maintaining a relatively constant volume with a fixed radius (R1). The physical size of the volume is determined by observation and theory. There is no physical universe beyond the open-hollow boundary, only a topological void. The model defines a void as a place without matter or energy. The topological void surrounding the open-hollow, is given a number exactly equal to the number of electron topological voids at any time. As the universe expands into the void, metaphysically it expands into itself. Space and light fill the bubble with uniform densities that can be estimated. A fixed amount (N) of matter is embedded within the volume as a cosmic web of galaxies, gas, and dust, and an isotropic shower of cosmic rays. The model considers the open-hollow boundary as a topologically intact spherical surface that contains the expansion and diffusion-in-place of space. This can be considered a one-way movement into the future and away from the past. There is no space (zome) beyond the open-hollow boundary. As a thought experiment, imagine the bubble surface expanding or contracting as the total volume of space changes. The total volume of space is maintained from the average volume of approximately 0.87N zomons that are being recycled by the universal matter cycle. As the individual zomons expand and diffuse-in-place, fresh zomons are released within the bubble that equals the diffusion-in-place maintaining a constant universal volume and density of space. If the average volume of space per atom gets larger, the universal volume gets larger, and the rise energy density of space decreases, less pressure. If the average volume of space per atom gets smaller the universal volume gets smaller and the rise energy density of space increases, more pressure. For cosmic homeostasis to occur the infusion of fresh zomons must equal the universal expansion and diffusion-in-place of zome. How does one measure universal diffusion-in-place? As another thought experiment, imagine the bubble to have a “leaky” surface. The universe is analogous to a permeable membrane that leaks space through its entire hollow surface into a topological void at the speed of light. 
For the open-hollow to maintain its homeostasis volume, as much fresh space has to be manufactured within the open-hollow volume, as is topologically “leaking” through the open-hollow boundary surface. Of course in physical reality there is no leaking, as this would mean that energy is escaping outside the universe, which in principle is not allowed. There is only diffusion-in-place of space with time inside the open-hollow volume, the total energy remains constant. The purpose of this thought experiment is to estimate the time it would take to double the volume of the universe at the speed of light. The larger the initial volume the longer it would take for the universe to double. It should be noted that the universe doubling, similar to the g-rise doubling of a matter object, is a purely topological concept, useful for calculating the amount of fresh space that is required to replace the existing space that is expanding and diffusing-in-place with time throughout the open-hollow volume. From the “inside”, the open-hollow boundary is an imaginary surface, receding away from all observers at the accelerating speed of light. Physically, the recession of the bounding surface, does not make the universe measurably larger as the distance between objects does not increase. The open-hollow boundary physically contains all the matter, energy, space, and light of the universe. The open-hollow volume (Z) is made from an average volume (z) associated with each zomon. A number of zomons equal to Z must be released in the doubling time to maintain space homeostasis. With Z set equal to 0.867N, and N set equal to 3.0 × 10^79, makes Z equal to 2.6 × 10^79 zomons. It should be emphasized that the number of zomons is directly proportional to the universal b-state (electric) number. This is entirely based on the actual distribution of atoms in nature. If the electric number changes the zomon number will correspondingly change. In the Neu Theory model, the galaxy core supercell neucleon mass, has at least as many neutral numbers as electric numbers as it is made of deuterons and neutrons. Supercell core mass represents a significant quantity of universal matter, therefore it is possible the actual universal b-state is less than 0.87N, and perhaps more like 0.8N. However for our first calculation we will use 0.87N. The galaxy supercell core releases fresh free a-state neutrons which spontaneously little bang into the b-state releasing fresh pulses of space. The spectrum of space pulse energies balances the kinetic energy of the electrons and cores keeping the total energy per little bang at 0.78 MeV. It has been observed that the average energy of the emitted electrons in beta decay is ~1/3 of the invariant mass loss, approximately 0.26 MeV. Based on this fact, Neu Theory estimates the average zomon pulse carries the remaining 0.52 MeV, representing ~2/3 of the total mass delinked. With this average value the total maintained energy of Z is equal to ~1.35 × 10^79 MeV. This represents approximately ~0.05% of the total rise movement/energy value of the universe calculated at ~2.8 × 10^82 MeV. Using the NASA value of one atom per 4 cubic meters and a b-state number at 0.87 N, each zomon has an average volume of 4.6 m³. This gives our model universe an approximate volume of 1.2 × 10^80 m³, equal to a ball with a radius (R1) of 3.0 × 10^26 m, or 32.3 billion light years. A ball with double that volume has a radius (R2) of 3.8 × 10^26 m, or 40.7 billion light years, a difference of 8.4 billion light years.
It is theorized that in 8.4 billion years a volume of 1.2 x 10^80 m^3, made from 2.6 x 10^79 zomons, will have topologically diffused-in-place with time, or diagrammatically "outside" the open-hollow boundary of 32.3 billion light years, as shown in Fig. 5.3. To maintain a homeostasis cosmic volume of zome, an equal number of zomon space bursts will need to be freshly infused into the universe in 8.4 billion years. This implies that 2.6 x 10^79 neutrons have to be recycled by galaxy supercell ejections within the cosmic open-hollow volume during an 8.4-billion-year time period. Using these values, the rate of ejection would be ~3.1 x 10^69 neutrons per year, or 9.8 x 10^61 per second. With a solar mass number of ~1.2 x 10^57, this is equivalent to a perpetual average ejection of ~eighty-two thousand (82,000) solar mass numbers of neutrons every second throughout the cosmos.

Do astronomical observations support these numbers? Are there enough active galactic electric supercell cores (AGN) in the cosmos that are ejecting neutrons in sufficient quantities to replace the estimated universal volume of zome topologically expanding at the speed of light? In one study* it was estimated that ~7% of observed galaxies are active and 93% are normal. This provides 3.5 x 10^10 active galaxies from an assumed total of 5.0 x 10^11 galaxies in the universe. At any moment approximately one out of thirteen galaxies in the universe is in its active phase. There are many types and sizes of active galaxies observed. What many have in common is a bright light-emitting region a few light days from the central core whose emission is in the Balmer series of spectral lines, indicating the presence of hydrogen. This is consistent with the neutron ejection hypothesis, the subsequent little bangs, and the synthesis of hydrogen. With a required cosmic ejection rate of 9.8 x 10^61 neutrons per second, on average each of the estimated 35 billion active galaxies would need to eject ~2.8 x 10^51 neutrons every cosmic second to maintain homeostasis. Surprisingly, this is less than the mass number of the earth, ~3.57 x 10^51. Doesn't seem like much.

Ultimately it is observation and reason that will determine the truth or falsification of any theory. There is more work needed to show if the Neu Theory model matches observation. The theory uses values that must be accurately determined. If the values input into the model are changed, the model will correspondingly adjust.
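Continuing the same illustrative bookkeeping, the ejection rates claimed above follow directly from the quoted inputs (the 8.4-billion-year doubling time, the assumed 5.0 x 10^11 galaxies with ~7% active, and the solar and terrestrial mass numbers given in the text). A minimal sketch:

```python
Z = 2.6e79                   # zomons to be replaced (from the text)
doubling_time_yr = 8.4e9     # doubling time in years (from the text)
seconds_per_year = 3.156e7

per_year = Z / doubling_time_yr            # ~3.1e69 neutrons per year
per_second = per_year / seconds_per_year   # ~9.8e61 neutrons per second

solar_mass_number = 1.2e57   # nucleons in the Sun (quoted in the text)
earth_mass_number = 3.57e51  # nucleons in the Earth (quoted in the text)

total_galaxies = 5.0e11
active_fraction = 0.07       # ~7% of galaxies estimated to be active
active_galaxies = active_fraction * total_galaxies   # ~3.5e10

print(f"{per_second:.1e} neutrons/s "
      f"= {per_second / solar_mass_number:,.0f} solar mass numbers per second (~82,000)")
print(f"per active galaxy: {per_second / active_galaxies:.1e} neutrons/s "
      f"(~{per_second / active_galaxies / earth_mass_number:.1f} Earth mass numbers)")
```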
https://www.neutheory.org/the-recycling-universe-hypothesis/cosmic-homeostasis/
Have cosmologists discovered evidence of inflation?
29 March 2006
In 1948, George Gamow, using the big bang model, predicted the existence of the cosmic microwave background (CMB), sometimes referred to as the cosmic background radiation (CBR or CMBR). In 1965, Arno Penzias and Robert Wilson discovered what had been predicted, and for their CMB findings, they received the Nobel Prize in physics (1978). The cosmic microwave background (CMB) supposedly arises from an era that took place about 400,000 years after the big bang, when matter had cooled to a temperature of approximately 3,000 K, which allowed electrons and protons to combine to form stable hydrogen atoms for the first time. Prior to this hypothetical "age of recombination", photons of light could not travel far before they were absorbed by the electrons. This made the universe opaque. After this time, the universe would have been transparent, allowing photons to decouple from matter and pass mostly unhindered through space. Today, we see the radiation from the "age of recombination" coming from all directions after it had traveled billions of light years, but since the universe has expanded about a thousandfold since, this distant radiation has cooled by a factor of a thousand to about 2.73 K. Because the steady-state model, the big bang's primary competitor in secular astronomy, could not accommodate the CMB, the big bang has thus been the standard cosmogony for the past four decades. As difficulties have arisen for the big bang model, cosmologists have liberally modified the theory to meet each challenge. For instance, in 1981 Alan Guth proposed the idea of "inflation" to solve the horizon and flatness problems. Inflation posits that very shortly after the big bang, the universe underwent a very rapid expansion to a much larger size. Descriptions of the process vary, but typically, inflation supposedly happened about 10^-35 seconds after the big bang, during which the now-visible universe expanded from perhaps the size of a proton to the size of an orange. Note that inflation is far faster than the speed of light, and that the normal rate of universal expansion that we see today prevailed after inflation. Cosmologists generally agree that inflation has occurred because it handled the flatness and horizon problems so well. But is there any evidence for inflation? No, but on 16 March 2006, NASA posted a story that answered, "Yes." A quote from the story states, "Scientists peering back to the oldest light in the universe have evidence to support the concept of inflation." What did the scientists find? In 2001, NASA launched the Wilkinson Microwave Anisotropy Probe (WMAP). WMAP contained two basic types of instruments to measure slight spatial variations in temperature and polarization in the CMB. In the big bang scenario, the early universe must have contained slight variations in density that eventually gave rise to structure, such as galaxies. If the universe were initially too smooth, there would be no structure, and if it were too clumpy, nearly all matter would have been gobbled up into black holes. Either way, we wouldn't be here to observe the universe. The slight variations in densities in the early universe would manifest themselves as slightly cooler and warmer regions in the CMB. In 1991, the Cosmic Background Explorer (COBE) was barely able to detect the temperature variations, but the temperature probes aboard WMAP were able to map the temperature variations in much greater detail.
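The cooling figure quoted above follows from the standard scaling of radiation temperature with expansion: the temperature falls by the same factor by which the universe has stretched. Below is a minimal sketch using the article's own round numbers (3,000 K at recombination and a roughly thousandfold expansion since); the alternative factor of ~1100 shown for comparison is the commonly quoted modern value and is not taken from the article.

```python
# Radiation temperature falls inversely with the linear expansion of the universe.
T_recombination = 3000.0   # K, temperature at the "age of recombination" (from the article)
expansion_factor = 1000.0  # "about a thousandfold" expansion since then (from the article)

T_today = T_recombination / expansion_factor
print(f"Predicted CMB temperature today: ~{T_today:.1f} K (measured: ~2.73 K)")

# Using the commonly quoted expansion factor of ~1100 instead gives:
print(f"With a factor of 1100: ~{T_recombination / 1100:.2f} K")
```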
The WMAP polarization experiments were designed to measure something more subtle. Being a wave phenomenon, light can be polarized. That is, light can vibrate in preferred directions. Most light is unpolarized, but various mechanisms can introduce some polarization. The matter clumping in the early universe ought to manifest itself to a degree in what physicists call "E-mode polarization". WMAP has found evidence of this. However, B-mode polarization ought to arise from gravity waves resulting from inflation. Has WMAP found B-mode polarization? Quoting from the NASA website: "WMAP detected E-mode polarization but not B-mode yet." So despite the claim made by the press release and the website, there is no evidence of inflation. What is going on then? Cosmologists now regularly take data from very different experiments and combine them into a single result, though press reports rarely discuss the input of the disparate data. An example of this was the February 2003 announcement of the latest 13.7 billion year age estimate of the universe, along with estimates of the percentages of mass distributed amongst lighted and dark matter and "dark energy". Also left unsaid is how extremely model-dependent the conclusions are. That is, if we change the model slightly, the conclusions change as well. The recent claim of the discovery of evidence for inflation builds upon the earlier WMAP work, among others, and, like the others, is very model-dependent. For instance, how the observed E-mode polarization constrains the amount of inflation energy is model-dependent. The model dependence amounts to a type of circular reasoning—cosmologists interpret the data assuming inflation, and then use the data to support inflation. It appears that the claim that we have found evidence of inflation is overstated. At best, the evidence is very indirect, to the point of being premature. So, why all the fanfare now? In a few years, new experiments currently underway ought to measure B-mode polarization directly. However, even if B-mode polarization is found, the conclusion that it must result from inflation will be model-dependent. Inflation is such a foundation for modern big bang cosmogony that it is almost unthinkable among cosmologists that it might not exist. Thus the claim of first discovery of evidence for inflation carries much reward when compared to the risk of eventually being proved wrong.
https://creation.com/have-cosmologists-discovered-evidence-of-inflation
What is the fundamental ‘substance’ of the universe?
Today, the original wave-particle debate is often subsumed into quantum theory, where not only do electromagnetic waves have particle-like features, but particles also have wave-like features. In this context, accepted science might be said to have already embraced the idea that matter particles may only represent a concentration of energy, as defined by Einstein’s famous equation [E=mc^2], such that the idea of material ‘substance’ has to evaporate at some level within the microscopic universe. By the same token, waves might also be generalised as energy propagating between two points in space-time. Therefore, in this section, we may need to start qualifying the scope and meaning of previous classical notions of physical particles, for in many ways, it becomes increasingly difficult to maintain this idea, especially at the sub-atomic level, where the substance of the particle appears to carry no real meaning. Of course, if we accept this position, we might also have to question the inference of a particle model that is still used to describe the interactions within the sub-atomic domain. In this context, the classical model of particle physics attempts to describe the building blocks that give form to the universe, although the quantum particle model subjects the original idea to much revision. However, while the diagram above may appear to be all encompassing, it only really covers about 4% of the ‘substance’ within the universe, with the additional realisation that only about 0.0000000000000000000042 percent of the volume of the universe contains any matter, as we understand it. So what is the remaining 96% of the 'matter' universe made of? Well, currently there is a theoretical assumption that 22% exists in the form of dark matter, while the remaining 74% is in the form of dark energy. However, neither dark matter nor dark energy is shown on the diagram above, because the composition is essentially speculative and unknown. In the context of cosmology, dark energy is a ‘substance’ that is assumed to uniformly fill the entirety of space and to be the source of a repulsive force causing the universe to expand.
When was the standard model established?
Over time, the standard model has grown to define an increasing number of particles, i.e. 200+; although most interactions are now described using only 17 fundamental particles, i.e. 6 types of quarks, 6 types of leptons, 4 types of force-carrying bosons and a hypothetical Higgs boson. However, all matter particles have associated anti-matter particles, e.g. the electron counterpart is called a positron. Within the particle model, there are four known forces carried by boson particles. The strong and weak forces essentially exist only within the atomic nucleus and bind sub-atomic particles within the atomic structure as a whole. While gravity is essentially the only force to scale to the macroscopic universe, forces such as friction and pressure, along with all electric and magnetic interactions between charged particles, are due to electromagnetic forces. The electromagnetic and gravitational forces are both subject to an inverse square law of distance, which implies that these forces theoretically extend to infinity, while the strong and weak nuclear forces are essentially restricted to distances on the scale of the atom.
How does the particle model align to relativity and quantum theories?
The standard model is said to be consistent with both quantum mechanics and special relativity, but gravity is still excluded from this model, because the graviton force particle has never been observed. However, there is one other force-particle, the Higgs boson, still subject to questionable verification, that is critical to the description of mass within the constraint of a particle model. In this context, the Higgs boson is said to cause the interaction between particles that accounts for the effect of mass; although it does not explain why the photon has to be described as a massless force particle. The standard model is also said to align with Quantum Field Theory (QFT), which is in turn based on the ideas contained within Quantum ElectroDynamics (QED) that is thought to explain how electrons, positrons & photons interact. When QFT is discussed in the context of the strong force, it is associated with Quantum ChromoDynamics (QCD), which is said to explain how quarks & gluons interact to form other composite sub-atomic particles, e.g. protons and neutrons. A free neutron, i.e. outside the atomic nucleus, has a mean lifetime of ~15 minutes, before it typically decays into a proton plus electron plus an electron anti-neutrino. The decay of a proton outside of the nucleus has never been observed, but is calculated to have a mean lifetime of not less than 10^36 years.
OK, but what about the description of the universe as a whole?
In mass terms only, about 85% of the universe is not accounted for by any of the particles in the standard model, other than a somewhat conceptual description of dark matter. According to the Big Bang theory, our universe resulted from a ‘singularity’ some 13.7 billion years ago, although the idea of the singularity itself cannot really be described by any accepted physics. Within this cosmological model, the 4 fundamental forces, as perceived today, are thought to have remained a unified force for the first 10^-43 seconds, after which gravity and the strong nuclear forces separated from the other two forces. Then, after 10^-12 seconds, the electromagnetic force separated from the weak nuclear force, leaving the small, but rapidly expanding universe to consist of a hot quark-gluon plasma, which included leptons and anti-particles. After 10^-6 seconds, hadrons began to form, although most hadrons and anti-hadrons were eliminated by mutual annihilation, leaving only a one-billionth residue of hadrons after the first second of existence.
What does accepted science tell us about the next 13.7 billion years?
Well, within the spread of a number of different variants of the Big Bang model, between 1-3 seconds after the expansion of the singularity, the universe continued with the mutual annihilation of leptons and anti-leptons until only essentially leptons remained. As a by-product of this mutual annihilation process, the universe was super-hot and dominated by photons. However, various models suggest that after some 3-20 minutes, protons and neutrons would have begun to combine to form atomic nuclei, but that the resulting plasma of electrons and nuclei, i.e. ionised hydrogen and helium, would have existed in this state for some 300,000 years until the temperature dropped to about 5000ºC, at which point the lower energy levels would have allowed hydrogen and helium atoms to form. Note, if the amount of matter and anti-matter had been totally symmetrical, the subsequent mutual annihilation would have resulted in a universe consisting of only photons.
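The closing note above, together with the one-in-a-billion matter excess described in the next paragraph, can be illustrated with a toy bookkeeping exercise: an almost perfect matter-antimatter cancellation leaves behind roughly a billion photons for every surviving particle of matter. The sketch below is only that toy bookkeeping, not a cosmological calculation, and the figure of about two photons per annihilation is a simplifying assumption of mine rather than something stated in the text.

```python
# Toy bookkeeping of the matter/antimatter annihilation described in the text.
asymmetry = 1e-9                  # one extra matter particle per billion pairs

pairs = 1e9                       # consider a batch of a billion matter-antimatter pairs
extra_matter = pairs * asymmetry  # the single surviving matter particle in that batch
photons = 2 * pairs               # ~2 photons per annihilation (simplifying assumption)

print(f"surviving matter particles per batch: {extra_matter:.0f}")
print(f"photons produced per batch:           {photons:.1e}")
print(f"photons per surviving baryon:         ~{photons / extra_matter:.1e}")
```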
However, current models assume that for every billion annihilations, a single particle of matter remained, which now accounts for all the matter we see in the universe. Today, about 99% of the photons in the universe exist in the form of what is called the Cosmic Microwave Background (CMB), which is believed to be the residue energy of the mutual annihilations that took place some 13.7 billion years ago. Originally, the temperature of the CMB plasma would have been measured in terms of millions of degrees, but it has now cooled to an ambient temperature that is less than 3 degrees above absolute zero. In comparison, the number of photons ever radiated by all the stars in the universe is trivial.
But what level of verification exists for this model?
The evidence for dark matter primarily exists in the form of gravitational anomalies, e.g. the rotation of galaxies. The scale of these anomalies suggests that dark matter has to be a dominant form of mass in the universe, which not only caused hydrogen to coalesce into stars, but became the primary binding force that holds galaxies together. In order to fit current observations, dark matter cannot interact with the electromagnetic force, which is why it is so difficult to detect outside the scope of the gravitational anomalies, which therefore remain the primary evidence for the existence of dark matter. As such, dark matter is assumed to be non-baryonic, which simply means that its composition does not align to the standard model. Some physicists continue to argue that dark matter does not exist and that science should consider the possibility that the current theory of gravitation may need to be revised in order to explain the observed gravitational anomalies. For now, it is probably fair to say that the concept of dark matter is still an unverified idea. However, it is possibly more true to say that the evidence for dark energy is even more tenuous, in that its existence is based on observations related to the rate of expansion of the universe, derived primarily from redshift measurements. However, the details behind this issue will be deferred to the main discussion of cosmology. While there is no shortage of alternative models, all are essentially conceptual and most suffer from a comparable lack of verifiable evidence. As such, this section will try not to confuse the status of accepted science by introducing too many ideas outside what might be called mainstream science. However, more tentative theories will be discussed in the next section entitled: Speculative Direction. As an alternative to the particle model, string theory was one attempt to describe the building blocks of the universe in terms of strings of energy rather than point particles. The general goal was to unify relativistic quantum field theory with the theory of general relativity at the Planck scale, i.e. 10^-35 metres, where Einstein's equations of general relativity essentially break down. However, the idea introduces much theoretical complexity, not least the idea that strings vibrate in ten dimensions, six of which are tightly coiled on the Planck scale and cannot be physically verified. Such abstraction has led some scientists to highlight the dangers of hypotheses being evaluated, if not verified, solely on the basis of mathematical models, independent of any substantive verification within the physical universe.
So what is the scope of accepted science?
In truth, the scope of accepted science is now so enormous that it requires entire libraries dedicated to its myriad sub-branches, each of which is considered a specialist subject in its own right, requiring years of study and training to comprehend. As such, the selection of relativity, quantum mechanics and cosmology represents little more than a litmus test of some of the issues that are intended to complement the previous discussion of the Scientific Perspective in the context of Worldviews. In order to help wider distribution and review, the web-based discussion of the accepted theory of science has been, or will be, reproduced in a series of PDF files. Note: missing files will be added as and when completed. "Modern science is characterized by its ever-increasing specialization, necessitated by the enormous amount of data, the complexity of techniques and of theoretical structures within every field. Thus science is split into innumerable disciplines continually generating new sub-disciplines. In consequence, the physicist, the biologist, the psychologist and the social scientist are, so to speak, encapsulated in their private universes, and it is difficult to get word from one cocoon to the other..."
http://mysearch.org.uk/website1/html/235.Accepted.html
The last tests of the Ariane 5 rocket system have been finished and ESA's Planck satellite is sitting ready for launch at the Guiana Space Centre in Kourou. Together with ESA's space telescope Herschel, Planck will lift off into space on 14 May to begin its studies of the cosmic microwave radiation and of the clues it gives about the Big Bang, the earliest phases of cosmic history, and the structure and composition of the Universe. The Max Planck Institute for Astrophysics (MPA) in Garching has developed important software components for Planck and is getting ready to participate in the analysis and scientific interpretation of the mission data. According to the standard model of cosmology, our Universe began 13.7 billion years ago in a Big Bang, the origin of space and time. The Cosmic Microwave Background (CMB) is the relic heat from this Big Bang, released 380,000 years after the beginning and still travelling freely through space today. At that early time, weak fluctuations of matter density were present, which are seen as variations of temperature in the CMB. By observing these fluctuations, cosmologists can infer how the large-scale structure of today’s Universe - galaxies, galaxy clusters and filaments - was formed. The Planck satellite will be placed at the second Lagrangian point of the Sun-Earth-Moon system (L2), located about 1.5 million kilometres away from the Earth - four times the distance to the Moon. The satellite will spin around its own axis, always pointing towards the Sun, with each rotation recording another strip of the sky and mapping the sky’s temperature to an accuracy of about one millionth of a degree. The data are sent to Earth and converted into temperature maps of the sky in data processing centres in France and Italy. What the maps look like depends on certain characteristics of the Universe, for example on the curvature of space. For hypothetical Universes with specified properties, computer simulations using the MPA software generate virtual maps which will be compared with maps of the real sky. "From the comparison we can draw conclusions about the structure of our own Universe, for example how much ordinary matter and dark energy exist in it", explains Torsten Enßlin, head of the Planck group at MPA. The physics of structure formation and the formation of galaxies will be studied via the so-called Sunyaev-Zeldovich effect - the heating of CMB photons by scattering in the atmosphere of galaxy clusters. Due to this effect, distant galaxy clusters become visible as "shadows" in front of the cosmic microwave background. However, the galaxy clusters are only the densest parts of the cosmic matter distribution. 85 percent of the cosmic matter remains invisible and dark. The composition of this Dark Matter is still not known. From their computer simulations, MPA cosmologists have shown how the CMB is influenced by the gravitational field of dark matter. The unseen structures of dark matter can therefore be deduced from temperature variations in the CMB. This requires the scientists to analyse the Planck data with statistical methods, obtaining important information on the structure and future development of the Universe. Moreover, the mission is expected to detect thousands of distant objects in a frequency range barely studied so far, and so to offer new insights into the physics of galaxies, active galactic nuclei and quasars in the submillimetre domain. These will show Planck scientists energetic processes in the immediate vicinity of massive black holes.
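Two of the figures in this press release are easy to sanity-check: the distance to L2 compared with the Earth-Moon distance, and what an accuracy of "about one millionth of a degree" means relative to the ~2.7 K background itself. A quick sketch, in which the mean Earth-Moon distance (~384,400 km) is standard data assumed by me rather than taken from the release:

```python
L2_distance_km = 1.5e6   # distance from Earth to the L2 point (from the release)
earth_moon_km = 3.844e5  # mean Earth-Moon distance (standard value, assumed here)

print(f"L2 is ~{L2_distance_km / earth_moon_km:.1f} times the Earth-Moon distance")

T_cmb = 2.725            # K, mean CMB temperature
sensitivity = 1e-6       # K, "about one millionth of a degree"
print(f"relative temperature sensitivity: ~{sensitivity / T_cmb:.1e}")
```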
Planck may also help us to understand the birth of the first stars in the Universe and the structure of our own galaxy, the Milky Way. "With the start of the Planck satellite a dream comes true", says Rashid Sunyaev, MPA director and pioneer of CMB research. "Planck will provide the most precise data on the early Universe ever. We have never been so close to the Big Bang." "We will understand our Universe’s past and throw a glance at its future", adds Sunyaev’s colleague Simon White. "Will it keep on expanding for ever or some day collapse back upon itself? What is the nature of the mysterious dark energy causing this expansion? Planck will provide an answer to many important questions of cosmology. The satellite is the most powerful tool ever developed for studying the Cosmic Microwave Background."
https://www.mpg.de/590854/pressRelease20090511
The cosmic microwave background (CMB) is a gas of electromagnetic radiation left over from the Big Bang. It has a very nearly thermal frequency spectrum with a temperature T = 2.725 K. The intensity is very nearly uniform across the sky, with variations of roughly one part in 100,000. The CMB radiation was emitted roughly 13.8 billion years ago, when the Universe was only 380,000 years old. Maps of the intensity of the CMB across the sky thus provide a snapshot of a spherical surface, of radius 14 billion light years, in the Universe when it was extremely young. NASA's Wilkinson Microwave Anisotropy Probe (WMAP) and (soon) the European Space Agency's Planck satellite provide high-signal-to-noise maps of the CMB intensity with angular resolutions of a fraction of a degree. They thus provide an extremely detailed picture of the early Universe. By comparing the results of CMB measurements with theoretical models for the origin of the fluctuations, we have been able to derive a remarkably precise description of the early Universe, its evolution over time, and its contents. We now have a precise inventory of the contents of the Universe (ordinary atomic matter, dark matter, neutrinos, electromagnetic radiation, and dark energy), and we can map precisely the distribution of tiny primordial mass inhomogeneities that are seeds for the galaxies and galaxy clusters in the Universe today. The observations provide strong evidence in favor of inflation, a period of accelerated expansion in the ultra-early Universe that in some sense set the Big Bang in motion. Thus, the aim of CMB experiments now is to learn more about inflation. Specific targets in this effort include a particular pattern of CMB polarization, as well as characteristic higher-order correlations in the CMB temperature pattern. There are also opportunities to learn more about the later Universe, when galaxies and galaxy clusters were forming, by looking for distortions to the CMB image from gravitational lensing by the intervening matter distribution. In this talk I will sketch out how the contents and largest-scale structure of the Universe have been determined by CMB experiments. I will briefly explain why they suggest a period of inflationary expansion in the early Universe. I will then discuss the prospects for learning both about the early and later Universe with forthcoming experiments. This talk will serve as an introduction to later talks in the series that will explain how the experiments are done, document recent experimental progress, and present several new results.
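As a worked aside on why a 2.725 K blackbody shows up as a microwave background: Wien's displacement law, lambda_peak = b / T with b ≈ 2.898 x 10^-3 m K, puts the peak of the spectrum near one millimetre. This is standard blackbody physics rather than anything specific to the abstract above; a minimal check:

```python
WIEN_B = 2.898e-3  # m*K, Wien displacement constant

def peak_wavelength_mm(temperature_k: float) -> float:
    """Peak wavelength (in mm) of a blackbody at the given temperature, via Wien's law."""
    return WIEN_B / temperature_k * 1e3

print(f"CMB (2.725 K): peak near {peak_wavelength_mm(2.725):.2f} mm -> microwave band")
print(f"At recombination (~3000 K): peak near {peak_wavelength_mm(3000) * 1e3:.2f} micrometres -> near infrared")
```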
https://aaas.confex.com/aaas/2013/webprogram/Paper8423.html
For the last three years, I have been conducting a photographic investigation of the life, culture, circumstances, traditions, and the new homes, settlements, and monasteries of the Tibetans in exile. To this end, I have traveled to Kathmandu, in Nepal; to Ladakh, in Jammu and Kashmir; and to Bir and Dharamsala, in Himachal Pradesh, India. Most of the Tibetan refugees live in India and Nepal, and among them are many Buddhist masters, who were urged to leave Tibet to save their lives and thus the precious teachings following the Chinese invasion in 1959. In January this year I visited Clement Town, a Tibetan settlement in Dehradun, Uttarakhand, India. Clement Town is home to the re-established Mindrolling Monastery (1965), one of the six main monasteries of the Nyingma tradition in Tibet and among the greatest Buddhist centers of this lineage remaining today. An unbroken lineage of great masters has been maintained at Mindrolling, continuing up until the present day. Mindrolling Monastery in Clement Town is a serene oasis, not in the ordinary sense of a place that makes you feel good, like a nice beach, but an oasis in the sense of a place of purity. It is our mind that makes and shapes our reality, and at Mindrolling, universal responsibility is taken very seriously in that a great many nuns, monks, and lay practitioners undergo serious mind training here with the intention of bringing order to the world. The monastery is thus an inspiring example of the practice of the pure and profound Dharma of Vajrayana Buddhism. It is also home to Ngagyur Nyingma College, where monks receive advanced training in order to preserve the unbroken lineage of teachings and pass them on to the next generation of practitioners. A unique symbol of this pure intention is the World Peace Stupa, which stands within the monastery compound. One of the largest in the world, the stupa is 185 feet high and 100 feet square. It is said that through seeing a stupa, one can attain liberation in just one lifetime. Like most such sayings this one is multilayered, but can be understood to mean that by seeing the beauty of a stupa either in person or in an image, one can be inspired to set foot on the path to liberation.
The 11th Mindrolling Trichen (1930–2008)
And beautiful this stupa is indeed! Inaugurated in 2002, it is unique in that one can access the interior. This contains five shrine rooms: to Padmasambhava, or Guru Rinpoche, who brought Buddhism to Tibet in the 8th century; to the historical Buddha, Shakyamuni; to the Ati Zabdön Nyingpo revelations of Terdag Lingpa (1646–1714), the original founder of Mindrolling; to the Mindrolling lineage; and to the Dzogchen teachings, the highest teachings in the Nyingma lineage of Buddhism. These shrines are thus authentic examples of Tibetan Buddhist history and philosophy, and their artwork is superlative. The World Peace Stupa in itself is a treasure of Buddhist architecture. A stupa is rich in symbolic meaning: it is said to represent the physical form of the Buddha as well as his journey to enlightenment, starting from the base and culminating in the jewel at the apex, which represents his awakening. Likewise, our own awakening is enabled through the triumph of discriminating wisdom over ignorance, the root of suffering. Hence the stupa represents both the motivation and the path to achieve a higher purpose of life, beyond competition and struggle, clinging and obsession, and helps orient our mind toward a freer, less ego-driven state.
The details of the stupa's symbolism are of course much more complex than this, indeed, as complex as the investigation, study, and eventual liberation of our own mind. In this regard, the design of this stupa with its interior shrine rooms is especially remarkable. Contained within the shrine rooms is the entire "cosmos" of the mind, represented by the diverse symbolic accoutrements of Tibetan Buddhism: statues, wall paintings, a three-dimensional mandala, and thangkas . . . in brief, the qualities these exemplify inspire us to work towards cultivating these same qualities within ourselves. The World Peace Stupa is thus a place of Dharma, teachings, empowerments, blessings, and prayer. It opens the door to the truth for all who want to hear it. When one exits the stupa and turns to look at it again one is struck by its grace and majesty, which impart a sense of magic. And both beauty and magic are qualities of love, which, again, represents the awakened mind. This love generates a deep gratitude for the noble intent of Mindrolling Monastery and for the beauty of life itself. It could not be better expressed than in the words of the female master Mindrolling Jetsün Khandro Rinpoche, the daughter of the 11th Mindrolling Trichen (1930–2008) and a holder of the lineage of Jetsunmas descending from Terdag Lingpa's daughter herself: "Gratitude spreads happiness. So open your eyes and look around the richness of creation around you. You are here now today. You are amidst this wondrous display. In silence and grace spend a moment to feel the interconnectedness between yourself and the entire universe. And whenever you experience this grace flow within you, you will find gratitude."*
* Her Eminence Mindrolling Jetsün Khandro Rinpoche, Facebook
Victoria Knobloch is a member of Dharma Eye—The Buddhist Photographer Collective. To learn more about Dharma Eye and Victoria's work as a photographer, visit Dharma Eye's website.
https://www.buddhistdoor.net/features/mindrolling-monastery-and-the-world-peace-stupa
Official recognition is also given the Jonang lineage, once thought to have been entirely absorbed by the Kadampa. Also, since the ancient pre-Buddhist Bon (pron. beun) tradition has been greatly influenced by Buddhism over several hundred years, it is sometimes included as a fifth "school." In Buddhism generally, people do not actively engage in trying to convince others to abandon one denomination for another. Also, it is considered a serious breach of ethics to disparage another's affiliations. There are also strong ties linking the various denominations. For example, Je Tsongkhapa, the great reformer of Kadampa who founded the Gelugpa denomination, had many connections with the Kagyu lineage. He took layman's vows of renunciation from the 4th Karmapa, who prophesied that Tsongkhapa would glorify the Buddha's teachings. When he first began his quest for Dharma, he stopped at the Drikung Kagyu Monastery where he studied the works of Kyobpa Jigten Samgon. The Nyingmapa (the elders; -pa means man or person) are the oldest denomination, whose tradition is said to be unbroken, having originated with Padma-sambhava, called Guru Rinpoche. The Book of the Dead, called in Tibetan Bardo Teudol, is a Nyingma text. These lamas may be celibate or married. One well-known Nyingma lama was the late Dilgo Kyentse. Another was Dudjom Rinpoche. He was succeeded by Chagdud Tulku. Read about Jacob Leschly's experiences with famous Nyingmapas. The Nyingmapa, Sogyal Rinpoche, is the lama whose teachings are found in the popular Book of Living and Dying, and who founded the Rigpa organization. The Sakya, and also the Kagyu, date from the early 11th century. That was a time of renewal when the Bengali teacher Atisha arrived to help Buddhist Tibetans who had suffered a period of repression. Sometimes people distinguish among the denominations with reference to the lamas' hats. The headdress of prominent Sakya lamas superficially resembles a kind of turban. In celibate Sakya lineages the tradition descends from uncle to nephew, but that practice is also found in some of the other groups. The reformer Tsongkhapa established the Gelugpa order from the Kadampa, a Nyingma sect. He imposed celibacy as one of this denomination's requirements. The Dalai Lama is the leading public figure of this denomination. He is not the head of the order (the abbot of Ganden occupies that position) but he is certainly the most widely recognized Tibetan public figure. The Panchen Lama is another pre-eminent Gelugpa leader. Chokyi Nyima is currently being concealed or held in detention by the government of the People's Republic of China, which has designated its own Panchen. Geshe Kelsang Gyatso holds a view on an essential matter that is in contradiction with that of both the Ganden Tripa, who is the actual traditional head of the Gelug denomination, and the Dalai Lama, who is the most famous and beloved. He has established a New Kadampa sect, but it cannot be called a tradition. The Tibetan syllable that we write as Ka means oral transmission (here, of the words of Buddha). It means that the teachings of Buddha are transmitted directly from one person to another by word of mouth. Gyu means lineage. Therefore the main characteristic of this denomination is that it is an unbroken oral transmission of Buddha's teaching. The Tibetan word can also appear as Kar.gyud. Then it signifies "white lineage," and that is how Chinese people refer to it, but this is not the original sense.
The Kagyu denomination is the lineage of Gampopa, the student of the Tibetan yogi Milarepa, who is venerated by all Tibetan sects. His teacher, Marpa, was one of the intrepid voyagers who, in the 11th century, traveled a number of times to India to receive authentic teachings from Naropa and other great masters. It is the Kagyu denomination that established the custom of searching for reincarnations of deceased masters based upon the predictions of the established teacher him- or herself. Those people, usually children when they are found, are called tulkus. The Kagyu are also famous for the Black Hat ceremony performed by the head of the order, the Karmapa, and for their reputation as masters of so-called magical arts such as long distance striding and the generation of internal heat (tummo), as well as the manufacture of special pills with unusually beneficial qualities. Since the Karma Kagyu is in the direct line of oral and written transmission from Gampopa, whose name refers to his native province, Kham, in East Tibet, many Kagyu lamas are from this region and so the pronunciation of the liturgy is with this Tibetan accent. The designation, Karma, refers to the fact that this is a practice lineage, and also that the Karmapa is a bodhisattva who is active in the world. In the 19th century, there was a trend popularized by Jamgon Kongtrul Lodro Thaye (1813-1899), a Kagyu leader. Ecumenical in nature, it allows people to follow more than one tradition. Rimay (ris-med, following the Tibetan spelling) has broadened perspectives and probably contributes much to the solidarity of Tibetans and of Buddhists in general. Another great Rimay leader was Jamyang Khyentse Wangpo. Chojor Lingpa is also considered one.
Jamgon Mipham's "Satirical Advice on the Four Schools"
Many lamas teach and practice more than one denomination, and some also hold Bon traditions at the same time. Also, many Western teachers are holders of the teachings of more than one Buddhist path. What is considered important is not to mix and confuse lineages in the minds of students.
Ven. Deshung Rinpoche (1983) & the denomination superiority complex.
man: Women play an important role in Buddhism, particularly in its Tibetan expression. -mo added to a name indicates a female, so it is possible to use it as a suffix instead of -pa, but, like most other languages, Tibetan generally uses the male term as all-inclusive.
Tibetan Book of the Dead: Like ancient Egyptian funerary texts, there is not really one single ancient book, but many different oral traditions that were later written down. However, they are all variants of some fundamental views, beliefs and descriptions.
Tripa: The current Ganden abbot or Tripa is 100th in the line of the supreme spiritual authorities of the Gelugpas. He is 74 years old and his seat, outside Tibet, is at the Drepung Monastery in South India.
http://khandro.net/TibBud_denominations.htm
One of my first experiences with lineage was in 1997, in New York City, at the end of a three day teaching on the Perfection of Wisdom Sutras and Buddhist Refuge with His Holiness the Dalai Lama. At the end of the teachings, His Holiness and an entourage of about twelve Tibetan monks were packing up to leave. I remember remaining in the theater, watching them until the last, and walking outside to see them exit the stage doors with their belongings, get into cars, and drive away. One of them saw me watching and stopped, looked deeply and kindly at me, and smiled. Then he continued loading up the cars. As they pulled away from the theater, I was crying. I remember a distinct, powerful, and quite surprising longing that arose as foremost in my mind and heart at that time: I wanted so much to go with them. Actually, I felt as if I should be with them. This was quite disorienting—like that feeling you get when you make a wrong turn somewhere and suddenly realize that you need to go back and turn left instead of right. The longing was visceral and physical, and completely illogical. I was 24 years old, I had just graduated university and returned to New York from living abroad, and I was working as a bartender in Manhattan trying to figure out my next move. In retrospect, this was an early experience with a connection to a Buddhist lineage. Formally, a Buddhist lineage is a line of transmission of the Buddhist teaching that is traced back generations to Buddha himself. There is a great deal of conversation, debate, and discussion about lineage in Buddhism, and in Tibetan Buddhism it is particularly important. The Tibetans pride themselves on having maintained unbroken lineage transmissions of many teachings and practices directly from enlightened beings, a reality that, in the minds of the Tibetan scholars and practitioners, ensures the authority and authenticity of the tradition. The particulars of the validity of this position are many and are far beyond the scope of this piece; instead, here I would like to share a little bit of my personal experience with lineage, as a modern student of Tibetan Buddhism in contemporary America. I think it is quite possible to find formal, meaningful, deep historical and philosophical views on the issue from many reliable sources, so I will focus on speaking mostly from my own practical experience. The first time I went to visit my heart teacher, Yangsi Rinpoche, at Deer Park Monastery (after having met him in Europe the summer before and immediately deciding to move to Madison, Wisconsin, where he was living, to study with him), I remember walking around the stupa with Rinpoche at Deer Park, talking about something or other, and having a strong feeling of being embraced, and cherished. It was almost as if I were being physically held, although no one was touching me at all. “This is what it feels like to really be cared for,” I thought. Over the years, I have watched this same sense of love and care come alive between Yangsi Rinpoche and his own teachers. These relationships have been among the most extraordinary teachings of my life. An American scholar, Nel Noddings, writes about ethics and care theory in education and communities of care in schools. Her work focuses on the significance of caring and relationship as both a goal and a foundation of education. 
She centers caring rooted in awareness of relationship as the foundation of effective pedagogy, and stipulates that it is the caring itself that acts as the primary condition for the development of ethical behavior in the student. Although I have never heard of a formal philosophy of care from a Tibetan teacher, in my experience, a similar dynamic of caring is central to the concept of lineage in the tradition. Some contemporary Buddhist scholars speak of the way that Tibetan Buddhist depictions of Buddhas and lineage masters embody strikingly loving facial expressions, and I have heard my own teachers speak of the way that a loving mentor can make the difference between success and failure in the learning, and especially in the behavior, of a young Tibetan monk. According to Noddings and others, the caring that a teacher demonstrates for a student functions as a catalyst for their own sense of moral awareness to arise. In Noddings’ assessment, it is the responsibility of the carer to care for the part of the student that cares for others. On that basis, she asserts, the students learn how to become people who care themselves. When we care for others and are cared for, says Noddings, “perhaps the first thing we discover about ourselves is that we are receptive; we are attentive in a special way.” This is the beginning of becoming a caring individual. From Noddings’ perspective, the caring is the fire of the moral individual, who wishes to engage with others in the world in a way that helps them. From my experience of the Tibetan Buddhist perspective, this is exactly the role that lineage holds in the education of the student. The lineage of teachers stretching all the way back to Buddha is recited formally by young students in monasteries on a daily basis before they begin their studies. This piece is so significant in these communities that in order to even enter and take one’s place as a student on the debating courtyard in Sera Monastery, the students must memorize the names of the lineage masters and recite them for the Abbot without mistake. Although the majority religious traditions in the West lack an indigenous invocation of lineage as generational interpersonal transmission, in most Buddhist communities in the western world, there is an attempt to integrate the recitation of lineage, at least in the context of major ceremonies and teachings, into our spiritual practices. The purpose of such recitation in the Buddhist tradition, I have been told, is to connect the student of this moment, of this time, with the unbroken succession of other minds that have also sought to learn, understand, and embody the teachings of Buddha. I have not seen a logical proof of the significance of lineage in learning Buddhism yet (although I have no doubt that one exists somewhere), but, relying on confidence in a way of knowing that is mostly unfamiliar to people from a modern North American background, it is safe to say that the opening of the mind with the intention to connect, offer respect, and appeal for inspiration to those who have come before does result in chinlab, which is usually translated as “blessing.” Interestingly, according to my teachers, the actual meaning of chinlab is “transformation,” which is exactly what occurs. The lineage prayer, as it exists in the Tibetan tradition, is composed mostly of a series of names, with the words, “I offer homage to…” followed by the teacher’s name. 
The tunes of the chants are extremely beautiful, sweet, and longing, and upon hearing or singing such melodies, I myself have felt my chest area physically expanding. The feeling evoked is of almost unbearable love. In my own experience, when I meet the gaze of a finely-crafted image of an enlightened being or lineage master, or when I look into the eyes of one of my own precious teachers, this is the feeling that is most apparent—an overwhelming sense of love, care, and cherishing. In the back of my mind is the awareness that I am connecting to a shared aspiration that has been cultivated with strong determination, with great personal sacrifice, and in spite of many obstacles, over generations and centuries, to seek happiness for every living being, without exception. I know that this intention has been held in the minds of so many, for so long, that it has been protected, nurtured, and cultivated, and that, due to the incredible love of these individuals and their vast sense of care for everyone around them, I have the fortune to receive these instructions and practice myself, and in turn learn to care more completely for others. Like Nel Noddings asserts, the sense of being cared for inspires me to increase my kindness, my compassion, my sense of help over harm, and my determination to act in the world on that basis. Perhaps this is the essence of chinlab. Perhaps this speaks to the actual meaning of the word, transformation, which leads to action. In any case, to close, with a mind of loving attention and an open heart, I offer homage to the lineage teachers.
https://maitripa.org/lineage-love-transformation-action/
Biography of His Eminence Chonyi Gyamtso Rinpoche
His Eminence the 14th Chonyi Gyamtso Rinpoche was born to a prominent Gelukpa family in 1978, in Ninglang county, Lijiang, Yunnan, China. He received a conventional education and attended Kunming Medical School in 1996. However, he renounced worldly life in 1997. He was recognized by HH Chamgon Kenting Tai Situ Rinpoche as the reincarnation of the 14th Orgyen Lungzin Chonyi Gyamtso Rinpoche, and was later acknowledged by the lineage masters and the State Administration of Religious Affairs. Afterward, he did extensive study and retreat in the Kagyu tradition for many years. On October 12, 2008, he was enthroned at his main seat, Kang-Pu-Shou-Guo Monastery (Shou-Guo Monastery, the national temple of well-being and long life), in Weixi, Yunnan. Thousands of monks and tens of thousands of devotees attended the ceremony. The lineage of Kyabje Chonyi Gyamtso Rinpoche holds the traditions of both Dzogchen and Mahamudra. The first Tulku, Chödpa Lugku (the lamb who practices Chod), was a disciple of Machig Labdrön, the 11th-century yogini who established the Chod lineage in Tibet. The fifth to the ninth reincarnations practiced within the Dzogchen Nyingma Lungzin tradition. HH Karmapa and HH Chamgon Kenting Tai Situ Rinpoche invited the 9th Chonyi Gyamtso Rinpoche, who was at Gatuo Monastery at that time, to be the abbot of Shou-Guo Monastery, the foremost Kagyu monastery in northwest Yunnan, which has been his main seat up to now. HE Chonyi Gyamtso Rinpoche's root guru is HH Chamgon Kenting Tai Situ Rinpoche. He has also followed the revered and accomplished 82-year-old Lama Garshi, who is the heart son of the great master Dzogchen Garnou from Kham, to practice the Six Dharmas of Naropa. As to the teachings of the Geluk tradition, Rinpoche has relied on Traga Rinpoche, who is the tulku of Litang Monastery in Kham and is now over 90 years old. He has studied and practiced the Nyingma tradition with the great accomplished master Chatral Rinpoche. After his enthronement in 2008, Kyabje Chonyi Gyamtso Rinpoche prostrated to the pilgrimage site of the four sacred mountains in mainland China three times, and to Jizu Mountain six times. On August 8, 2009, he led more than 100 disciples from Taizi Snow Mountain in Yunnan and prostrated to the Jokhang Temple over more than 2,100 kilometers in 11 months, an unprecedented event. While keeping to traditional Dharma topics, Rinpoche uses modern colloquial language, examples from daily life, and humor in his own refreshing presentation of the Dharma. Presenting Mahamudra as the ground, Mahamadhyamika as the path, and Maha-Ati as the fruition, he discusses the essentials of the Buddhadharma, communicating the profound points of the doctrine in a simple and easy-to-understand manner.
https://palpungny.org/biography-of-chonyi-gyamtso-rinpoche/
The Changling Tulkus
Changling Rinpoche teaches Lotus Speech students practices drawn from several Nyingma and one Kagyu lineage of Tibetan Buddhism. Starting with the profound foundation teachings and practice, students practice progressively according to advice and teachings given to them by Changling Rinpoche and their practice inclinations.
The Changling Tulkus and the Rechung Kagyu Lineage
Tsang Nyon Heruka was from Rechungpa's lineage of students and is known as an emanation of Rechungpa. He was the Kagyu master who collected Milarepa's life story. Tsang Nyon Heruka's second purpose was to restore the Rechungpa teachings of the formless dakini. This is a strict teaching lineage – the lineage holders do not give public teachings but teach only a few select students. Tsang Nyon Heruka had many students, but his heart student was Gotsang Repa Natsok Rangdrol. Gotsang Repa in turn had many students – of his two main students, one was Gothukpa Sangye Dorje. He is regarded as the incarnation of Tsang Nyon Heruka and was the first Changling Rinpoche, whom Tibetans from Tsang called 'Lama Rechungpa.' This first Changling Rinpoche and the first Dalai Lama were contemporaries. Some subsequent Changling incarnations died very young. There have been fifteen incarnations altogether. The Changling tulkus are regarded as the lineage holders of the Rechung Kagyu. The previous Changling Rinpoche wrote many commentaries on the Rechung Kagyu teachings – even the renowned Jamyang Khyentse Wangpo came to his monastery to receive the Rechung Kagyu lineage from him. The fourteenth Changling Rinpoche became more involved in the Nyingma tradition and thus had two types of students: Kagyu and Nyingma. The fourteenth Changling incarnation died at around 50 years of age. When the fourteenth Changling Rinpoche passed away, there were two incarnations. One is the present Changling Rinpoche at Shechen Monastery in Nepal and the other is still in Tibet. There is still a Kagyu group and a Nyingma group of students: the current Changling Rinpoche was brought up by the Nyingma group. In Tibet there were two main seats of the Rechungpa lineage. One was Rechung Phug and the other was Changling. Changling is in the Shigatze district, between Shigatze and Sakya. Sakya Ngor monastery and Changling monastery are separated by one big mountain. The Kagyu practiced in this lineage is the Rechung Kagyu tradition. Until now this has not been taught in any western country.
The practice of the Rechung Lineage
The Rechungpa lineage is a strict and subtle lineage. It has long been a secret lineage. Like his predecessors, Changling Rinpoche has started teaching the Rechung lineage practices to interested students. These Rechungpa teachings are rare in the East and West. In June 2009 Changling Rinpoche taught from the Rechung lineage at Rechung Gar, a practice retreat held on Vancouver Island, British Columbia. This Rechung Gar retreat was the first time the Rechung lineage has been taught in the West. Changling Rinpoche's teachers requested him to do this. Before Changling came to give these teachings, his monastery did four days of feast offerings to dakinis and others to get permission to give these teachings openly. Changling Rinpoche received these teachings from Shechen Rinpoche, who received them from Changling's previous reincarnation. Read about the history of the Rechung Kagyu Lineage.
The Changling Tulkus and the Northern Treasures Lineage
Since the fifteenth century, the line of Changling Rinpoches has practiced and maintained the Northern Treasures Lineage of Buddhist teachings. The Northern Treasures were the last teachings given by Padmasambhava before he left Tibet. After giving them, he hid them for a future generation. In the late 14th century CE, Tulku Zangpo Drakpa found the famous Prayer in Seven Chapters. This he gave to Rigdzin Godem, for whom it was the key to finding the main body of teachings. Later branches were recovered by Tennyi Lingpa (15th c.) and Garwang Dorje (17th c.). Among its many famous teachings is the Gonpa Sangtal, one of the most sublime works on The Great Perfection among Padmasambhava's vast teachings. Decades ago, when the Chinese sought to occupy Tibet, many fled their homeland. These precious Treasures were dispersed and dwindled. Until now. The Northern Treasures Buddhist Fellowship is a newly created nonprofit organization dedicated to the preservation and practice of the treasures. The texts are being collected and translated. The oral teachings are being gathered, preserved and offered as courses. Read more about the Northern Treasures Text Archives project.
Rigdzin Godem and his successors
Rigdzin Godem withdrew the Northern Treasures from concealment in 1366 CE. Rigdzin Godem's son Namgyel Gompo, his student Gompo Dorje, and his consort were his main students. From these flowed out three streams of teachings and practice, until the time of the fourth Rigdzin Godem tulku, Pema Trinley, who united these into one stream of practice. Before Dordrak Rigdzin Chenpo Pema Trinley united the three main practice traditions, the Changling Tulkus were head of the branch which originated from Rigdzin Godem's wife. The eleventh and twelfth Changling Rinpoches engaged extensively in the Nyingma Northern Treasure practices and established the Northern Treasure tradition in Changling monastery. The lineage carried on by Shechen Changchub Ling and Changling Rinpoche is the Northern Treasures tradition. Read the history of the Northern Treasures Lineage.
http://www.lotusspeech.ca/lineages/changling-tulkus/
"I have great confidence that Khenpo Tsering Dorje will benefit many beings as a spiritual guide and teacher. Therefore i request all concerned to revere him as a religious master and assure that anyone who have dharmic connection to Khenpo will be benefited in this and the life after." -- Kyabje Drubwang Penor Rinpoche ( Supreme Head Emeritus of the Nyingma Lineage of Tibetan Buddhism ) ____________ ____________ Khenpo Tsering Dorje's Seat in front of the Throne of Kyabje Penor Rinpoche with the assembled hosts of Nyingma Sangha in Taiwan. ____________ ____________ Demonstrating great compassion towards all living creatures since birth, steadfastly refusing to harm even an insect, Khenpo Tsering Dorje was sent by His mother to the late Dudjom Rinpoche for Buddhist training and ordination. Before Dudjom Rinpoche was about to leave for During Khenpo's studies in Namdroling, Khenpo topped His standard annually. In His 18 years with Kyabje Penor Rinpoche, Khenpo completely received all the Nyingma initiations and transmissions. Khenpo has also debated and given discourses in the presence of HH the Dalai Lama who expressed His approval and happiness. Khenpo Tserng Dorje is, in fact, the seniormost Khenpo who is the first batch of student to graduate from the main seat of the Nyingma Palyul Lineage in-exile, the Namdroling Monastery, in Khenpo was subsequently invited by the late Dilgo Khyentse Rinpoche to His main seat in-exile, Shechen Monastery, to teach, where He taught for 4 years to Khyentse Rinpoche's students. Khyentse Rinpoche showed His approval and delight for Khenpo's teachings which He felt brought great benefit to His students and the monastery. After His teaching at Khyentse Rinpoche's monastery, Khenpo was requested by Kyabje Penor Rinpoche to return to Whilst in Considered fully accomplished, attaining complete inseparability with Guru Padmasambhava Himself, Khenpo Achuk Rinpoche, with Khenpo Jigme Phuntsok ( one of HH the Dalai Lama's Nyingma Teacher ), is known as Tibet's 'sun and moon'. Khenpo Tsering Dorje is the repository of countless of Tibet's Dharmic treasures, having received them from the greatest luminaries of this century including HH the Dalai Lama, the late Dilgo Khyentse Rinpoche, the present Sakya Trizin, Kyabje Penor Rinpoche, the late Khenpo Jigme Phuntsok, Do Drupchen Rinpoche and other such realized Masters. Whilst a pillar and master of the Nyingma tradition, Khenpo is also a beacon of the 'Ri-May' or Non-Sectarian movement. He consistently, not least of all, through His personal example, taught His students to offer nothing but the highest devotion and respect to teachings of all lineages as they are all equal in taste the taste of Liberation, all the holy Dharma taught by the Buddha. Being from Pema Kod, HH the Dalai Lama and Kyabje Drubwang Penor Rinpoche have both graced Khenpo Tsering Dorje's monastery in the far-flung region of ____________ The Initiations of the Outer and Inner Guru Padmasambhava as well as Takhyung Barwa aby Khenpo Tsering Dorje will be held at Katong Student Hostel, Auditorium. Website: http://www.katongho For the INITIATIONS, registration is NOT required and NO commitments is required for students taking it as a blessing. ____________ Registration IS required for the teaching sessions on the Outer and Inner Guru Padmasambhava and Takhyung Barwa and the venue will be at Camden Education Center -- CAS's affiliated education center -- unless informed otherwise. The address of Camden Education Center is available at www.camden.edu.
http://www.casotac.com/CASonline%20Articles/23052007.html
https://pathgate.org/index.php
One of the three Heart Sons of HH Penor Rinpoche, the Fourth Gyangkhang Rinpoche was born in India. H.H. Penor Rinpoche, H.H. Dudjom Rinpoche and H.H. Dilgo Khyentse Rinpoche recognized him as the authentic incarnation of the Third Gyangkhang Rinpoche. When he was four years old, His Holiness Penor Rinpoche invited him to stay at Namdroling Monastery and thereafter looked after him as his own child. When he was 15, he received the esoteric teachings of Tantra, thus laying the foundation for the higher practices. He entered Ngagyur Nyingma Institute (Shedra), a branch of Namdroling Monastery and an advanced college of Tibetan Buddhism, where he studied general Buddhist Philosophy as well as the distinct Nyingma Teachings. He also studied Tibetan grammar, poetry, and Tibet’s political and religious history. In his fifth year at the institute, Gyangkhang Rinpoche displayed his scriptural knowledge by giving a lengthy discourse on Sangwa Nyingpo (Magical Net Tantra, a Mahayoga Teaching) in front of His Holiness the Fourteenth Dalai Lama and an assembly of thirty thousand people, both lay and ordained, from all the different religious traditions of Tibet. He also debated on this subject and emerged victorious. His Holiness the Dalai Lama praised his wisdom in front of all and offered him a khata, a traditional scarf, encouraging him to hold, sustain and disseminate the teachings of Lord Buddha for the benefit of all beings. In 1994, under the Bodhi Tree in Bodh Gaya, Gyangkhang Rinpoche received the vows of full ordination from His Holiness Penor Rinpoche. At that time, he was given the name Thupten Mawai Nyima Jigme Singye Chogle Nampar Gyalwai De. In 1995, he completed the nine-year course at the institute and, considering his high level of knowledge, Penor Rinpoche bestowed on him the title of “Tulku Khenpo” and recognized him as a worthy dharma teacher who can guide others towards the path of enlightenment. At present, Gyangkhang Rinpoche is the Abbot of Namdroling Monastery in South India.
https://palyulottawa.org/teachers/h-e-gyangkhang-khentrul-rinpoche/
World’s largest Dorje Shugden shrine

Dear friends from around the world,

Here at the beautiful Wisdom Hall in Kechara Forest Retreat stands the Enlightened World Peace Protector Dorje Shugden and his complete entourage of 32 assistants. This statue of Dorje Shugden stands at 25 feet (7.3 meters) in height, which makes this image the largest Dorje Shugden statue in the world. Surrounding Dorje Shugden, elevated on the left and right sides of the wall, are our holy lineage lamas, who are closely connected to us. The lineage lamas are extremely important as they bestow blessings and attainments on sincere practitioners. When we engage in tantric practice according to the liturgy, we have to recite the names of each lineage lama daily to invoke their blessings. According to Buddha Vajradhara, practitioners will not be successful in gaining any realisations or attainments from their practice without the root and lineage guru’s blessings. This is why we venerate, pay homage and call upon the holy lineage lamas in our prayers to bless us. Wisdom Hall is Kechara’s chapel to Dorje Shugden and was created as such. In the pictures below, you will see a statue of Buddha Shakyamuni. This statue has been placed here temporarily and will be moved to the main prayer hall when it has been built at the front of the Kechara Forest Retreat land. Here are some beautiful pictures of Dorje Shugden as well as our lineage lamas from many different angles, taken by our talented Pastor Loh Seng Piow, which I thought I’d share with everyone so you can rejoice.

Tsem Rinpoche

Wisdom Hall – The Facts
- Largest Dorje Shugden in the world
- Height: 25 feet (7.3 meters)
- Material: Copper
- Statue construction: Six months
- Contents: Holy texts, trillions of Buddha images, prayers, mantras
- Location: Wisdom Hall (the Dorje Shugden chapel at Kechara Forest Retreat, Malaysia)

Lineage Lamas on the left

Duldzin Drakpa Gyeltsen was one of the eight main disciples of Lama Tsongkhapa, founder of the Gelug lineage of Tibetan Buddhism. His name ‘Duldzin’ is the short form of ‘Dulwa Zinpa’, which translates as ‘Holder of the Vinaya’. This was because he was well known for his strong adherence to the Vinaya, or monastic vows. He made a promise to arise as an uncommon protector of Lama Tsongkhapa’s exposition of Nagarjuna’s Madhyamika (Middle Way philosophy). In a later incarnation he would arise as Dorje Shugden to fulfil this promise.

Tulku Drakpa Gyeltsen was a later incarnation of Duldzin Drakpa Gyeltsen. He was a high lama, a student of His Holiness the 4th Panchen Lama, Panchen Lobsang Chokyi Gyeltsen, and a contemporary of His Holiness the 5th Dalai Lama. Reminded by the worldly Dharma protector Nechung of his earlier promise, he fulfilled it and arose as Dorje Shugden at the moment of his passing.

His Holiness the 10th Panchen Lama was a great practitioner of Dorje Shugden from Tashi Lhunpo Monastery and a contemporary of the current 14th Dalai Lama. He composed prayer texts to Dorje Shugden as contained within his Sungbum, or collected works. His line of incarnations stems back to the time of Buddha Shakyamuni, and his incarnations are considered to be emanations of Amitabha.

His Holiness Trijang Rinpoche was the junior tutor of His Holiness the 14th Dalai Lama, and as such he is either directly or indirectly the guru of all the High Lamas within the Gelug lineage of Tibetan Buddhism. His line of incarnations stems back to Chandra, the charioteer of Buddha Shakyamuni.
He is known for his composition of the text entitled ‘Music Delighting the Ocean of Protectors’, which details the complete history, practices and rituals of Dorje Shugden.

His Holiness Zong Rinpoche was a student of His Holiness Trijang Rinpoche and a great practitioner of Dorje Shugden. Known for his mastery of Buddhism, he was an abbot of Gaden Shartse Monastery in Tibet and the root guru of His Eminence the 25th Tsem Rinpoche. He was a great lineage master renowned for his debating ability. It is said that through his skill in debating he could even convince a person that the colour black was white, or the other way around.

Lineage Lamas on the right

Panchen Sonam Drakpa was an illustrious high lama and a previous incarnation of Dorje Shugden. He was also a student of the 2nd Dalai Lama, Gendun Gyatso, and the guru of the 3rd Dalai Lama, Sonam Gyatso. He served as Abbot of Gyuto Tantric College and of Ganden, Sera and Drepung Monasteries. He also served as the 15th Gaden Throneholder, head of the Gelugpa lineage. No other lama has been able to repeat the same feat. Monks at Drepung Loseling and Gaden Shartse still study his textbooks for their Geshe examinations.

His Holiness Tagphu Dorje Chang (a.k.a. Tagphu Pemavajra) was a guru of Kyabje Pabongka Rinpoche. He is known to have had pure visions of the Buddhas and the ability to astral-travel to pure lands like Tushita. As requested by Kyabje Pabongka Rinpoche, Tagphu Dorje Chang ascended to Tushita and acquired the life-entrustment initiation and teachings of Dorje Shugden from Lama Tsongkhapa and Duldzin Drakpa Gyeltsen. He then descended and passed the initiation and teachings to Kyabje Pabongka Rinpoche, who proliferated the practice.

His Holiness Pabongka Rinpoche was one of the most influential Gelug lamas of the 20th Century. He taught extensively on the Lamrim, the stages-of-the-path-to-enlightenment teachings. He was also known for proliferating the Vajrayogini and Dorje Shugden practices. In fact, he wrote a whole new Vajrayogini sadhana by combining elements from the Sakya and Gelug traditions and also compiled a Dorje Shugden fulfillment text called ‘Melodious Drum Victorious In All Directions’.

Kensur Jampa Yeshe Rinpoche was an erudite scholar and master who was awarded the Lharampa Geshe degree (equivalent to a PhD in Buddhist studies) in the presence of the Dalai Lama and many other high lamas and Geshes. He was then appointed Abbot of Gaden Shartse Monastery for a period of seven years. His Eminence Tsem Rinpoche met this lama and served as his attendant before receiving his formal recognition as a Rinpoche. He was widely respected as a selfless bodhisattva.

His Eminence Gangchen Rinpoche is widely believed to be the emanation of the Medicine Buddha, and his line of incarnations includes the mahasiddha Darikapa. He is well known for his powerful clairvoyance and unconventional mahasiddha-like behavior. Gangchen Rinpoche was said to have spontaneously recognized Tsem Rinpoche upon meeting him in Nepal, which led to his formal recognition in the monastery. Gangchen Rinpoche has also given much personal advice and teachings to Tsem Rinpoche.

Visit Us!

Based in the heart of Bentong, Pahang, Kechara Forest Retreat (KFR) is a 35-acre retreat centre like no other. Set in the midst of a lush Malaysian tropical forest, it is a spiritual sanctuary where you can develop a perfect balance of total wellness.
A haven away from the hustle and bustle of modern city life, KFR offers comfortable accommodation, extensive facilities and the promise of peace and tranquillity. Conceptualised by His Eminence the 25th Tsem Rinpoche, KFR is dedicated to a sustainable and spiritual lifestyle that leaves the mind and body rejuvenated. For those who are interested, KFR is a place of spiritual practice and inner development, following the tradition of Lama Tsongkhapa. It is at KFR that one can discover inner peace, a perfect getaway to discover oneself and gain inspiration.

Click HERE for more information about Kechara Forest Retreat

Lama Tsongkhapa’s mantra at Kechara Forest Retreat
Or view the video on the server at: https://video.tsemtulku.com/videos/LamaTsongkhapaMigtsema.mp4

The video above contains a chant of the Migtsema mantra of Lama Tsongkhapa. The origin of the five-line Migtsema mantra bears testament to Lama Tsongkhapa’s pure guru devotion towards his lama, Rendawa Zhonu Lodro, a Sakya master. The Buddha of Wisdom, Manjushri, first uttered the praise, and Lama Tsongkhapa dedicated the verse to his lama. However, his lama changed the words around and rededicated it back to Lama Tsongkhapa. This verse of praise became known as the Migtsema mantra. The Migtsema mantra praises Lama Tsongkhapa as the embodiment of Manjushri, Avalokiteshvara and Vajrapani, who represent transcendent wisdom, compassion and skilful means. The Migtsema mantra contains the embodiment of the body, speech and mind of Lama Tsongkhapa in the form of sound. Therefore, when we recite the Migtsema mantra, the sacred reverberations of the mantra pervade our body and the surrounding area, spreading the blessings of the wisdom, compassion and skilful means of Lama Tsongkhapa.

World’s Largest Dorje Shugden in Kechara Forest Retreat
Or view the video on the server at: https://video.tsemtulku.com/videos/WorldsLargestDorjeShugdenInKecharaForestRetreat.mp4

The Beauty of Kechara Forest Retreat
Or view the video on the server at: https://video.tsemtulku.com/videos/TheBeautyOfKecharaForestRetreat.mp4

Manjushri Guest House @ Kechara Forest Retreat

Dream Manjushri Arrives @ Kechara Forest Retreat
Or view the video on the server at: https://video.tsemtulku.com/videos/DreamManjushriArrives.mp4

For more interesting information:
- The Tsongkhapa category on my blog
- The Dorje Shugden category on my blog
- Largest Dorje Shugden in the world
- 700 Meet A Buddha (七百人幸睹佛现)
- Wealth Box Puja
- Main Assistants of the Dharma King
- A good friend to have
- Sneak Peek of Kechara Forest Retreat
- 3rd Pastor Ordination at Kechara Forest Retreat!
- KFR in China Press
- KFR featured on National TV’s meditation programme

Please support us so that we can continue to bring you more Dharma:
https://www.tsemrinpoche.com/tsem-tulku-rinpoche/kechara-13-depts/worlds-largest-dorje-shugden-shrine.html
Main exposition from Swedenborg's works: The Inner Meaning of the Prophets and Psalms 139

Spiritual topics:

give – Like other common verbs, the meaning of "give" in the Bible is affected by context: who is giving what to whom? In general, though, giving...
hand – In Genesis 27:22, 'voice' relates to truth, and 'hand' to good.
High – 'Height' signifies what is inward, and also heaven.
strip – As in Micah 1:8, 'to be stripped' signifies being without goods, and 'to be naked' signifies being without truths.
clothes – 'To rend the garments' signifies mourning for truth lost or destroyed, or the loss of faith.
jewels – 'Jewels', when applied to the ears, signify good in act.
naked – 'Unclothed' signifies being deprived of the truths of faith.

husband – In general, men are driven by intellect and women by affections, and because of this men in the Bible generally represent knowledge and truth and women generally represent love and the desire for good. This generally carries over into marriage, where the man's growing knowledge and understanding and the woman's desire to be good and useful are a powerful combination. In many cases in the Bible, then, "husband" refers to things of truth and understanding, much as "man" does. Magnificent things can happen in a true marriage, though, when both partners are looking to the Lord. If a husband opens his heart to his wife, it's as though she can implant her loves inside him, transforming his intellectual urges into a love of growing wise. She in turn can grow in her love of that blooming wisdom, and use it for joy in their married life and in their caring for children and others in their life. Many couples, even in heaven, stay in that state – called "Spiritual" – growing deeper and deeper to eternity. There is the potential, though, for the couple to be transformed: through the nurturing love of his wife the husband can pass from a love of growing wise to an actual love of wisdom itself, and the wife can be transformed from the love of her husband's wisdom into the wisdom of that love – the actual expression of the love of the Lord they have built together. In that state – called "Celestial" – the husband represents love and the desire for good, and the wife represents truth and knowledge. Unfortunately it's hard to tell, when reading the Bible, which meaning of "husband" a particular passage has. For that we have to look to the Writings or to context for guidance. (References: Arcana Coelestia 1468, 3236, 4823)

give – Like other common verbs, the meaning of "give" in the Bible is affected by context: who is giving what to whom? In general, though, giving relates to the fact that the Lord provides us all with true teachings for our minds and desires for good in our hearts, and to the fact that we need to accept those gifts while acknowledging that they come from the Lord, and not from ourselves. One of the most common and significant uses of "give" in the Bible is the repeated statement that the Lord had given the land of Canaan to the people of Israel. This springs from the fact that Canaan represents heaven, and illustrates that the Lord created us all for heaven and will give us heaven if we will accept the gift.
https://newchristianbiblestudy.org/cs/multi/bible_king-james-version_ezekiel_16_39/explanation_husband/explanation_give
Lineage is the transmission of a living energy from teacher to student. Lineages exist in many traditions, notably martial arts, healing arts and spirituality. The lineage represents an unbroken chain through which accumulated knowledge is passed from generation to generation. A spiritual teacher is someone who, in almost every instance, has been the student of a real guru. As students, they have spent years studying and practicing the techniques and methods transmitted by their gurus. Through one-pointed focus, they have carefully cultivated a deep understanding of the highest power that dwells within. If such a person comes from a lineage and represents a respected tradition, then we have assurance that they have been well trained and are teaching from their genuine experience. Furthermore, if such a person has been empowered to teach by a guru, then it is because they have been recognized by their predecessors and peers to possess a profound inner gift.

BHAGAVAN NITYANANDA

The lineage held by Swami Chetanananda originated with Bhagavan Nityananda, one of the greatest Indian saints of the last century. Nityananda, whose name means "bliss of the eternal," lived in southwest India from around the turn of the 20th century until 1961. Details of his early life are difficult to verify, but from the 1920s until his passing, he was surrounded by an ever-increasing number of disciples and devotees. By the late 1930s he was established in Ganeshpuri in the countryside near Bombay (now Mumbai), where an active ashram developed around him. Nityananda would come into a small room in this ashram, which was lit by a few bare electric light bulbs, and sit there quietly with his eyes open. People would come from great distances to see him because, in India, the mere viewing of a spiritual teacher, called darshan, is considered a profound and important blessing. Nityananda would sit in this space with his eyes open, simply establishing a connection with each visitor according to his or her capacity to experience and sustain that contact. Nityananda was well known in the districts of Maharashtra and Karnataka, where he is revered to this day. In its essence, Nityananda's teaching is profoundly simple. Like the ancient sages of many traditions, he said that anyone who merges the individual into the universal is an enlightened person. To realize the universal nature of one's own individual consciousness is the goal of sadhana (spiritual practice). However, it is hard to describe Nityananda's greatness to most Westerners since his most profound achievements were internal. He never explicitly identified himself with a particular spiritual practice or tradition. In fact, he rarely spoke at all. The thousands of people who came to see him did so because in him they experienced the miracle of pure consciousness in human form. Such a holy person is called an avadhut. Timeless and eternal, the avadhut is a direct link to the absolute, encompassing all teachers who precede him and all who follow.

SWAMI RUDRANANDA (RUDI)

One of the thousands of disciples who made their way to Ganeshpuri in the late 1950s was an American named Albert Rudolph (“Rudi”). Born in Brooklyn, New York, Rudi had been actively pursuing his spiritual development from a young age. At age 30, he was at a crossroads in his life when an associate took him to meet Nityananda at his ashram in Ganeshpuri. Rudi wrote, "My first meeting, in India in 1958, with the great Indian saint Bhagavan Nityananda was of such depth that it changed the course of my life."
Rudi continued to study with Nityananda, and after Nityananda's mahasamadhi in 1961, traveled regularly to Ganeshpuri to visit his shrine and to study with Swami Muktananda. In 1966, Swami Muktananda initiated Rudi as a Swami into the Saraswati order, naming him Rudrananda, or "bliss of Rudra," a fiery and early aspect of the Hindu god Shiva. One of the first Americans to be recognized as a Swami, Rudi came back to the United States and established many ashrams across North America and Europe. Rudi was instrumental in exposing many Americans to the spirituality and rich cultures of the East. He had a deep respect and appreciation for these different spiritual and cultural traditions and saw a need for them to be presented in a way the West could comprehend. Though recognized as a Swami in India after many years of study, he was not as concerned with the form of Eastern tradition as he was with the content. Rudi saw the art and culture of Eastern spirituality as the symbol of something profound and universal, a truth that cut across all cultural boundaries. To that end, Rudi's teaching was direct and to the point, transmitting his profound understanding with a style that was uniquely his own. The foundation of Rudi's teachings was a deep personal wish to grow spiritually. Rudi talked about this wish to his students constantly. He described how a sincere wish to grow would lead to a deep and intense feeling which, as it matured in an individual, would evolve quite naturally into a deep love of God and of life. According to his teachings, a shift happens as this wish to grow is further transformed into a deep state of surrender. To grow spiritually, Rudi taught that we must live and work in the world from a deep internal state of surrender, without any exceptions. Rudi passed away in 1973. Before he died, he designated Swami Chetanananda as his successor. Swami Chetanananda established The Movement Center to carry on Rudi's work.

LAMA TSERING WANGDU RINPOCHE

Lama Tsering Wangdu Rinpoche was born in the village of Langkor in West Dingri, Tibet, near the Mt. Everest region, in 1936. He began studying with his guru, Napdra Rinpoche, at age 8. When Rinpoche was in his early twenties, Napdra Rinpoche sent him on a traditional Chöd retreat to 100 cremation grounds, where he had many profound experiences. Upon conclusion of the retreat, his teacher acknowledged his accomplishment in the practice and sent him on pilgrimage to Nepal, where he arrived in 1958. Rinpoche eventually made his way to the Kathmandu Valley, where he settled in the Tibetan refugee camp in Jawalakehl. He lived for many years among the refugees, and became one of the few pujaris (ritual practitioners) accessible to both the Tibetan refugee population and the local Nepalese community. He is well known in Kathmandu for his powers as a healer. You can view photos of Rinpoche in Kathmandu at the Nityananda Institute Nepal website. In addition to studying with Napdra Rinpoche, Rinpoche has received teachings and transmissions from many extraordinary lamas, including His Holiness the XIVth Dalai Lama, His Holiness Dudjom Rinpoche, His Eminence Surkhang Rinpoche, His Eminence Urgyen Tulku, and His Eminence Chatral Rinpoche. Rinpoche has received transmission of the entire Longchen Nyingthig teachings, an important tradition in the Nyingma school, and the complete teachings and practices of Padampa Sangye, a great Indian mahasiddha who is credited with bringing Chöd practice to Tibet.
Swami Chetanananda met Rinpoche during a trip to Nepal in 1997, and they both describe their first encounter as the meeting of two currents. They immediately recognized the authenticity of the other’s spiritual work and the complementary nature of their practices. Since then, they have spent extensive time together, sharing knowledge and experiences. In August 1999, Rinpoche made his first trip to the United States, travelling to Bloomington, Indiana to attend the Kalachakra initiation and spending several months at The Movement Center in Portland, Oregon. He has spent several months each year in Portland and has visited Los Angeles, Boston, and New York City. During his visits, he has practiced with The Movement Center’s students, given teachings and initiations, and worked on translations of texts from Padampa Sangye, Machig Labdron and from the Longchen Nyingthig tradition. Rinpoche is a master of the practice of Chöd. Chöd is an ancient tantric practice that teaches about the essence of sacrifice. It is traditionally performed in cremation grounds and other frightening places where emotional energy is intensified. Using a drum, bell and thighbone trumpet, the Chöd practitioner summons all harmful spirits and offers them a visualized feast consisting of the practitioner’s own body. Through Chöd, a practitioner learns to cut through attachment to appearances and come to understand the underlying unity of all things. In the hands of a practitioner such as Lama Wangdu, Chöd is also a powerful ritual for physical and mental healing and pacifying environmental disturbances. Rinpoche is believed to be the last person of his lineage to have completed the traditional training. In March 2003, Rinpoche attended teachings by His Holiness the XIV Dalai Lama in Dharamsala, India. During that visit, His Holiness told him that the Zhi-je teachings of Padampa Sangye, including practices for the pacification of suffering, were especially precious and relevant to contemporary conditions. His Holiness asked Rinpoche to establish a monastery in Kathmandu to continue these teachings in Nepal. The monastery, Pal Gyi Dingri Langkor Jangsem Kunga Ling, was officially consecrated in November, 2004. Swamiji was among the many VIPs in attendance. Venerable Trulshik Rinpoche, one of the most prominent lamas of the Nyingma school, traveled from Solu Khumbu in the Himalayas for the occasion. He honored Lama Wangdu Rinpoche by cutting the ribbon to signify the official opening of the monastery. Audio and video recordings of Rinpoche’s performances of the Chöd, Phowa and Kusali Tsok are available from Rudra Press. More information about Rinpoche and a complete biography can be found at Rinpoche’s website.
http://chetanananda.org/home/about/lineage/
The principal monastery of the Drugpa Kagyu Tradition (‘Brug-pa bKa’-brgyud) is Sang-ngag Choling (gSang-sngags chos-gling dGon-pa). It was founded in 1512 by the Third Drugchen Rinpoche, Jamyang Chokyi Dragpa (‘Brug-chen ‘Jam-dbyangs chos-kyi grags-pa) (1478-1522). According to some histories, however, it was established by his reincarnation, the Fourth Drugchen Rinpoche, Pemakarpo (‘Brug-chen Pad-ma dkar-po) (1527-1592). Jamyang Chokyi Dragpa was the son of Tashi Dargye (bKra-shis dar-rgyas), the Prince of Jar (Byar). From his childhood, Jamyang Chokyi Dragpa had studied with many great masters of the Drugpa Kagyu Tradition and had achieved a high level of tantric realization. When he decided to build a monastery, he asked his father for material assistance. Because of his father’s gracious generosity, the monastery was also known as Jar Tashi Tongyon (Byar bKra-shis mThong-yon), "The Gift from Tashi of Jar’s Esteem." It was started with a Shedra Teaching College (bShad-grva) of 200 monks. The Drugpa lineage is one of the eight minor Dagpo Kagyu Traditions (Dvags-po bKa-brgyud brgyud-chung brgyad) deriving from disciples of Pagmodrupa (Phag-mo gru-pa rDo-rje rgyal-po) (1110-1170). Pagmodrupa, in turn, was a great disciple of Gampopa (sGam-po-pa, Dvags-po Lha-rje bSod-nams rin-chen) (1079-1153). It was founded by Lingrepa Pema Dorje (gLing Ras-pa Pad-ma rdo-rje) (1128-1188) and his disciple Tsangpa Gyare Yeshe Dorje (gTsang-pa rGya-ras Ye-shes rdo-rje) (1161-1211). The line of Drugchen Rinpoches (‘Brug-chen Rin-po-che), reincarnations of Tsangpa Gyare, has been its traditional head. In 1205, Tsangpa Gyare had founded Namgyipur Monastery (gNam-gyi phur dGon-pa) in Kyime (sKyid-smad). At its opening, there were three extremely loud thunderclaps. Because of this, the monastery was given the popular name of Drug Gon (‘Brug dGon), or Thunder Monastery. "Drug" is Tibetan for "thunder." The Drugpa Tradition received its name from this monastery. Drugpa Kagyu has three divisions: Todrug (sTod-‘brug), Medrug (sMad-‘brug) and Bardrug (Bar-‘brug) – the Drugpa of Upper, Lower, and Middle Tibet, respectively. They derive from three disciples of Tsangpa Gyare. Sang-ngag Choling Monastery is from the Bardrug, the Middle Drugpa Tradition. Several lines developed within the Bardrug Tradition. The Lhodrug (Lho-‘brug) or Southern Drugpa lineage was begun by Ngawang Namgyal (Ngag-dbang rnam-rgyal) (1594-1651), the First Zhabdrung (Zhabs-drung) of Bhutan (‘Brug-yul). Zhabdrung Ngawang Namgyal was the reincarnation of the Fourth Drugchen Rinpoche, Pemakarpo. As there was another claimant to the throne of the Drugchen Rinpoche, Zhabdrung Ngawang Namgyal went into exile in Bhutan. He founded many monasteries there and politically unified the country. His reincarnations, the subsequent Zhabdrungs, became the spiritual and political rulers of Bhutan. The Tibetan name for Bhutan "Drug-yul" (‘Brug-yul), "Thunder Land," derives from the Drugpa Kagyu Tradition. Sang-ngag Choling Monastery is sometimes associated with the Southern Drugpa line. The monks at Sang-ngag Choling trained in the teachings that are common to all Dagpo Kagyu lineages and in those that are specific to the Drugpa Kagyu. Thus, like the other lineages deriving from Gampopa, they studied and practiced the Six Teachings of Naropa (Na-ro chos-drug, Six Yogas of Naropa). The special teachings of the Drugpa Kagyu are "Ro-nyom" (ro-snyoms), "The Equal Taste." 
They were hidden as a treasure teaching (gter-ma) by Rechungpa (Ras chung-pa rDo-rje grags-pa) (1083-1161) who, like Gampopa, was a disciple of Milarepa (Mi-la Ras-pa bZhad-pa rdo-rje) (1040-1123). The Equal Taste teachings were discovered and spread by Tsangpa Gyare. The other special teaching of the Drugpa Kagyu is "Tendrel" (rTen-‘brel), "The Dependent Arising" tradition. Prior to 1959, Sang-ngag Choling had over 400 monks. It has been the traditional seat of the Drugchen Rinpoches, who have served as its abbots. The present Twelfth Drugchen Rinpoche has reestablished Sang-ngag Choling in Darjeeling, West Bengal.
https://studybuddhism.com/en/advanced-studies/history-culture/monasteries-in-tibet/kagyu-monasteries-drug-sang-ngag-choling
Morchen Kunga Lhundrub is the epitome of non-sectarianism, known to have upheld and respected many lineages equally and without any problems. As a highly influential master of the Sakya tradition, he was also revered by the Gelugpas as a lineage master of Naropa’s Vajrayogini. Within his own sect, Morchen was revered as a lineage holder of the Sakya Path and Result. His early life was typical of great masters: he was recognised at a young age and ordained by the 28th Sakya Throne Holder Jamgon Amyeshab, who would later confer upon Morchen many initiations and transmissions. These included a long-life initiation, Rong Tsong’s six transmissions of the Perfect Wisdom and an initiation into Mahakala’s practice. As a young monk, Morchen travelled to Sakya, where he met with Padma Trinley. It was then that Morchen took his full ordination vows from this master who, coincidentally, had conducted a fire puja to burn Dorje Shugden at the request of the Fifth Dalai Lama. Although Padma Trinley was to be Morchen’s ordination master, Morchen was unable to receive Lamdre teachings from him – after receiving his ordination, Morchen fell seriously ill and was unable to recover in time. Thus, Morchen received these teachings from Kenrab Jampa and went on to become his heart disciple. Until his passing in 1728, Morchen worked tirelessly to spread the Dharma throughout Tibet. He was a model of non-sectarianism through his work. For example, he passed to his Gelug disciple Jamyang Dewa Dorje the transmission of Marpo Korsum, a Sakya practice which is part of the 13 Golden Dharmas. He was also abbot of many monasteries, including Mor, Rawa Mey and Tashi Chodey. Morchen bore a close relationship with Dorje Shugden, entrusting activities to the Dharmapala, who was happy to accept. He also gave initiations into this Protector’s practice at Trode Khangsar in Lhasa, which were received by the Gyalchen oracle. Also at Gaden Ling, Morchen performed a consecration of the Gyalchen Tenkhang. Not all of Morchen’s works are openly available. From what is available, however, we know that Morchen wrote a ritual for gyabshi, an obstacle-clearance puja composed by Shakyamuni Buddha himself. He also co-authored the lower volume of Petition to Dorje Shugden Tsel: Granting All Desired Activities, the upper volume having been composed by Drukpa Kunley of the Drukpa Kagyu sect. This text would become very central to the practice, used in prominent Dorje Shugden temples such as Trode Khangsar and Riwo Choeling, and also incorporated into rituals written by Serkong Dorje Chang centuries later. Morchen’s contribution to this seminal text was an expansion of the foundation laid by Drukpa Kunley, and included the ritual origins of Dorje Shugden, as well as what is probably the earliest iconographic description of Dorje Shugden and his four cardinal emanations. Morchen gave detailed descriptions of the activities of the four cardinal emanations – peaceful, increasing, control and wrathful – and wrote praises to them. His writings were so influential that to this day practitioners continue to rely on his descriptions when painting Gelug thangkas and performing rituals to Dorje Shugden. Prior to Morchen’s writings, Dorje Shugden was described as riding a horse, and the Sakyas had relied on that description when propitiating him. Morchen, however, described Dorje Shugden as being on a lion throne – this has since been the only description of the principal emanation in such a form.
According to Trijang Rinpoche, Morchen also wrote A Presentation of the King’s Three Activities. Copies of it, however, have not been found, and thus the work continues to exist in name only. Given the calibre of the works we do have access to, it is unfortunate that more of Morchen’s compositions are not available to us.

MORE ENLIGHTENED LAMAS:
- Drubwang Drukpa Kunley of the 17th Century (Dreuley lineage)
- Morchen Kunga Lhundrub (1654-1728)
- Lobsang Tamdin (1867 - 1937)

DORJE SHUGDEN CHAPEL (Lhasa, Tibet) – built by The Dalai Lama

View the original video on YouTube: http://www.youtube.com/watch?v=Ehr8ePFyWoY

In the 17th Century, the Fifth Dalai Lama had Trode Khangsar built in dedication to the Protector Dorje Shugden. The main image inside was also commissioned by the 5th Dalai Lama. By the end of the 17th Century, the Fifth Dalai Lama’s Regent Desi Sangye Gyatso entrusted Trode Khangsar to Riwo Choling, a Gelug Monastery. Today it is in full use and located behind the main Chapel of Jowo Buddha, or central Cathedral of Lhasa, just off the main circumambulation circuit, or barkor. Many pilgrims visit, and monks are available daily, performing pujas/ceremonies to Dorje Shugden. It is open to tourists. This chapel is over 350 years old and sits in the heart of Lhasa. The Dorje Shugden Chapel is an 8-minute walk from the Jokhang. More information on Trode Khangsar can be found in this book, pages 195-199. It is available on Amazon.com.

Book Details
- Hardcover: 336 pages
- Publisher: Serindia Publications; illustrated edition (November 15, 2005)
- Language: English

(from the front flap of this book) The Temples of Lhasa is a comprehensive survey of historic Buddhist sites in the Tibetan capital of Lhasa. The study is based on the Tibet Heritage Fund’s official five-year architectural conservation project in Tibet, during which the author and his team had unlimited access to the buildings studied. The documented sites span the entire known history of Tibetan Buddhist art and architecture from the 7th to the 21st centuries. The book is divided into thirteen chapters, covering all the major and minor temples in historic Lhasa. These include some of Tibet’s oldest and most revered sites, such as the Lhasa Tsuklakhang and Ramoche, as well as lesser-known but highly important sites such as the Jebumgang Lhakhang, Meru Dratsang, and Meru Nyingpa. It is illustrated with numerous color plates taken over a period of roughly fifteen years from the mid-1980s to today and is augmented with rare photographs and reproductions of Tibetan paintings. This book also provides detailed architectural drawings and maps made by the project. Each site has been completely surveyed, documented and analyzed. The history of each site has been written – often for the first time – based on source texts and survey results, as well as up-to-date technology such as carbon dating, dendrochronology, and satellite data. Tibetan source texts and oral accounts have also been used to reconstruct the original design of the sites. Matthew Akester has contributed translations of Tibetan source texts, including excerpts from the writings of the Fifth and Thirteenth Dalai Lamas. This documentation of Tibetan Buddhist temple buildings is the most detailed of its kind, and is the first professional study of some of Tibet’s most significant religious buildings. The comparative analysis of Tibetan Buddhist architecture covers thirteen centuries of architectural history in Tibet.
MORE GREAT MONASTERIES: - Gaden Tharpa Choling Monastery - Riwo Choeling Monastery at Lhoka (Shannan) Prefecture, Tibet - Magnificent Dorje Shugden chapel in Tashi Lhunpo Monastery, Tibet - DORJE SHUGDEN CHAPEL (Lhasa, Tibet) - built by The Dalai Lama - Trijang Rinpoche's Sampheling Monastery at Chatreng - World's largest Dorje Shugden statue in Gonsa Monastery, Kham - Dagom Gaden Tensung Ling Buddhist Monastery - Yangting Dechen Ling Monastery, Kham, Tibet - Trashi Chöling Hermitage - Serpom Thösam Norling Monastery, Bylakuppe, India (updated) - Dorje Shugden Monastery in Chakzamka, Riwoche, Tibet Kham Area - Phelgyeling Monastery - Dorje Shugden in Denma Gonsa Rinpoche's Monastery - Shar Gaden Monastery Lamrim teaching by Venerable Geshe Thupten - Shar Gaden Monastery, India - Monastery with Dorje Shugden in Nepal - Monastery with Dorje Shugden in Chamdo, Tibet - First Dorje Shugden Temple in Taiwan (Hualian) - Monastery with Dorje Shugden in Mongolia - Tritul Rinpoche's Temple in New Zealand Introduction to Dorje Shugden A Dharma Protector is an emanation of a Buddha or a Bodhisattva whose main functions are to avert the inner and outer obstacles that prevent practitioners from gaining spiritual realizations, and to arrange all the necessary conditions for their practice. In Tibet, every monastery had its own Dharma Protector, but the tradition did not begin in Tibet; the Mahayanists of ancient India also relied upon Dharma Protectors to eliminate hindrances and to fulfil their spiritual wishes. Though there are some worldly deities who are friendly towards Buddhism and who try to help practitioners, they are not real Dharma Protectors. Such worldly deities are able to increase the external wealth of practitioners and help them to succeed in their worldly activities, but they do not have the wisdom or the power to protect the development of Dharma within a practitioner’s mind. It is this inner Dharma – the experiences of great compassion, bodhichitta, the wisdom realizing emptiness, and so forth – that is most important and that needs to be protected; outer conditions are of secondary importance. Although their motivation is good, worldly deities lack wisdom and so sometimes the external help that they give actually interferes with the attainment of authentic Dharma realizations. If they have no Dharma realizations themselves, how can they be Dharma Protectors? It is clear therefore that all actual Dharma Protectors must be emanations of Buddhas or Bodhisattvas. These Protectors have great power to protect Buddhadharma and its practitioners, but the extent to which we receive help from them depends upon our faith and conviction in them. To receive their full protection, we must rely upon them with continuous, unwavering devotion. Buddhas have manifested in the form of various Dharma Protectors, such as Mahakala, Kalarupa, Kalindewi, and Dorje Shugden. From the time of Je Tsongkhapa until the first Panchen Lama, Losang Chökyi Gyaltsän, the principal Dharma Protector of Je Tsongkhapa’s lineage was Kalarupa. Later, however, it was felt by many high Lamas that Dorje Shugden had become the principal Dharma Protector of this tradition. There is no difference in the compassion, wisdom, or power of the various Dharma Protectors, but because of the karma of sentient beings, one particular Dharma Protector will have a greater opportunity to help Dharma practitioners at any one particular time. We can understand how this is so by considering the example of Buddha Shakyamuni. 
Previously the beings of this world had the karma to see Buddha Shakyamuni’s Supreme Emanation Body and to receive teachings directly from him. These days, however, we do not have such karma, and so Buddha appears to us in the form of our Spiritual Guide and helps us by giving teachings and leading us on spiritual paths. Thus, the form that Buddha’s help takes varies according to our changing karma, but its essential nature remains the same. Among all the Dharma Protectors, four-faced Mahakala, Kalarupa, and Dorje Shugden in particular have the same nature because they are all emanations of Manjushri. However, the beings of this present time have a stronger karmic link with Dorje Shugden than with the other Dharma Protectors. It was for this reason that Morchen Dorjechang Kunga Lhundrup, a very highly realized Master of the Sakya tradition, told his disciples, “Now is the time to rely upon Dorje Shugden.” He said this on many occasions to encourage his disciples to develop faith in the practice of Dorje Shugden. We too should heed his advice and take it to heart. He did not say that this is the time to rely upon other Dharma Protectors, but clearly stated that now is the time to rely upon Dorje Shugden. Many high Lamas of the Sakya tradition and many Sakya monasteries have relied sincerely upon Dorje Shugden. In recent years the person most responsible for propagating the practice of Dorje Shugden was the late Trijang Dorjechang, the root Guru of many Gelugpa practitioners, from humble novices to the highest Lamas. He encouraged all his disciples to rely upon Dorje Shugden and gave Dorje Shugden empowerments many times. Even in his old age, so as to prevent the practice of Dorje Shugden from degenerating, he wrote an extensive text entitled Symphony Delighting an Ocean of Conquerors, which is a commentary to Tagpo Kelsang Khädrub Rinpoche’s praise of Dorje Shugden called Infinite Aeons.
(Source: http://www.wisdombuddhadorjeshugden.org/dorjeshugden-about.php) MORE GREAT ARTICLES: - Ven Geshe Tenzin Dorje - Pilgrimage to India - Lamrim Shugden - Wisdom Buddha Dorje Shugden’s two functions - Excerpt from a speech delivered by His Eminence Dagpo Rinpoche in November 1996 - Great Prayer Festival at Shar Gaden - A Tribute to His Holiness Kyabje Trijang Rinpoche - The decision to surrender - Dorje Shugden at the mother monastery of Kyabje Zong Rinpoche - Emperors of China - Test Of Faith - Food Offering Prayers - The Line of Gaden Tripas - Validity of Oracles - Possession by Dorje Shugden - The Oracle: Reflections on Self - Dorje Shugden Enthroned by Chinese Emperor & the Dalai Lama - Interview with His Holiness the 101st Gaden Tripa Lungrik Namgyal - The Fifth Dalai Lama and Shunzhi Emperor of China - Mig-Tse-Ma chakra - Lama Tsongkhapa's tooth, bowl and Kedrup Je's Yamantaka statue - Lama Tsongkhapa’s mala, bell and hat - Lama Tsongkhapa's holy tooth relic - Daknak Rinpoche celebrates Lama Tsongkhapa Day in Taiwan - Drepung Monastery - History & Lineage of Dharmapala Dorje Shugden by Kyabje Zong Rinpoche - Long Life to Kyabje Yongyal Rinpoche & Domo Geshe Rinpoche - Kyabje Yongyal Rinpoche's first visit to Shar Gaden - At the Opening of Serpom Monastery - Zemey Rinpoche's stupa and ladrang - Trijang Rinpoche's statue, stupa and oil painting in Trijang Ladrang - Holy site of "Liberation in the Palm of your Hand" - Namka Barzin and Dorje Shugden images made/drew by Domo Geshe Rinpoche - Dorje Shugden at Zululand - A Healing and Wisdom Meditation of Dorje Shugden - Panglung Oracle & Chushi Gangdruk - Dharmapala Setrab Chen - Four Faced Mahakala - Kache Marpo - SAKYA THRONE HOLDERS: SONAM RINCHEN (1705-1741) & KUNGA LODRO (1729-1783) - THIS IS ONE VERSION ON THE ARISAL OF NAMKA BARZIN - The Summer Retreat - 2011 - The Geluk Exam - 2011 - Kyabjye Zong Rinpoche performing Fire Puja - Geshe Yeshe Wangchuk - Kyabje Yongyal Rinpoche - The Tradition of Oracles - Kangxi Emperor - Cultivation of Rice Fields - Nechung : The State Oracle of Tibet - The history and significance of the Dharmapalas - Dharma Protector Dorje Shugden - A Teaching Given By His Eminence Shenpen Dawa Rinpoche (on Nyingma Protector Shenpa) - A teaching on Dharmapalas, from a Kagyu perspective by Choje Lama Namse Rinpoche - Nagarjuna’s Life, Legend and Works - Dorje Shugden and Saint George - Brothers in Arms - A Sakya Tale - Spiritual Lineage - Pabongka Rinpoche and the Gelugpa Tradition - Introduction to Dorje Shugden - The Way to Rely Upon Dorje Shugden - Kyabje Ling Rinpoche and Dorje Shugden - Panchen & Shugden - Gelukpa Guru Tree, Updated by Kyabje Dagom Rinpoche - Famous Oracle of Dungkar Monastery - Guru Yidam Protector - The 'Library of Tibetan Works and Archives' (LTWA) Published Texts Authored by Dorje Shugden's Previous Incarnation - Comment on Karmapa's Statement - Universal Protector of Future Buddhism - The Dalai Lama's and Tibetan Buddhism's Way into Our World - What Gyalchen Dorje Shugden Wants - Famous Oracle of Dungkar Monastery - Powerful Protection Against Spirits or Black Magic Places to Worship Dorje Shugden July 29, 2009 by 008 Filed under Starter Kit | | Introduction and Lineage | | The Benefits of Dorje Shugden’s Practice Today, Dorje Shugden worship is available in several prominent places around the world. However, the birthplace of Dorje Shugden, Trode Khangsar, in Lhasa, Tibet is most world-renowned. 
It was predicted that the practice of Dorje Shugden would grow and become mainstream in the world. The progress towards the fulfilment of this prophecy is reflected in the growth of Dorje Shugden temples in various parts of the world. Among the many monasteries built in dedication to Dorje Shugden’s practice are those listed below.

Trode Khangsar
Trode Khangsar, in the heart of Lhasa, was the first official temple dedicated to the Protector Dorje Shugden. In the 17th century, the 5th Dalai Lama designated Trode Khangsar as a “Protector House” for Dorje Shugden. By the end of the 17th century, Trode Khangsar’s importance increased when Sangye Gyatso, the 5th Dalai Lama’s regent, entrusted it to the Gelugpa monastery Riwo Choling; this underlined the close bond between Dorje Shugden and the Gelugpa sect, the Tibetan government and Gaden Podrang.

Shar Gaden Monastery
Shar Gaden is located in Mundgod, South India, next to Gaden Monastery and 25 minutes away from Drepung. Currently, it is home to more than 750 Tulkus, Geshes, Masters and monks who keep the Dorje Shugden lineage alive. Eminent Lamas such as H.H. Kyabje Trijang Rinpoche and Domo Geshe Rinpoche have also joined Shar Gaden. In Shar Gaden Monastery, the practices, debates, pujas and teachings of Dorje Shugden, as well as other great lineages and practices like Tara and Medicine Buddha, live on.
Visit Shar Gaden’s website: http://shargadenpa.org/
Watch a video of Shar Gaden Monastery in India: http://dorjeshugden.com/wp/?p=3165

Sampheling Monastery
Sampheling Monastery is Trijang Rinpoche’s personal monastery, situated in Chatreng District in Kham, Tibet. Here, monks and lay people remain as devoted as ever to Trijang Rinpoche and the practice of Dorje Shugden.

Denma Gonsa Rinpoche’s Monastery in Kham, Tibet
Denma Gonsa Rinpoche is a great senior Lama and a student of both Kyabje Trijang Rinpoche and Pabongka Dechen Nyingpo Rinpoche. His monastery in Tibet continues to teach a pure and unbroken lineage of Dharma teachings to both laypeople and 600 monks. This monastery is very famous for its 12-storey statue of Lama Tsongkhapa, the largest Tsongkhapa statue in the world at 101 ft tall. The yellow building next to it houses a beautiful Dorje Shugden chapel with the largest Dorje Shugden statue in the world at 18 ft tall. Dorje Shugden is the main protector of this monastery.
Watch a video of the Dorje Shugden shrine in the monastery: http://dorjeshugden.com/wp/?p=3694

Phelgyeling Monastery
Phelgyeling Monastery moved from Nyanang, Tibet to its current location in Kathmandu, Nepal. This monastery houses the very first statue of Dorje Shugden, made by the 5th Dalai Lama, and to this day its monks continue to uphold and propagate the sacred lineage of this Protector.

Serpom Norling Monastery
The formation of Serpom Norling Monastery is very similar to that of Shar Gaden. A large group of monks who are committed to continuing this Protector practice left Sera Monastery to establish a new monastery nearby, called Serpom Norling.
Visit the official site of Serpom Monastery: http://serpommonastery.org/
Video of Serpom Monastery
View the original video on YouTube: http://www.youtube.com/watch?v=dGZGrb-IPyQ

Amarbayasgalant Monastery
Guru Deva Rinpoche’s Amarbayasgalant Monastery in Mongolia propitiates Dorje Shugden as one of its Dharma Protectors.
Click to watch a video of Amarbayasgalant Monastery in Mongolia with Dorje Shugden

Hua Lian
The first Dorje Shugden temple in Taiwan.
This temple is being built by the efforts of Serkong Tritul Rinpoche of Gaden Jangtze Monastery and his devoted students.

Tritul Rinpoche’s Monastery in Nepal and Auckland, New Zealand
Dorje Shugden will be the main protector in this monastery.
http://www.shugdentoday.com/?tag=trode-khangsar
Please click here to see the HUD update regarding the PBCA re-bid and HUD's recent posting of a Request for Information (RFI) from potential suppliers to the PBCA program. The RFI is available here. As stated in the RFI, "The purpose of this RFI primarily involves refining our [HUD] approach and identifying opportunities and challenges that the PBCA program may face in implementing an acquisition strategy that could (1) combine both regional and national acquisitions, (2) address potential set-asides for small business versus unrestricted competition, and (3) identify services that could be obtained using fixed price, performance incentives, cost reimbursements, or other pricing types." HUD is requesting that contractors provide a brief description of how each of the following performance-based tasks can be accomplished at the national and/or regional level. The six tasks include: In addition to the tasks, HUD specifically is requesting that contractors address the following questions: NAHMA cannot respond to this RFI, since we are not a contractor. NAHMA's TRACS and CA Committee will be monitoring the process. If members would like to share any thoughts and concerns about the RFI, I encourage you to provide feedback to me as soon as possible. We will be discussing this topic at the October meeting with HUD.

COVID-19 information: Click here to view the latest updates on COVID-19

NAHMA Update: CDC Orders Temporary Eviction Moratorium on September 4, 2020 - September 2, 2020

- RD Essentials Webinar - Oct 27
- Income Calculations in Times of Uncertainty: Move-ins, Annual Recertifications and Interim Certifications Webinar - Oct 28
- COVID-19 Communication Challenges with Staff and Residents & Tools to Address Them Webinar - Nov 10
- Understanding the Unique Behavioral Challenges Associated with Untreated Mental Health Conditions Webinar - Nov 12
- Specialist in Housing Credit Management® (SHCM®) certification - Nov 12, 13, 19 & 20
- Verification Techniques that Promote Accuracy - Nov 17
- Preparing for a LIHTC Management Review - Dec 10
- Medical Deduction - Dec 15

Webinar Catalog:
https://www.ahma-wa.org/nahma-hud-update--hud-issues-request-for-information-from-potential-pbca-program-suppliers
We never stop improving and that’s why we’re successful. See how we’re improving facilities, programs, patient safety and patient satisfaction. We’re the most recognized community hospital in the state and it’s our people who make us great. See hospital and staff awards. Winchester Hospital was the first community hospital in the state to achieve Magnet designation, recognition for nursing excellence. Learn why. Our tremendous staff gives back to our community by coordinating free health screenings, educational programs, and food drives. Learn more. A leading indicator of our success is the feedback we get from our patients. See what they’re saying about their experiences. Family and friends brighten a patient's days and can help speed the recovery process. At Winchester Hospital, our goal is to provide an environment that promotes healing and provides a positive experience for patients and visitors. Because we recognize the value of emotional support during the healing process, we do not have defined visiting hours. However, some specialty areas (like the Intensive Care Unit) have more restrictive visitor policies for clinical reasons. Some things to keep in mind when you visit patients at Winchester Hospital: To help protect the health of our patients, we ask that visitors be free from colds, fever, rashes, chickenpox or other contagious illnesses. However, there are unique circumstances where we understand that visitors who are ill need to visit. In this case, we request that visitors who have a respiratory illness wear a mask. You and your guests are asked to show consideration for other patients by talking quietly and keeping the television at a low volume. Visitors in semi-private rooms should be considerate of both patients. Our nursing staff may request that your visitors keep visiting time to a minimum to ensure both your care and your roommate’s care are not compromised. We encourage family visits; however, small children should never be allowed to sit or lie down on the floor or on the patient’s bed. Some things to keep in mind when you visit patients in the ICU at Winchester Hospital: We understand the desire for family to be present with a patient during this critical period. The nursing staff will work with each family and be as flexible as possible in arranging visits to ensure your needs and those of the patient are met. At any time, we may have to ask visitors to leave due to other clinical situations in the ICU. Please understand this is to assist us in delivering needed care. We ask that all patients have one designated person or family member with whom the nursing and medical team may communicate. This person can then keep other members of the family informed. This provides for consistency in communication as well as ensuring that the patient’s privacy is maintained. Should there be any questions or concerns regarding equipment or ICU procedures, please speak with the nurse caring for your loved one. The nurse manager responsible for the ICU is available to address any concerns you may have. Ask the nursing staff to have him/her paged. We encourage parents to stay with their child in the Pediatric Unit at Winchester Hospital, because this can help a child feel comfortable in an unfamiliar environment. One parent is welcome to stay with a child overnight. Other visitors are welcome between the hours of noon and 8 p.m. 
Some things to keep in mind when you visit the Winchester Hospital Special Care Nursery: Parents are encouraged to call any time to check on their child’s status. Parents are advised to check the next scheduled feeding time, since the best time to visit your infant is just before and during the feeding. Parents may visit the nursery at any time except during the nursing staff shift change from 7 to 7:30 a.m. and 7 to 7:30 p.m. Siblings are encouraged to visit; however, hospital policy requires that they are free from infections and illnesses or exposure to either. For safety reasons, all children must be supervised by an adult. Grandparents are welcome but must be accompanied by the infant’s parents. No information will be given to anyone other than parents. The number of visitors at the bedside is limited to two. Through Winchester Hospital’s Grateful Patient Program, you can honor your physician, nurse or caregiver with a contribution to the hospital. It’s a wonderful way to say thank you for the care you or a family member received. At every stage of life and for every medical need, Winchester Home Care can help, with home care services for people facing the challenges of later life stages, for families with new babies, for people with disabilities, for those fighting illnesses, and more. Winchester Hospital offers a variety of rehabilitation services – including cardiac and pulmonary rehabilitation as well as physical, occupational and speech therapy – to help you get back to your normal routine.
http://www.winchesterhospital.org/my-visit/preparing-for-a-visit/visiting-hours--information
Due to increased turnover at Wynn Regional Medical Center (WRMC), analysis of the exit interview questionnaires was reviewed at a Board of Directors meeting. During the review, it was noted that "communication concerns" were a common area of dissatisfaction among the employees who resigned. The employees felt that:
- Managers did not hold regular staff meetings
- Employees were often not informed about changes in the organization
- Managers did not ask employees for their input or feedback in decision making
- Employees were unable to voice their concerns without retaliation
- Managers did not communicate in a respectful manner

You have been tasked with creating a presentation to address these communication concerns.

Instructions
The CEO of WRMC has requested that you create a PowerPoint presentation proposing a revised communication process for the board of directors. The presentation should contain speaker notes for each slide or voiceover narration. Based on the specific concerns listed in the scenario, your presentation should address how you will change the communication processes and include the following key points:
- What are some of the possible communication barriers and challenges of a multi-cultural healthcare facility? Include recommendations for addressing these barriers and challenges.
- What communication processes and practices will you put in place to address the following employee concerns?
  - Staff meetings
  - Organizational updates
  - Employee input in decision making
  - Employee concerns
  - Respectful communication
- What recommendations do you have for increasing departmental communication?
https://idealnursingessays.com/scenario-due-to-increased-turnover-at-wynn-regional-medical-center/
Dr Shamaila Anwar is a member of the NHS Muslim Women's Network, as well as a writer and a science communicator. She works with organisations and communities to tailor health messaging through an inclusive, community-focused approach, helping to empower people to make informed decisions about their health.

Being aware of, celebrating and embracing the different backgrounds, heritages, cultures and experiences that your workforce comes from gives organisations a strong foundation on which to build trust and develop a nuanced understanding of how to engage with different communities. It is vital that organisations understand that if they want to engage communities they have seldom engaged with before, these communities need to be reflected in the workforce, and staff from these communities need to feel valued and empowered to represent the organisation.

What is intersectionality?
Intersectionality is a framework for conceptualising a person, a group of people, or a social problem as affected by a number of discriminations and disadvantages. It takes into account people's overlapping identities and experiences in order to understand the complexity of the prejudices they face. In other words, intersectional theory asserts that people are often disadvantaged by multiple sources of oppression: their race, class, gender identity, disability, sexual orientation, religion, and other identity markers. Intersectionality recognises that identity markers (for example, 'woman' and 'black') do not exist independently of each other.

One size does not fit all
I am a British Pakistani Muslim. I am a woman, I suffer from asthma and I have a hidden disability. I am completely different from all other colleagues of colour. In fact, at times I would go so far as saying I don't even represent my own community. What I am trying to say is that we are all different. The engagement strategies that work for me may not work for others.

Psychological safety
Creating psychological safety is in everyone's interest, and yet many of us don't understand what we mean by this. To foster a psychologically safe space and environment means allowing someone to be themselves and to feed back any concerns or issues they may have authentically, without fear of provoking a defensive response. It is hard at first not to take things personally. But it is important, particularly in terms of workplace culture and when engaging with people from under-served groups, to ensure we listen, learn to accept feedback gracefully, acknowledge errors and learn from them. If our colleagues feel valued and safe, this will project outwards and lay the foundations for developing trust with communities.

What's in a name?
I have lost count of the number of terms we have come up with over the years to describe communities we have failed to engage with: 'hard to reach', 'underrepresented', 'seldom heard', 'BAME', 'under-served'. I understand the sentiment is a helpful one. We feel that if we can describe the issue we can resolve it, but are these terms actually helpful when we think about building real connections with real people? By its very nature, inclusivity is highly intersectional, so by trying to identify a single term that describes all communities, are we going against the very essence of what we are trying to achieve? For many communities, this may lead to further marginalisation and minoritisation if they don't fit into the definition.
Try to be as specific as possible, depending on the communities you are engaging with; if you are not sure, ask how they would like to be referred to.

Re-thinking your approach to engagement
Empowerment and feeling seen, heard and understood are at the heart of engaging with our diverse workforce and with communities. It's crucial that we get this right, as it will inform how we enter these spaces sensitively and respectfully. Engagement with diverse communities within our organisation needs to reflect how we aim to engage with these communities outside of it.

Tips
- Train your managers so they understand the importance of engaging with and understanding the diversity and intersectionality of their staff, their cultures and the challenges they may be facing as individuals.
- Engage with your diverse workforce and communities and allow them to provide authentic responses and feedback.
- Familiarise yourself with history, including the contributions different cultures and communities have made to society; this leads to more open conversations on an equal footing.
- Encourage people to ask questions. No one will be offended if you ask them about their culture, religion or heritage – what is offensive is often the assumptions that are made because people are too afraid to ask. Fear is not a reason to be misinformed.
- Ensure appropriate representation around the table.
- Open up the lines of communication and listen, and consider how you respond, so you don't diminish someone's opinion or lived experience.
- Empower your diverse workforce and encourage them to establish support networks, and recognise the importance of allyship and what that means.

Dr Shamaila Anwar is a member of the NHS Muslim Women's Network; follow the network on Twitter @NHSMuslimWomen.
https://www.nhsemployers.org/articles/understanding-intersectionality-and-engaging-diverse-staff-and-communities
Provide leadership for the development, implementation and evaluation of a quality care and educational program for young children at the Curry Kids Early Learning Centre, under the leadership of the Director Child Care. All employees are required to abide by the policies, Code of Conduct, procedures, philosophies and all statutory requirements of Cloncurry Shire Council ("Council") and Curry Kids Early Learning Centre while providing quality care and education for young children at the Centre.

This outlines the general duties and responsibilities of the position, but is not all-encompassing:
➢ Formulate age-appropriate and inclusive programmes, in consultation with families and the Child Care Assistant, that meet each child's developmental needs;
➢ Ensure all programmes and care conform to the Early Years Learning Framework and the National Quality Framework;
➢ Record, monitor, evaluate and document the development of each child in order to develop the programme to meet each child's needs;
➢ Provide feedback to the Director and discuss issues of concern that could contribute to the improvement of the programme delivered to children at the Centre and their families;
➢ Maintain open lines of communication with families about the developmental needs and interests of their children and encourage families' participation in the programmes;
➢ Ensure all children are directly supervised at all times as per the Supervision Policy;
➢ Support young children and their families at separation as per the Separation Policy;
➢ Develop an environment which is relaxed, home-like, aesthetically pleasing, and safe and secure for children and staff to stay and work in;
➢ Provide direction and support for Assistants and any students in regard to the goals and programmes in place;
➢ Consult with the Director in relation to concerns about the functioning of a team member if the issues cannot be resolved directly with the team member concerned;
➢ Encourage and support Assistants to be actively involved in keeping developmental records of the children in care;
➢ Actively participate in staff meetings and training opportunities as required;
➢ Share professional knowledge and expertise with other staff members while recognising and acknowledging theirs;
➢ Respect and encourage the individuality of each child;
➢ Monitor children who may be experiencing challenges and, in conjunction with the Director and families, seek the assistance of support agencies available in the community;
➢ Consult with the Director on any matters of concern regarding any child or their family;
➢ Maintain complete confidentiality regarding information about a child and their family;
➢ Participate in the daily preparation of materials and the environment, notify the Director of any items that are unsafe or require maintenance, and dispose of them when necessary;
➢ Be accountable for the preparation of specific documentation for the running of a room.

➢ Completion of a Certificate III or IV in Children's Services (Group Leader 1 year qualified) and proof of enrolment in an AQF Diploma in Children's Services; or
➢ Completion of an AQF Diploma in Children's Services (Group Leader 2 year qualified); or
➢ Completion of an AQF Advanced Diploma or higher in Children's Services or Education (Group Leader 3 year qualified).
➢ Have a current First Aid, CPR and Anaphylaxis Certificate or the ability to acquire one before commencement;
➢ Have a current Positive Notice Working with Children Blue Card or the ability to acquire one before commencement.
http://www.lgassist.com.au/career/108971/Early-Childhood-Group-Leader-Qualified-Queensland-Qld-Cloncurry
BRETTON WOODS – The Foundation for Healthy Communities recently honored Catholic Medical Center with its 2016 Noah Lord Award for Patient & Family Engagement during the Foundation's annual meeting at the Omni Mount Washington Resort in Bretton Woods. The award recognized the hospital's project, "The Voice of the Patient: Improving the Patient Experience by Listening to the Voice of the Patient." Accepting the award on behalf of Catholic Medical Center was Barbara McGuire, who currently serves as an Advisor on the hospital's Patient and Family Advisory Council.

"By engaging the patient at the bedside to address any concerns or challenges they might be facing, Catholic Medical Center is able to immediately impact that patient's experience, improving the communication between patient and provider, as well as the care delivered," stated Shawn LaFrance, Executive Director of the Foundation for Healthy Communities. "Recognizing those efforts and sharing their success is the essence of this award, and we look forward to celebrating the future successes of others," LaFrance continued.

The innovative program was initiated by members of the hospital's Patient & Family Advisory Council to connect trained Patient Family Advisors with patients in the hospital through a structured conversation that assesses each patient's current in-hospital experience. Real-time feedback about the patient's experience is relayed by the Patient Advisor directly to the hospital staff on that patient care unit, with the goal of addressing current needs and providing the patient with a better experience.

Pictured L to R: Glen Lord, Foundation for Healthy Communities; Tanya Lord, Director, Patient & Family Engagement, Foundation for Healthy Communities; Barbara McGuire, Member, Patient & Family Advisory Council, Catholic Medical Center; Joseph Pepe, MD, President & CEO, Catholic Medical Center; Karen McLaughlin, Patient Liaison, Catholic Medical Center; Shawn LaFrance, Executive Director, Foundation for Healthy Communities; and Mary DeVeau, Chair, Foundation for Healthy Communities Board of Trustees.

To read Catholic Medical Center's submission, please click here.
https://healthynh.com/index.php/about-us/recent-news/651-catholic-medical-center-recipient-of-2016-noah-lord-patient-family-engagement-award.html