University Hill is home to outstanding museums and art galleries. These venues frequently host notable local, regional, national, and international exhibitions, performers, and speakers.

SUArt Galleries
The SUArt Galleries is Syracuse University's campus venue for the visual arts. Located in Sims Hall, the facility hosts a variety of temporary and permanent exhibitions throughout the year in its nearly 10,000 square feet of exhibition space. The department's mission is to enhance the cultural environment of the University and the Syracuse area through meaningful educational experiences and encounters with the University's permanent collection and traveling exhibitions. The Syracuse University Art Collections now comprise nearly 45,000 objects, housed in a temperature- and humidity-controlled area of Sims Hall. The department serves as a repository for the art collections of the University; the care, maintenance, documentation, and interpretation of the works are its primary concern.

Gallery Hours
Monday: CLOSED
Tuesday – Sunday: 11 a.m. – 4:30 p.m.
Thursday: 11 a.m. – 8 p.m.
*HOLIDAY / RE-INSTALLATION HOURS* The SUArt Galleries will be closed during University holidays, "green" days, and during the re-installation of exhibitions. Please call the gallery for details: (315) 443-4097.

College of Visual and Performing Arts
Syracuse University's College of Visual and Performing Arts (VPA) showcases exhibits for the School of Art and Design, and serves the students, faculty, and staff of Syracuse University, as well as members of the Syracuse community. Students have the opportunity to curate exhibits and maintain an active role in running exhibitions. The VPA also hosts visiting artists and speakers throughout the year. For a VPA Calendar of Events, click here.

Community Folk Art Center
The Community Folk Art Center specializes in exhibiting and promoting works by African American artists and artists from under-represented ethnic groups in Central New York.
Founded in 1972, the CFAC moved to its current location at 2223 East Genesee Street in 2006. The CFAC is a program of the African American Studies Department at Syracuse University. The center houses the Herbert T. Williams Art Gallery (formerly known as the Community Folk Art Gallery), as well as facilities for workshops and forums. In addition to offering thought-provoking exhibitions, the CFAC also offers many educational programs, including visual arts workshops for children of all ages.

Business Hours
Tuesday – Friday: 10 a.m. – 5 p.m.
Saturday: 11 a.m. – 5 p.m.

Light Work & Community Darkroom
Light Work is an artist-run, non-profit photography and digital media center that has supported artists since 1973. Within the facility at 316 Waverly Avenue is the Community Darkroom, a state-of-the-art, affordable public-access photography and digital imaging facility.

Additional Resources
Th3 is a free, citywide arts open in Syracuse that takes place on the third Thursday of each month from 5 p.m. to 8 p.m. The project joins the most distinctive venues in the city, including those on University Hill, in a common event that brings the artistic experience to the public, providing an opportunity to visit participating visual arts and cultural venues. Learn more at www.th3syracuse.com. Syracuse Arts Net provides a directory and events listing of various art facilities across University Hill, Syracuse, and Central New York at www.syracusearts.net.
http://university-hill.com/visit-the-hill/exhibits/
Created in 1992 by the Austin Historical Society, the Austin Historical Museum hosts permanent and changing displays of mining, ranching, railroad, Native American, and USS Lander artifacts. Objects on display are on loan or donated by citizens in and around Austin, Nevada. Additionally, the museum at times sponsors traveling visual arts exhibitions through the Nevada Touring Initiative – Traveling Exhibition Program. The Austin Museum is free and open to the public, but visitors should call in advance to make an appointment.

Baker
Great Basin Visitor Center & Artist in Residence Program
The changing gallery at the Great Basin Visitor Center showcases artwork completed as part of the Darwin Lambert Artist/Writer in Residence Program. Through this program, artists live in Great Basin National Park for several weeks, producing work and conducting workshops or outreach activities with the local community. For over a century, artists have played an important role in the enjoyment of national parks by engaging people with these unique landscapes through the artist's visual record.

Permanent Wave Open Air Art Gallery
En route to the Lehman Caves from Baker, NV is a unique open-air art gallery dubbed the "Permanent Wave". Since the 1990s, local artists have been adding their own quirky sculptures, including the Alien Ranger, The 1918 Essex with Power Steer-ing, The Footfall player, Sunglass Stabile, A C-D Character, The Tired Eye, The Par 3, and many others.

Battle Mountain
Battle Mountain Cookhouse Museum
Housed in a restored 1920s ranch cookhouse, the Battle Mountain Cookhouse Museum is the area's first museum dedicated to the exhibition and preservation of regional history. In addition to displays of area artifacts, the Cookhouse Museum hosts traveling cultural exhibitions, traditional and contemporary art, and a diverse array of cultural events including poetry and author readings, storytelling, and artist talks.
Beatty
Beatty Museum & Historical Society
The mission of the Beatty Museum and Historical Society is to preserve and protect the culture and history of Beatty, Nevada, the Bullfrog Mining District, and the surrounding Nye County mining districts. The museum houses over 10,000 artifacts, 3,000 historic images, and an extensive collection of research files, including a small non-lending library. Research materials are open to museum visitors free of charge, with a limited selection available through their online database.

Boulder City
Boulder City Art Guild, Inc.
Boulder City Art Guild, an artists' co-op, seeks to promote arts education with scholarship funds for qualifying students in Southern Nevada and to provide venues for artists to exhibit and sell their work. The guild maintains a cooperative gallery space and puts on two two-day art shows (April and November) each year. These arts festivals help raise the scholarship funds, as well as offering an opportunity for artists to exhibit and sell their works.

Carson City
Artsy Fartsy
Located in downtown Carson City, Artsy Fartsy is a locally owned and operated art gallery representing over 58 Northern Nevada artists working in a variety of media, including oils, acrylics, metal, glass, textiles, pottery, baskets, and jewelry.

Brewery Arts Center Gallery & Artisan Store
The Brewery Arts Center (BAC) is the local arts agency based in Carson City, NV. Located inside the BAC are the Artisan Store and the Nevada Artist Association (NAA) Gallery. The NAA is an organization of over 100 Nevada artists, many of whom exhibit at the gallery.

Capital City Arts Initiative
The Capital City Arts Initiative (CCAI) is an artist-centered organization committed to the encouragement and support of artists and the arts and culture of Carson City and the surrounding region.
The Initiative is committed to community building for the area's diverse adult and youth populations through art exhibitions, live events, arts education programs, artist residencies, and online projects. CCAI presents visual art exhibitions in three sites in downtown Carson City: the Brick Gallery, the Courthouse Gallery, and the Sierra Room.

Charlie B Gallery
Charlie B Gallery offers an exceptional collection of contemporary pottery from Nevada and Western American artists, as well as historic studio pottery, historic Nevada fine art, and vintage objects.

Comma Coffee Backseat Gallery
Located across the street from the Nevada State Legislature in downtown Carson City, the gallery at Comma Coffee hosts month-long exhibitions by regional artists, with artist receptions the first Wednesday of every month.

Nevada Arts Council
The Nevada Arts Council's (NAC) mission is "to enrich the cultural life of the state through leadership that preserves, supports, strengthens and makes accessible excellence in the arts for all Nevadans." The agency's programs serve as a catalyst to stimulate artistic, cultural, and economic activity across the state, animate its breadth of communities, ensure lifelong learning in the arts for all Nevadans, and encourage public and private support for the arts. NAC manages two visual arts exhibition programs in the state capital, Carson City.

Nevada State Museum, Carson City
The Nevada State Museum, Carson City engages diverse audiences in understanding and celebrating Nevada's natural and cultural heritage. Permanent exhibits focus on natural history, science, and Nevada history, with changing displays incorporating art, research, and history to highlight aspects of Nevada culture or commemorate important events and figures in Nevada's past.
Western Nevada College, Carson City Campus Galleries
Western Nevada College inspires success in our community through opportunities that cultivate creativity, intellectual growth and technological excellence, in an environment that nurtures individual potential and respects differences. The Western Nevada College Carson City galleries feature the work of regional professional and student artists working in a variety of media. All galleries are open to the public with free admission.

Elko
Elko County Art Club & Gallery
The Elko County Art Club (ECAC) supports and nurtures local artists by providing opportunities for socializing with other artists, pursuing artistic growth, exhibiting and selling their art, and developing the appreciation of art in the Elko community. The ECAC Gallery displays and sells original artwork and crafts created by its member artists. Through the gallery's Featured Guest series, Nevada artists are invited to conduct workshops and share their process and skills with the Elko community.

Great Basin College Art Gallery
The Great Basin College Art Gallery showcases the work of Nevada and student artists in two semester-long exhibitions per year. Through these displays, the gallery aims to expose the community to diverse experiences, cultures, and viewpoints, as well as provide access to the arts and foster a spirit of inquiry, creativity, and reflection. Exhibitions run for 12 weeks and are complemented by an artist talk, workshop, or other outreach activity.

Northeastern Nevada Museum
Located in Elko's City Park, the Northeastern Nevada Museum strives to be the premier cultural and historical center for preservation, research, and education in northeastern Nevada. The museum maintains a quality facility dedicated to serving the area and its visitors by providing comprehensive archives and collections, diverse history and art exhibits, research facilities, and a community gathering space.
With over seven galleries inside the museum, there is something for everyone. For more information on current and upcoming exhibitions, please visit their website.

Western Folklife Center Wiegand Gallery
The Western Folklife Center is a non-profit organization working to expand understanding by celebrating the everyday traditions of the American West. The Wiegand Gallery features interactive exhibitions and multimedia presentations, including semi-permanent and temporary displays drawn from the Center's own collections or traveling exhibitions. Over its 25-year history, the Center has produced numerous educational exhibits based on its original research and fieldwork in traditional art forms.

Ely
Ely Art Bank
The downtown Ely Art Bank is a visual arts gallery and cultural center that houses a permanent collection of painting, sculpture, and photography depicting the Great Basin area and White Pine County. The represented artworks span over 70 years and showcase generations of artists expressing life in the Great Basin. Additional exhibitions include local artists' work for sale, as well as traveling visual arts exhibitions through the Nevada Touring Initiative – Traveling Exhibition Program.

Eureka
Eureka Courthouse Gallery
The Eureka County Court House Gallery is located across the street from the Historic Eureka Opera House on Main Street. The Courthouse Gallery is a frequent sponsor of traveling visual art exhibitions through the Nevada Touring Initiative – Traveling Exhibition Program.

Fallon
Churchill Arts Council - Oats Park Arts Center
For over three decades, the Churchill Arts Council (CAC) has brought high-quality arts and culture events to northern Nevada in the form of exhibitions, performances, film screenings, readings, and conversations.
In addition to performing and literary arts programming, the organization hosts around six curated visual art exhibitions and artist conversations per year, alongside its outstanding contemporary art collection on permanent display.

Western Nevada College, Fallon Gallery
The art gallery at the Western Nevada College Fallon campus is located in Virgil Getto Hall, just outside the art studio, room 312. The gallery features both professional exhibits and student work, rotating throughout the year. Western Nevada College is also a host and sponsor of traveling visual arts exhibitions through the Nevada Touring Initiative – Traveling Exhibition Program.

Gardnerville
East Fork Gallery
The East Fork Artist Gallery is a non-profit cooperative providing space for regional artists to exhibit and sell their original, high-quality artwork. Established in 1979, the gallery currently features work from over thirty consigners and twelve active members. An annual Christmas reception is held the first Sunday of each December, and a special anniversary reception is held each July.

Gadzooks!
Gadzooks! offers fine arts, handcrafted jewelry, collectibles, creative furnishings, and vintage objects from regional member artists and consignors.

Gerlach
Planet X Pottery
Established in 1974, Planet X started within the remnants of an old homestead on the Emigrant Trail between the Smoke Creek and Black Rock deserts of Northern Nevada. Over the years, the space has developed into a working pottery studio and four galleries. The facility operates completely off the grid, about 8 miles from the nearest power source, and relies on solar, propane, and a generator for electricity. Planet X offers a variety of porcelain, stoneware, and Raku for purchase in a middle-of-nowhere destination.
http://www.arts4nevada.org/organizations/visual-arts-presenter
What does it mean if my work is nominated?
Nominated works are those that have been identified through a rigorous examination process as excellent examples of students' investigations of artmaking practice. The ARTEXPRESS exhibitions are curated from this pool of works.

When will I be notified of selection?
Letters advising that works have been nominated are sent to students and schools immediately following the last written HSC examination. Students are notified of their selection in early December, via post or by email if available.

Why didn't my nominated work get into ARTEXPRESS?
Every year more works are nominated for ARTEXPRESS than can be exhibited in the different venues. The selection of artworks for each of the ten ARTEXPRESS exhibitions reflects the candidature for the Visual Arts examination, including gender, regional and metropolitan representation, and the inclusion of all expressive forms. Other considerations include the size of the galleries, the relationship between different bodies of work, and a body of work's suitability to withstand exhibition conditions.

Where do I deliver my work?
If a body of work has been itinerantly marked at your school rather than at the marking centre in Homebush, it will usually be sent to ARTEXPRESS by a courier arranged through the Board of Studies. However, arrangements can be made for students to deliver their works personally.

Do I need to send in my VAPD if my work is nominated?
In general, your Visual Arts Process Diary (VAPD) is NOT required for ARTEXPRESS exhibitions; however, if it is required you will be notified of the due date. Some galleries request to display VAPDs along with bodies of work. If your VAPD is required, ARTEXPRESS will contact you by phone.

When is the last date for copyright clearance?
Students need to be aware of the difficulty in gaining copyright clearance for some commercially produced music tracks and film clips for display purposes outside the examination process.
Copyright needs to be finalised as soon as possible during the making of your body of work. Forms for copyright clearance need to be submitted to ARTEXPRESS with your publicity release forms. Please note that if copyright is not cleared by mid December, your work may not be able to be included in any ARTEXPRESS exhibitions.

Who determines in which gallery my work is exhibited?
Exhibitions are chosen by curators to form a comprehensive exhibition that showcases all expressive forms. The bodies of work are selected to complement each other and work as an entire show. Consequently, the choice of venues is not negotiable.

I am not part of the exhibition at the Art Gallery of New South Wales. Does this mean I am not in ARTEXPRESS?
No. ARTEXPRESS refers to a series of up to ten metropolitan and regional exhibitions across New South Wales. The exhibition at the Art Gallery of New South Wales is just one element of the ARTEXPRESS program.

How many people can come to the opening?
This depends on the venue. Generally, the Art Gallery of New South Wales is limited to 2 guests per exhibitor. Check the invitation that you receive to the opening for details.

What should I wear to the opening?
Smart casual attire is suitable for all ARTEXPRESS openings.

I need my work for a local exhibition or a portfolio presentation for a course entry next year. Can I get my work?
If your body of work is required for an exhibition or portfolio submission, arrangements can be made for it to be picked up personally. This will not affect your selection for ARTEXPRESS.

When will my work be returned?
You will be notified by ARTEXPRESS.

What happens if my work gets damaged?
ARTEXPRESS makes every effort to ensure that artworks travel to and from their exhibitions with the utmost care. However, students are encouraged to organise their own insurance for their artworks. In the case of any damage, students will be contacted directly by ARTEXPRESS staff. ARTEXPRESS is not liable for any damage to an artwork.
https://artexpress.artsunit.nsw.edu.au/faq.php
Although Ulaanbaatar city is the center of Mongolian politics and economy, it is also the heart of Mongolian culture and contemporary art. Here are Ulaanbaatar's top recommended art galleries worth visiting.

MONGOLIAN NATIONAL ART GALLERY
The Mongolian National Modern Art Gallery, established in 1991, is located in the center of the city, in Ulaanbaatar's Palace of Culture on the historic Sukhbaatar Square. As the leading art gallery in Mongolia, it hosts regular exhibitions of both classic and contemporary artwork and owns a permanent collection of paintings, sculptures, prints, crafts, and other new forms of art and artifacts which possess originality and a distinctly Mongolian thematic identity. The MNMAG is part of a network of local and international organizations and individual experts in the modern art field. Art lovers wanting to learn about Mongolian art should head to the Mongolian National Art Gallery.
Address: Central Cultural Palace-B, Sukhbaataar Square-3, Ulaanbaatar
Phone: 11-331687, 99022467
Website: http://www.art-gallery.mn/en

976 ART GALLERY
976 Art Gallery was established in 2012 with the aim of introducing the best contemporary artists of Mongolia. Driven by this ambition, the gallery has been actively organizing unique and interdisciplinary art exhibitions, performances, and public discussions, collaborating with critically acclaimed and leading contemporary artists of Mongolia, such as Enkhbold Togmidshiirev, Baatarzorig Batjargal, Munkhbolor Ganbold, Munkhtsetseg Jalkhaajav, Davaajargal Tsaschikher, Bayartsetseg Dashdondov, Odonchimeg Davaadorj, Nomin Bold, Uuriintuya Dagvasambuu, Gerelkhuu Ganbold, and many others. The gallery has a growing international profile. Its recent exhibitions have featured works by internationally known artists, including Walter Riedwig (Switzerland), Mauricio Dias (Brazil), Nathalie Daoust (Canada), Christian Faubel (Germany), and Mirian Kolev (Bulgaria), and the gallery has also collaborated with the Taiwanese group Outsiders Factory.
Today 976 Art Gallery has become an important place in the cultural and artistic life of Ulaanbaatar city and has gained recognition as a leading contemporary art space in Mongolia.
Address: 1st floor of Choijin Suite Building, Jamyan Gun Street, Sukhbaatar district (next to ICC Tower), Ulaanbaatar, Mongolia
Phone: 94051127
Hours: Mon – Sat, 11:00 AM – 7:00 PM
Facebook: https://www.facebook.com/976ArtGallery/

RED GER ART GALLERY
The Red Ger Art Gallery was launched in 2002 with the goal of promoting Mongolian visual art locally and internationally while nurturing and supporting new, emerging talent. The gallery hosts regular shows, and all of the artworks in its exhibitions are for sale. The gallery has also exhibited Western modern art, as it did with Norman Rockwell's America, a show jointly organized by the Arts Council, the Norman Rockwell Museum, and the US Embassy in Mongolia to celebrate the 25th anniversary of diplomatic relations between Mongolia and the US. Red Ger Art Gallery is one of UB's best venues for modern artwork by Mongolia's top contemporary artists.
Address: Tourist street, Delta center 4th floor, room 402, Ulaanbaatar, Mongolia
Phone: 11-319015
Facebook: https://www.facebook.com/RedGerArtGallery/

MARSHAL ART GALLERY
The Marshal Art Gallery opened its doors to the public in May 2013. The gallery has played a vital role in promoting and supporting art creation in Mongolia with its modern and stylish exhibitions, which showcase the works of both local and international artists. It often organizes events and programs in arts education and arts-related social activities. You may not become an artist yourself, but through these events you can get a feel for an artist's life and work. A previous event, Wine & Canvas, drew large enrollments from Ulaanbaatar's art lovers.
https://visitulaanbaatar.net/p/68
Local museums and galleries are constantly acquiring work. Most often, these pieces flow into the collections of our larger museums and galleries with very little fanfare, seeping into curated exhibitions over many decades. But as budgets for more ambitious thematic exhibitions tighten, more institutions are turning to displays of recent acquisitions. (See: "50 for 50" at the Burchfield Penney Art Center.) One of these showoff shows, "Light, Line, Color and Space," opens Feb. 3 in the University at Buffalo's Anderson Gallery. It will display highlights from the UB Art Galleries' last five years of acquisitions, featuring an eclectic range of painting, photography and sculpture. "Presenting the work within the prescribed categories of light, line, color and space, the exhibition showcases connections across generations and media," a release from the gallery reads. These artificial divisions, the release continues, open up opportunities for new scholarship. Highlights include freshly acquired work by the revered local educator Harvey Breverman, the late Buffalo painter Adele Cohen, and the Rochester-based conceptual photographer Carl Chiarenza, as well as work by Robert Rauschenberg, Ken Price, Antoni Tapies and John Cage. "Light, Line, Color and Space": Opening reception at 6 p.m. Feb. 3 in the University at Buffalo Anderson Gallery (1 Martha Jackson Place); the show runs through April 15. Admission is free. Call 829-3754 or visit ubartgalleries.buffalo.edu.
https://buffalonews.com/2018/01/30/ub-anderson-gallery-shows-off-new-art-in-light-line-color-and-space/
The City of Lafayette Public Art Committee invites artists living or working in the Bay Area to submit proposals for exhibitions in the Library Art Gallery located at the Lafayette Library & Learning Center, 3491 Mt. Diablo Blvd., in downtown Lafayette. The Library Art Gallery at the LLLC was created to support rich and diverse artistic expression and to encourage the appreciation of the visual arts in the community. It provides an exciting opportunity for local artists to display their work in Lafayette's cultural center. The Public Art Committee curates four exhibits per year, in addition to hosting Project LPIE (Lafayette Partners in Education), an annual event that showcases and acknowledges Acalanes High School students for their photography and visual arts projects.

APPLICATION INFORMATION

Who may exhibit?
Individuals and non-commercial groups based in the Greater Bay Area. Priority consideration is given to those in Contra Costa County.

What are the expectations?
- There is no cost to apply for review or to exhibit in the LLLC Library Art Gallery.
- Artists are expected to assist in the installation and de-installation of the exhibition.
- Artists are asked to attend and promote an opening or closing reception hosted by the Public Art Committee.

How do I apply?
Exhibitors interested in displaying artwork in the Library Art Gallery must submit a completed LLLC Exhibit Application Packet and an executed Art Gallery Release to the City of Lafayette, 3675 Mt. Diablo Blvd., Suite 210, Lafayette, CA 94549, or by email to Public Art Staff Liaison, Jenny Rosen, at [email protected]. Completed applications will be reviewed by the Public Art Committee. The committee meets on the first Wednesday of each month.
https://www.lovelafayette.org/city-hall/commissions-committees/public-art-committee/call-for-artists
As Programs Manager for the Art Gallery, Stacey:
- Organizes programs, tours, and other experiences related to Art Gallery exhibitions and collections
- Engages the Art Gallery's online community through social media
- Coordinates Grand Valley State University's participation as an ArtPrize© venue

Stacey joined the GVSU Art Gallery in 2013 following zir time as Program Coordinator with the American Museum of Magic. Previously, Stacey was Collections Manager at the Holland Museum and a curatorial assistant at the Public Museum of Grand Rapids. An active member of the Michigan Museums Association, Stacey has served on the MMA's Programs Committee and serves on its Conference Leadership Team. Enthusiastic about service and leadership, Stacey is honored to have studied at the "U of Z" while working with the Zingerman's Community of Businesses, Ann Arbor, MI, and to have served on Grand Valley State University's AP Professional Development Subcommittee. Stacey holds a B.A. in Public History with an emphasis in museum studies from Western Michigan University, Kalamazoo, MI, and is pursuing a Master of Public Administration degree at Grand Valley State University.

Alison Christensen
Project Manager
[email protected]
LinkedIn Profile

Alison received a B.F.A. from Grand Valley State University in 2004. Upon graduation, she started working at Lafontsee Gallery, Grand Rapids, MI. In 2007 she began working for Jeffery Roberts Homes, Inc. as an Interior Design Assistant/Project Manager. Soon after graduation she became very involved in the local art scene as a volunteer, artist, and exhibition designer. She was a board member of the DAAC (Division Avenue Arts Cooperative) and is still a volunteer for various Avenue for the Arts events. Alison joined the Art Gallery in 2009 as a Preparator for the rotating and permanent exhibitions of Grand Valley State University's extensive art collection.
In 2015, Alison became the Art Gallery Project Manager, working on unique projects like a Mathias Alten hardcover book and fulfilling art requests from GVSU faculty and staff. Alison is currently pursuing a Master's in Philanthropy and Nonprofit Leadership degree at GVSU.

Jenniffer Eckert
Art Gallery Assistant
[email protected]

Jenniffer has over 17 years' experience working in the administrative field, most recently for the Naval Surface Warfare Center, Panama City Division, in Panama City, Florida, as an administrative assistant and human resources assistant. Prior to that position, Jenniffer worked as a financial manager, business manager, and assistant manager, in addition to other administrative positions. Jenniffer is planning to finish her M.A. in Management at Grand Valley and is more than halfway there! As a single mother of three children and two dogs, Jenniffer stays very active outside of work attending her children's extracurricular activities, among various other motherly duties. Jenniffer was born and raised in SE Grand Rapids and currently resides in Jenison, Michigan.
Nathan Kemler
Assistant Director of Galleries and Collections
[email protected]
LinkedIn Profile
Follow on Twitter

As Assistant Director of Galleries and Collections for the Art Gallery, Nathan:
- Assists in providing leadership, vision, and strategic direction for the department, and serves as a mentor to staff, students, and the greater museum community
- Plays a lead role in long-range departmental strategic planning efforts, in support of the university's plan
- Serves as an institutional liaison and promotes long-term relationships with supporters, faculty, students, staff, donors, philanthropists, and the public
- Directs and manages the development of the collection management database and the subsequent online visual learning platform for collection accessibility

Nathan currently serves as vice president on the Michigan Museums Association board of directors and as a peer reviewer for the American Alliance of Museums Assessment Program in the areas of Collection Stewardship and Community Engagement. He has previously served as a board member of the American Museum of Magic and on Michigan Museums Association committees in the areas of professional development and programs. Before his current appointment at GVSU began in 2008, he was the Project Curator of Ethnology at the Public Museum of Grand Rapids and Curator of Historic Sites and Collections at the Holland Museum. Nathan received his B.A. in World History with minors in Art History, Archeology, and Studio Art from Calvin College and received his M.A. from the Historical Administration graduate program at Eastern Illinois University, where he was awarded the History and Technology assistantship. Nathan is a servant leader in the region and specializes in collection accessibility and community engagement initiatives.

Dru King
Preparator
[email protected]

Dru King has been a preparator in Grand Valley State University's Art Gallery Department since 2003.
He earned his BFA at GVSU in 2001 and his MFA at Kendall College of Art and Design in 2009. After receiving his MFA, Dru taught drawing courses at Kendall and at Grand Valley. His responsibilities at the Art Gallery include matting and framing artwork as well as exhibit and building installations. He is also a local artist who focuses on painting and drawing. Examples of his artwork are on permanent display in the GVSU collection and at the Betty Van Andel Opera Center in Grand Rapids.

Henry Matthews
Director of Galleries and Collections
[email protected]

As Director of Galleries and Collections, Henry Matthews organizes exhibitions and related diverse education programs, manages the university's extensive art collections, and seeks ways to connect the university to art communities around the world. He was formerly Director of the Muskegon Museum of Art and a staff member of the Detroit Institute of Arts. A native of Austria, Matthews has conducted dozens of worldwide tours over the past 25 years.

Nicole Webb
Collections Manager
[email protected]

As Collections Manager for the Art Gallery, Nicole:
- Manages the more than 15,000 works of art in the GVSU Collection, which are on display in every GVSU building and stored within art gallery storage
- Maintains the online database of collection records
- Assists with installation of the permanent collection throughout campus and in exhibits

Nicole is a Grand Valley State University alumna, graduating in 2009 with a bachelor's degree in both art history and anthropology. She then completed her master's degree in Historical Administration at Eastern Illinois University in Charleston, Illinois. Nicole then made a cross-country move to work as the Curator of Collections at the Historical Museum at Fort Missoula in Missoula, Montana, where she managed the exhibits and a permanent collection of over 40,000 artifacts.
After enjoying the mountains for over seven years, she and her family packed up and came back home to the beautiful Great Lakes of Michigan to rejoin the GVSU Art Gallery staff as Collections Manager in 2018.

Joel Zwart
Curator of Exhibitions
[email protected]

As Curator of Exhibitions for the Art Gallery, Joel:
- Manages exhibition development, design and installation at all university galleries
- Curates exhibitions with strong interdisciplinary and multicultural themes
- Assists with placement and installation of the permanent collection throughout all university buildings and outdoor spaces
- Oversees the design and production of materials related to exhibitions and special projects

Joel has over 20 years of experience in the museum field, having organized more than 200 exhibitions. He joined the GVSU Art Gallery team in 2016 after serving for 14 years as Director of Exhibitions at Calvin College. In that role, he oversaw the development and opening of the new Center Art Gallery, managed the (106) Gallery from 2006 to 2016, and led the reorganization of the college's permanent collection of art. He also served as Director of Education at the Holland Historical Trust, and has been an independent curator, consultant and exhibition designer. Joel received his B.A. in History with minors in Art and French from Calvin College, and his M.A. in Historical Administration from Eastern Illinois University, alongside an assistantship at the Tarble Arts Center. He continues to serve on various committees at museums and non-profit organizations.
https://www.gvsu.edu/artgallery/meet-the-staff-13.htm
Art Beat Miami is an experience of art, cultural exchange, food and music inspired by Haiti and artists worldwide. During Art Basel Week, the Little Haiti community invites you to discover multidisciplinary works of art by internationally recognized artists, including renowned Haitian painter Jude Papaloko and sculptor James Mastin, at the Little Haiti Cultural Center, the Caribbean Marketplace and local galleries throughout the Northeast Second Avenue corridor. Enjoy live music, food, mural exhibitions, special events, and conversations with artists.

The Little Haiti Optimist Foundation and Northeast Second Avenue Partnership (NE2P) invite you to participate in the 4th annual Art Beat Miami during Miami Art Week (Art Basel Miami Beach), December 6-10, located inside the Caribbean Marketplace at the Little Haiti Cultural Complex. Art Basel Miami Beach has become an international mecca for art lovers, attracting visitors from across the globe. In its 15th year, this art fair showcases artwork and galleries from countries all over the world, drawing more than 100,000 visitors each year. Art Basel acts as a catalyst, spawning special exhibitions at museums and galleries across the city. Satellite locations, including Miami Beach, Little Haiti, Midtown, the Design District and Wynwood, transform the city into a dense and dynamic cultural hub for the week.
http://artbeatmiami.com/about-us/
Nicknamed the Athens of South America for its numerous universities and libraries, Colombia's capital is also home to a plethora of art galleries and museums exhibiting Colombian and international art. Alongside the traditional contemporary galleries, many venues strive to distinguish themselves through particular cultural activities or specializations. Here we take a look at Bogotá's best contemporary art galleries.

Thierry Harribey, a Frenchman passionate about art, decided to restore his old colonial home, converting it into a small, independent art gallery. A few months later, in February 2012, Neebex opened its doors. Located in the city's historical neighborhood of La Candelaria and surrounded by four universities, a library, and many museums, Neebex has developed an academic, cultural and touristic atmosphere. The gallery proudly hosts all types of artistic work, from videos to performances, and puts a special emphasis on promoting young, talented Colombian artists still unknown to the general public. The collective, thematic exhibitions invariably show the exciting side of Colombia's new art.

Located in a commercial area, Cero Galería features visual, plastic, multimedia, and architectural arts, but specializes in video art. According to its director, Leonor Uribe Joseph, the gallery is also a platform for managing and conceiving socio-cultural projects, often through collaborations with public and private organizations. Cero Galería was inaugurated in October 2007 and focused its first exhibitions on video art, such as the II Inter-American Biennial of Video Art of 2006-2007, which consisted of works selected in the competition organized by the Cultural Centre of the Inter-American Development Bank. In April 2013, the gallery organized an exhibition called La Generación Emergente, gathering eight young artists from Bogotá and Medellín who were prominent in the contemporary art scene but had not yet won any prizes.
Galería Baobab opened in 2002 with the aim of supporting well-established artists while also exhibiting and selling the work of emerging young Colombian practitioners. Without any particular specialization or limiting preference, Baobab welcomes all types of visual art. It intends to play a role on the national as well as the international stage, which is why it has participated in important international fairs, such as the MIA (Miami International Art Fair) in Miami, the first edition of the Art Naples fair in 2011, and the Next Art Fair in Chicago in 2010. Galería Baobab is located in a small, isolated street nicknamed la calle de los anticuarios for the numerous antique shops that dominate it.

Part hairdressing salon, part gallery, La Peluquería defines itself as a social space for emerging art and artistic hairstyling. In 2008, Melissa Pérez and Maritza Alvarez bought an old house in La Candelaria and refurbished it to create a unique atmosphere: an empty gold frame hanging on a wall decorated with a large black and white drawing, a cowskin chair in the middle of the room, and other vintage decorative items, such as a big, flashy red Coca-Cola fridge. This self-confessed 'alternative space' is now home to the Peluqueras asesinas, nine women bound by a vision of hairstyling as an artistic practice. They organize free haircuts once a week and regularly hold cultural events to reach out and spread the word about their art.

LaLocalidad is an alternative space where art exhibitions are complemented by cultural, social, and artistic activities. The building's minimalist, industrial design provides a contrast to the diversity and originality of the activities that take place within.
The complex is a centre for socio-cultural practice and personal well-being, featuring Step Ahead, a fitness studio offering yoga courses; Blue Moon Chocolate, which sells handcrafted chocolates; SHAKE IT, a bar serving milkshakes and freshly ground Colombian coffee; an 800m² room for exhibitions; and a floor for artistic workshops with Ángela Aristizabal and Danilo Rojas. It showcases art media and hosts a great variety of events, such as film screenings, official book and music album launches, and conferences related to art and culture.

Located on the first floor of the Museo de Arte del Banco de la República, El Parqueadero was initially meant to serve as the museum's car park. However, with its experimental design being the perfect backdrop for stimulating artistic creativity, the floor was converted into something completely different and innovative: a unique cultural venue. In addition to being a space for exhibitions, El Parqueadero also defines itself as a laboratory for workshops and artistic productions. Relying on the public's participation, active collaboration, and knowledge-sharing, this cultural center is home not only to numerous projects and artistic gatherings but also to a library where books and publications may be freely consulted.

AlcorrienteARTE is a gallery located in Quinta Camacho, a neighborhood famous for its English style. The gallery regularly hosts exhibitions, as well as conferences related to art, music, dance, and theater. Monthly events are organized to encourage encounters between artists and the general public, and more particularly with collectors. Through its programme, AlcorrienteARTE aims to support and promote all types of talent, from young emerging practitioners whose work has never been exhibited to more experienced artists. Past exhibitions have included thematic shows on memory and architectural drawing, and mixed-media artist Marcos Roda's exhibition Scrambled Times.
AlcorrienteARTE also gives part of the funds it raises to social works in order to contribute to the development of the city.

Galería MÜ is the first gallery in Colombia entirely dedicated to fine art photography. According to Andrew Ütt, co-director of the gallery, photography has a rich history in Colombia. Galería MÜ aims to contribute to this heritage through its artistic presentation of photography while showcasing Colombians' beliefs. Wishing to share their passion, Andrew Ütt and Carolina Montejo – the other co-director – organize workshops on the history of photography, while Francisco Cruz Florez, a Colombian photographer, has conducted studio classes on basic techniques of the art form. The shop attached to the gallery specializes in photography books and prints.

Due to space restrictions, Galería El Garaje focuses only on photography, drawings, and paintings. Dedicated exclusively to promoting young talent, Galería El Garaje has a set of strict rules regarding each artist it represents: they must either be a student enrolled in an art school and in their final year of university, or a young professional artist. Since its opening in 2004, the gallery has functioned as a springboard for young artists wishing to start a career in the field. Constantly looking for new talent, the director, Enrique Soto, visits universities to meet students, selecting artists with two criteria in mind: quality and the method of production.
https://theculturetrip.com/south-america/colombia/articles/the-10-best-contemporary-art-galleries-in-bogota-exploring-colombia-s-creativity/
University Gallery's contribution to scholarship recognised by HEFCE

The Stanley and Audrey Burton Gallery has been awarded a £50,000 share of HEFCE funding for HE museums and galleries which make a significant contribution to research and scholarship. The funding was agreed following an independent review by experts from the museum and higher education sectors, chaired by Diane Lees CBE, Director-General of the Imperial War Museums. The application process was highly competitive, and the panel noted the outstanding quality and compelling evidence provided in the submissions.

Diane Lees said: "As a panel, we found a truly inspiring array of case studies which demonstrated the range of research that university museums, galleries and collections carry out. The total funding requested exceeded the total funding available, and the quality of the submissions did not make this an easy process."

University Librarian Stella Butler said: "We are delighted to be receiving HEFCE funding. This support will enable us to share our wonderful collections with communities and individuals beyond the campus. Academic colleagues work with us to prepare our exhibitions and events, enriching the cultural landscape of Leeds and West Yorkshire."

The Gallery, which is open to the public, hosts both the University's exceptional art collection and innovative temporary exhibitions. Past exhibitions have included a major retrospective of Maurice de Sausmarez's work, opened by his former student Sir James Dyson, and a yearly exhibition that showcases the work of the top students from the University's School of Design and School of Fine Art, History of Art and Cultural Studies. The Gallery also cares for the University's successful Public Art programme, which includes the recent loan of Barbara Hepworth's Dual Form and the reinstatement of Hubert Dalwood's Untitled Bas-Relief.
The events programme takes inspiration from the art collection and the exhibitions on display; previous events have included guest lectures, artist workshops, and regional and national events such as Light Night and Museums at Night.
https://forstaff.leeds.ac.uk/news/article/5512/university-gallery-s-contribution-to-scholarship-recognised-by-hefce
The Center for Visual Arts – Lee Gallery is pleased to announce the opening of the exhibition "Restrain, Resensitize" on Monday, March 31, with an artist talk and closing reception planned for Friday, April 11, 6-8 p.m. The exhibit showcases the M.F.A. creative research and final thesis of printmaking graduate student Adrienne Lichliter.

Lichliter uses a unique printing process, implementing techniques with wood and copper that allow the natural media (the grain, notches, and wearing of the wood, as well as the patina finish of the metal) to surface in the work, creating a swarming dynamic of textural intricacies and a depth of delicate mark-making. She allows the visual effects of the reactive material to work with and against the sensitive marking of her hand to create a conversational push and pull of visual focus and to highlight the rich and quiet vibrancy of the medium. The artist writes, "The artwork hovers between paradoxes: dissolve and formation, density and void, focal point and dispersion, accident and intention." By enhancing the viewer's experience of the essence of the medium in a "modest and restrained aesthetic," the emphasis shifts from traditional object-subject matter to a relinquished revelry in the spontaneity and honesty of the mark of the artist's hand. This is Lichliter's sincere directive: "[that] there is potency in something that can be comfortably indecisive and unclear. With art that resists assertion and clarity, I hope to re-sensitize the viewer."

The visual production is a small part of the research and creative development pursued over the two and a half years of a student's graduate study in pursuit of a Master of Fine Arts degree at Clemson University. Students explore concepts, purposes of intent, art historical discourse, personal histories, and new processes, creating a conceptual foundation for their visual work. For the artist, this is an ever-evolving and essential process in their creative research.
The public is invited to join the conversation by attending the artist talk, scheduled for Wednesday, April 9, 2:30-3 p.m., and again on Friday, April 11, at 7 p.m. during the closing reception for the exhibit. The exhibit runs March 31 through April 11 at the Lee Gallery, with an artist reception scheduled for Friday, April 11, 6-8 p.m. The Lee Gallery at Clemson University is open Monday through Friday, 9 a.m.-4:30 p.m. Student exhibits, artist receptions and talks are free to the public and hosted by students and their families. For more information about the exhibit, contact Lee Gallery Exhibits Preparator Jac Kuntz at [email protected].

About the Art Galleries and Exhibits at the Center for Visual Arts – Clemson University

There are several galleries on and off campus maintained by the Center for Visual Arts through the Lee Gallery and the Center for Visual Arts – Greenville. Exhibitions on and off campus provide the University and surrounding community with access to regional, national and international visual arts and artists. The Lee Gallery and CVA-Greenville also provide programmatic offerings such as artist presentations, guest speakers, walking tours, and special events designed to introduce audiences to the creative research, influences and ideas being explored by artists showcased in the galleries. At the end of each semester, the Lee Gallery showcases artwork by undergraduate and graduate students enrolled in the Department of Art academic program. Students are required to present a final thesis of their creative research in a professional exhibition format as part of their degree fulfillment. Artists included in exhibitions are asked to deliver a public presentation about the content, inspiration and historical context of their work. Artists' presentations serve to provide the community with an access point for understanding artistic research practice and individual motivations for creating visual art.
Galleries, special exhibits, artwork and showcases can be found on the main Clemson campus in our flagship Lee Gallery, located in Lee Hall I, as well as the Acorn Gallery in Lee Hall II. Throughout campus, visitors can also enjoy exhibits showcased at the College of Architecture, Arts and Humanities Dean's Gallery in Strode Tower, the Exhibit Showcase in Sikes Hall, and the Brooks Center for Performing Arts. Gallery showcases off campus can be found at the Center for Visual Arts – Greenville, located in the Village of West Greenville; the International Center for Automotive Research (CU-ICAR) in Greenville; and the Charles K. Cheezem OLLI Education Center in Patrick Square and the Madren Center at the Conference Center and Inn, both in Clemson.
https://blogs.clemson.edu/visualart/2014/03/31/restrain-resensitize-mfa-student-exhibition-on-display-in-the-lee-gallery/
The Birke Art Gallery is Marshall University's on-campus showcase for the School of Art & Design's talented students, in addition to professional exhibitions. Curators often create group shows that say something about a certain theme, trend in art, or group of related artists. The Student Art Exhibition showcases more than 200 artworks.

RAW Indianapolis is an independent arts organisation whose mission is to provide an alternative idea of art across all genres of creative experimentation, including independent film, fashion, music, visual art, performing art and beauty. There are a number of online art catalogues and galleries which have been developed independently of the support of any individual museum. The Arts Council of Indianapolis is also responsible for Public Art Indianapolis, the city's public art programme, and presents an up-to-date calendar of arts exhibitions, performances, and events in central Indiana.

The work appears to be a messy web of hundreds of tangled wires through which sounds travel, following an algorithm of artificial life. They create relationships of dominance and meaning between subjects and objects, modifying our cognitive processes and the symbolic relationships we create with the environment. That dedication is found in the teachers, mentors, students and visiting artists at MCC. Gallery Director Beth Shadur recommends programming and often acts as curator for shows. The International Centre for the Study of the Preservation and Restoration of Cultural Property (ICCROM) is an intergovernmental organization devoted to the conservation of cultural heritage.
https://www.volumehaptics.org/cleve-carney-art-gallery.html
A new exhibition at the Sue and Leon Genet Gallery explores the role that the online marketplace Etsy plays (or will play) in the art world and in what is considered "fine art." "Caveat Emptor: Etsy in the Art World" aims to present the historical prevalence and popularity of mass-produced objects, as well as how online platforms such as Etsy offer a departure from traditional work made for the masses.

Curated by Molly Wight '22, a museum studies graduate student in Syracuse University's College of Visual and Performing Arts (VPA), the exhibition is the culmination of independent study and research, and showcases several artists, including Andy Warhol and Winslow Homer, along with Japanese woodblock prints and selections from the curator's personal collection. The exhibition will be on view at the Sue and Leon Genet Gallery, located on the first floor of the Nancy Cantor Warehouse, 350 W. Fayette St., Syracuse, from March 4 to April 3. A reception will be held Monday, April 11, from 5-7 p.m. Gallery hours are Monday-Friday, noon-5 p.m., or by appointment.

"Caveat Emptor" examines not only how artists who market their artwork on Etsy interact with the art world, but also the precedent for mass-produced art and how Etsy art both shares similarities with and differs from types of art like ukiyo-e prints and Alphonse Mucha's Art Nouveau posters. Artwork marketed on Etsy is contemporary art in that it is created by living artists, but it is very different from the kind of contemporary art that most museums collect.

Based in VPA's School of Design at the Nancy Cantor Warehouse, the Sue and Leon Genet Gallery is a student-managed space hosting exhibitions from the school's students, faculty and alumni. Programming seeks to engage the University and downtown Syracuse community with exhibitions inspired by and related to the field of design.
https://vpa.syr.edu/new-genet-gallery-exhibition-explores-etsys-role-in-the-art-world/
As part of our exhibition program, the RIC publishes photo books in collaboration with a variety of partners and sponsors. The most recent addition is The Faraway Nearby: Photographs of Canada from The New York Times Photo Archive, published in partnership with and distributed worldwide by Black Dog Publishing. This catalogue accompanied an exhibition of the same name, and features texts by an international team of authors who explore how Canada's visual identity in the twentieth century was constructed, from within and without, through the dissemination of images in this influential media outlet.

Student Gallery

Apply for a chance to have your own exhibition in our student gallery! Deadline: May 30, 2019, 11:59 p.m. EST.

The Student Gallery showcases the art and curatorial practices of Ryerson University's current undergraduate and graduate students and recent alumni from all disciplines. Rigorous yet inclusive, the program engages audiences with important issues through group or solo exhibitions of contemporary art and historical lens-based media. Presenting seven exhibitions per year, the Student Gallery provides valuable professional experience in the curation and display of artwork. The exhibitions are selected annually, following a call for submissions, by a committee composed of staff members from the RIC, along with students and faculty members from the School of Image Arts.
https://ryersonimagecentre.ca/gallery/
"Troubling Beauty" showcases original paintings and hand-cut paper collages on view at the Lee Gallery at the Clemson University Center for Visual Arts, with an artist talk and closing reception Thursday, Feb. 8.

"The World of Jan Brett" is now on display until January 28.

An interest in videography has taken Kerns to places she never imagined.

A surprise performance by the singer capped a memorable day in Death Valley.

The ceramics studio in the Department of Art at Clemson University will hold the annual Fall Ceramics Bowl Sale from noon to 5 p.m. Wednesday, Nov. 15, in the hallway in front of the Lee Gallery in Lee Hall.

Some of Clemson University's most-honored fiction writers and poets will read at Writers' Harvest, a campus benefit for Loaves & Fishes and Paw Pantry, at 6 p.m. Nov. 14 in Lee Hall 2-111. The list of performers includes Keith Lee Morris, Caroline Young, Will Stockton and Mike Pulley. Faculty will be joined on stage by this year's winners of the Writers' Harvest Student Reader Awards.

The Spirit of the Tiger is a new addition to the area near Tiger Band Plaza. The work of artists T.J. Dixon and Jim Nelson is now on display in the Brooks Center Lobby.

An exhibit celebrating the artwork of eight award-winning Upstate women is being presented at the Lee Gallery at the Clemson University Center for Visual Arts through Nov. 8.

With its medieval architecture, blue gulf waters, galleries and bountiful history, it's no wonder Hannah Gardner, a Clemson University senior studying visual arts, chose to spend six weeks in Genoa, Italy.

Shakin' the Southland: Growing up, this Tiger was told that he would never make a living from his music. Today, he does just that.

The Clemson University Tiger Band helps add to the excitement of football weekends with the Tiger Band Kidz Klub and 90 Minutes Before Kickoff.

The Center for Visual Arts (CVA) at Clemson University has something for everyone this fall, from visiting artists to student exhibitions and seminars.
Designing collaboration: This New Orleans-born Tiger ditched her law school plans to accept a scene-design assistantship and never looked back. She has worked all over the country, and today she combines her love of art and theatre, not only to design sets but to create opportunities for interdisciplinary collaboration with Clemson faculty and students to […]

The Clemson University College of Architecture, Arts and Humanities' School of Architecture will host the first-ever "Women in Architecture" lecture series this fall with five notable women in the industry. Co-sponsored by the Clemson Architectural Foundation and AIA Charleston, the series kicks off Monday and runs through Oct. 23. There will be four lectures held in Clemson and two in Charleston.
http://newsstand.clemson.edu/tag/arts/page/3/
For over two decades, Sharjah Art Museum has positioned itself as one of the most prominent cultural and art destinations locally, regionally and internationally, while providing a platform for local artists to showcase their talent. Unlike classical and contemporary Western artists, whose masterpieces find display for the world to see and admire, thousands of artists across the Middle East have long produced their works of art in silence and without recognition. The museum, which is celebrating its milestone silver jubilee this year, was established with the aim of bringing Arab artistic talent to light, and one can say it has been doing that splendidly over the years.

Manal Ataya, Director-General of Sharjah Museums Authority, told Khaleej Times: "It places Arab artists on the global map. Visitors get a glimpse of the region's rich arts ecosystem that is otherwise unknown."

The museum, which opened its doors in 1997, has been a pillar of the art scene in the country and region. With free entry, it houses dynamic expressions of Arab and international artists, showcased through more than 300 artworks across 64 halls. Located in the traditional Arts Square in the Al Shuwaiheen area, the museum is surrounded by heritage buildings which combine to give visitors an authentic experience of Sharjah's rich history and culture. Besides organising temporary exhibitions, the building, which consists of two wings interconnected by two passageways over an interior street, displays an impressive permanent collection of Arab modern art, including works by artists such as Abdulqader Al Rais, Louay Kayali, Mona Saudi, Najat Mekky, Bashir Sinwar and Faiq Hassan, among others. It also has a permanent gallery with highlights from the renowned orientalist collection of His Highness Dr Sheikh Sultan bin Muhammad Al Qasimi, Supreme Council Member and Ruler of Sharjah.
Through the works of emerging and established artists, visitors can also get a glimpse into the development of the artistic movement in the Arab region through selected artworks from the collection of the Barjeel Art Foundation on display at the museum. Supporting facilities like the Emirates Fine Arts Society, established in the surrounding area in 1980, boosted the status of the museum as a meeting point for artists, experts, academic faculty and researchers. "Artists from all over the region met and shared their ideas at a time when art was still growing and maturing in the region. This heritage area served as one of the earlier platforms for artists to flourish," said Ataya.

Later, in 2011, the annual 'Lasting Impressions' exhibition series was launched to feature the works of the Arab region's most distinguished artists. "Many of these Arab artists' practices span decades; however, they have not yet been given the opportunity to have a retrospective that showcases their work. The space we provide contributes to exposure for the artist and introduces their talent to the UAE and the world," added Ataya.

The museum has always been part of the large-scale contemporary art exhibition Sharjah Biennial, and also hosts the annual Sharjah Islamic Arts Festival, which is organised by the Sharjah Department of Culture. With increased exposure for artists, Sharjah Museums Authority is now eyeing equal representation for both genders in the field. "We are focusing on highlighting the works of more women in the field, boosting diversity and giving exposure to underrepresented categories in the art scene," stressed Ataya. By building academic links, Sharjah Museums Authority aims to enrich the online research corpus on Arab artists, establishing the museum as an important educational resource for researchers and university students.
Ataya said: "One of the biggest challenges facing the art scene in the Middle East is the absence of credible academic content on Arab artists, especially when compared to the West." The galleries, exhibitions and publications at the museum provide university faculties and students with rich primary research resources and archives. By hosting artists, the museum looks to inspire interest among researchers in tackling different forms of art, under the broader objective of making regional art as accessible as possible.

Ataya said: "Art is the mirror of humanity and a visual documentation of the social and political life of any country. It should be the channel through which we introduce our story to the rest of the world."

"The museum plays a critical part in the journey to open the door for people to learn. Over the years, it has been the impetus for major art business deals, academic publications, and public awareness of different forms of art," she added.

The museum's library offers more than 5,000 Arabic and English books related to artists and art history, as well as other genres such as design, photography, sculpture and architecture. Establishing the museum as an inclusive space is another key focus: the facility continues to hold an annual series of dynamic free programmes that include activities, discussion sessions, and interactive workshops targeting families and children, while also regularly organising various community programmes specifically designed for children with disabilities.

On the museum's 25th anniversary, Ataya said the authority renews its pledge "to support Arab artists and dedicate efforts to provide them with greater exposure through exhibitions and publications. We are also committed to our audience of families and school children, ensuring they have the opportunity to learn about art and enhance their creativity through workshops and other programmes."
https://www.khaleejtimes.com/arts-and-culture/sharjah-art-museum-25-years-of-making-art-speak
AIR is a contemporary gallery and exhibition space showcasing emerging and early-career artists, bringing cultural diversity and international collaboration in the arts to the local community. Our grass-roots programme focuses on providing a platform for artists starting out or in the early stages of their careers. Our aim is to provide opportunities for experimentation and exposure through exhibitions, events and residency schemes. The gallery has a structured exhibition programme which includes annual open exhibitions, outreach programmes and opportunities for existing local artists and art groups. By hosting regular exhibitions, we hope to increase awareness and education of the arts in the local area. The gallery space is also available to hire for meetings, projects and exhibitions. Please contact the gallery for details on hire costs.
https://www.artrabbit.com/organisations/air-gallery-altrincham-united-kingdom
Welcome from the Chair of Art & Design

Welcome to the Department of Art & Design, one of the oldest departments in Canada devoted to the study of both the practice and the history of the visual arts and design. The Department of Art & Design traces its history back to 1945, when the University of Alberta established the first fine arts department in the province. Today, we are a vibrant community of artists, designers, and academics devoted to supporting our students, advancing our respective disciplines and positively impacting the world. We recognise the unique practices and responsibilities of our disciplines in creating new, inspiring and proactive futures.

We offer internationally competitive undergraduate and graduate programs in design, the fine arts, and the history of art, design, and visual culture. Our programs are taught by remarkably accomplished practitioners and scholars. Our design programs focus on improving the human condition and benefit from situating design within a research-intensive university, enabling unique collaborations for the discipline. Our fine arts programs have strong ties to public and artist-run galleries and institutions within the community and beyond, which expand the breadth of students' university experience. The History of Art, Design, and Visual Culture program helps bring these units together, providing the historical, theoretical, and critical framework for understanding these fields and their impact on society.

Our students have extensive opportunities to situate their academic work outside the studio and classroom through practicums, work studies and study abroad, gaining valuable experience and training for the transition to life after university, whether in work, research or further studies. Our FAB Gallery features a range of exhibitions throughout the year, including touring design shows, graduate MFA exhibitions, year-end BFA and BDes showcases and exciting artist collectives.
For more information please contact me at (780) 492.7864 or [email protected].
https://www.ualberta.ca/art-design/about-art-and-design/welcome-from-the-chair.html
A small selection of artworks and social history items from the gallery and museum collections portraying the people of Bradford, celebrating the district’s talents and inspiration. With links to glass designer Kalim Afzal, Manningham-based photographer Nudrat Afza and, of course, David Hockney. From innovators such as Titus Salt, Christopher Pratt and the Brontë sisters to reformers Miriam Lord, Margaret McMillan and Richard Oastler, visit this display of Bradford’s history and talent. The exhibition showcases resilient strength and challenging inspiration; the display includes a selection of watercolours of Bradford street characters and dignitaries of the Victorian period by John Sowden.

Over 16,000 digitised images from BMG collections are available to view. See the Bradford Museums and Galleries archive images at www.photos.bradfordmuseums.org. Bradford Museums and Galleries tells the story of Bradford and its people through exhibitions, displays, education and community engagement across its four sites.

We Are West Yorkshire: A Celebration of People and Place

We Are West Yorkshire brings together five new exhibitions from the county’s local authority museums. Each exhibition celebrates some of the different qualities that make West Yorkshire so unique, as well as showcasing the amazing richness and variety of our museum collections. Castleford will look at the history and impact of rugby league, the sport that was born in West Yorkshire. Halifax will bring together some amazing art inspired by our county. Huddersfield will explore the lives of women from the region up until 1918. Leeds will showcase the stories and experiences of migrants to the city throughout history. We hope you will enjoy exploring this kaleidoscopic tribute to our county.
https://bradfordmuseums.org/whats-on/we-are-west-yorkshire-bradford-people
The nonprofit gallery and foundation strives to bring innovative and experimental visual art to a wide audience and to provide a place for the continued development of artistic potential, experimentation and dialogue. ARC also works to educate the public on various community-based issues by presenting exhibits, workshops, discussion groups and programs for, and by, underserved populations.
Through Nov. 3: "BLOX": The 40th-anniversary exhibit showcases the work of current and affiliate ARC gallery members, with the work focusing on constructing art as a sum of parts.

Art Institute of Chicago
111 S. Michigan Ave.; 312-443-3600, artinstituteofchicago.org
The Art Institute is one of the world's most famous art museums, particularly known for its collection of French Impressionist and Post-Impressionist paintings by artists such as Monet, Renoir, Seurat and Caillebotte. Famous works on view include "A Sunday on La Grande Jatte — 1884" by Seurat, "American Gothic" by Grant Wood and "Nighthawks" by Edward Hopper. Other galleries take visitors through the art of ancient, medieval and Renaissance Europe; decorative arts such as the popular Thorne Miniature Rooms; textiles of the world; prints and drawings; and architecture and photography. The museum's 264,000-square-foot Modern Wing, designed by Renzo Piano, features 20th- and 21st-century works, including European painting and sculpture, contemporary art, architecture and design, and photography and electronic media.
Ongoing: "Chagall's America Windows": One of the Art Institute's most asked-about works, the large stained-glass windows by Belarusian artist Marc Chagall commemorate the American Bicentennial and are the centerpiece of a presentation of public art in the Rubloff Auditorium.

Art Museo
InterContinental Chicago O'Hare, 5300 N. River Road, Rosemont; 847-544-5300, icohare.com
The InterContinental Chicago O'Hare hotel may not be the first place you would think to go for fine art, but it can be now. This gallery space at the hotel showcases work by recent graduates of Chicago-area fine art schools and also displays work from the gallery's permanent collection, which includes art by Wesley Kimler and Ronald Clayton. Gallery tours are also offered.
Ongoing: "Elevate": Artists turn simple, everyday things that can be overlooked into art that invites viewers to "stop, look, listen, enjoy." Artwork on display includes Robert Rauschenberg's screen prints of news clippings, landscapes by Kevin Malella and paintings with objects of nature embedded in them by Constance Pohlman.

Beverly Arts Center
2407 W. 111th St.; 773-445-3838, beverlyartcenter.org
The multidisciplinary, multicultural center offers fine-arts education, programming and entertainment, including art, music, dance and theater, as well as exhibitions of contemporary art in four galleries by established and emerging artists.
Through Sunday: "Art 19": Part of Chicago Artists Month, the exhibit features the work of Brian Richard, Brigit Scales Fennessy, Susannah Papish, Robert Workman, Raymond Broady, Danielle Principato, Sandra Leonard, John Colson, Andy Plioplys, Dalton Brown, Cecil McDonald, Sally Campbell, Baird Campbell, Cathy Sorich and Jon Bakker.

DePaul Art Museum
935 W. Fullerton Ave.; 773-325-7506, museums.depaul.edu
The newly opened three-story museum more than doubles the space it previously occupied at DePaul's Richardson Library. It includes space for class use, programs and events. The museum reflects DePaul's broad commitment to the arts and parallels the university's Performing Arts Campaign to improve the physical space for theater and music education and performance on the Lincoln Park Campus.
Through Nov. 19: "The Nature Drawings of Peter Karklins": The abstruse pencil drawings in this exhibit by the architectural model maker, who became a security guard after computer technology took over model-making, come from work Karklins did while traveling to and from his job and at his desk on the night shift. Karklins' complex drawings are his interpretation of the "disturbing processes just beneath the surface of human life."

Elmhurst Art Museum
150 Cottage Hill Ave., Elmhurst; 630-834-0202, elmhurstartmuseum.org
Exhibiting late 20th- and 21st-century American contemporary art, the museum is located in an AIA award-winning building designed around McCormick House, one of only three Ludwig Mies van der Rohe-designed residences. In addition to exhibitions of art, from national touring shows to Chicago and Illinois artists, the museum offers public tours, programs, guest lectures and art classes.
Through Jan. 5: "No Rules: Contemporary Clay": The third exhibit in the museum's series on traditional ceramics and new developments is a group show focusing on clay-based work. A variety of styles are on display, from large- and small-scale sculpture and performance art to video and photography.

Hyde Park Art Center
5020 S. Cornell Ave.; 773-324-5520, hydeparkart.org
The "contemporary art exhibition space, learning lab, community resource and social hub for artists and art-curious alike" offers exhibitions by mostly Chicago-based artists, education programs for children and adults at all levels, and diverse creative programming.
Through Nov. 11: "Ground Floor": The exhibit highlights 11 recent MFA recipients who have, for now, chosen to remain in Chicago. "Ground Floor" is installed with each piece given plenty of room and playing nicely with the others, with no single work grabbing an unseemly amount of attention.

Institute of Puerto Rican Arts and Culture
3015 W. Division St.; 773-486-8345, iprac.org
A member of Museums in the Park, the institute celebrates Puerto Rico's identity and heritage, offering community arts and cultural programming, including visual art exhibitions, hands-on community arts workshops, films in the park and an annual outdoor fine arts and crafts festival.
Through Dec. 31: "Fotos de Ramon Frade (Frade's Photos)": The exhibit is a collaboration between IPRAC and the Dr. Pio Lopez Martinez Museum of Art at the University of Puerto Rico in Cayey and includes photographs by the early-20th-century Puerto Rican artist Ramon Frade Leon. The exhibit is also a study of Frade's creative process as a painter before photography became his go-to medium. Sketches and drawings are on display next to the finished paintings to show the process from rough draft to finished artwork.

Intuit: The Center for Intuitive and Outsider Art
756 N. Milwaukee Ave.; 312-243-9088, art.org
The nonprofit organization is dedicated to presenting self-taught and outsider art and holds international exhibitions, a permanent collection of more than 1,100 works and the Henry Darger Room Collection. Intuit's Robert A. Roth Study Center, a noncirculating collection focused primarily on outsider and contemporary self-taught art, is a resource for scholars and students and offers educational programming for people of all interest levels.
Through Jan. 5: "Hawkins/Hawkins: One Saw Everything, One Saw Nothing": The exhibit brings together the work of two folk art masters, William Hawkins and Hawkins Bolden, who use everyday materials to create complex images.

Lizzadro Museum of Lapidary Art
220 Cottage Hill Ave., Elmhurst; 630-833-1616, lizzadromuseum.org
Named for lapidary collector Joseph Lizzadro, whose hobby grew into the massive Lizzadro Collection, the museum displays more than 200 pieces of jade and other hard stone carvings. They include a nephrite jade imperial altar set completed during the Ming dynasty (1368-1644).
Ongoing: "The Rock & Mineral Experience": A hands-on exhibit where visitors can learn more about earth science, lapidary materials, mineral specimens and fossils.

Loyola University Museum of Art
820 N. Michigan Ave.; 312-915-7600, luc.edu/luma
Loyola's art museum is dedicated to exhibits that focus on spirituality in art.
Ongoing: "Gilded Glory: European Treasures From the Martin D'Arcy Collection": The collection of more than 500 works from the 12th through 19th centuries is considered one of the finest collections of medieval, Renaissance and Baroque art in the Midwest.

Mary and Leigh Block Museum of Art
Northwestern University, 40 Arts Circle Drive, Evanston; 847-491-4000, blockmuseum.northwestern.edu
The North Shore fine-arts museum focuses on visual arts programming, from exhibitions and lectures to symposiums and workshops with artists and scholars, as well as screenings of classic and contemporary films at Block Cinema. An expanding permanent collection consists primarily of works on paper.
Through Dec. 9: "Shimon Attie — The Neighbor Next Door": The multimedia artist, known for two decades of site-specific work reflecting on the relationship between place, memory and identity, has re-envisioned his 1995 project featuring archival film footage taken secretly by people forced into seclusion by the Nazis. The original installation was projected onto the sidewalks of Amsterdam from apartments where many groups hid during World War II.

Museum of Contemporary Art
220 E. Chicago Ave.; 312-280-2660, mcachicago.org
One of the nation's largest modern art museums offers thought-provoking art created since 1945. The permanent collection includes work by Franz Kline, Andy Warhol and Jeff Koons. The museum highlights Surrealism of the 1940s and '50s, Minimalism of the 1960s, conceptual art and photography from the '60s to the present, recent installation art, and art by Chicago-based artists. Besides art in all media and genres, the MCA has a gift store, bookstore, restaurant, 300-seat theater and a garden with views of Lake Michigan.
Through Dec. 31: "Work No. 1357, MOTHERS": The 48-foot-wide, 20-plus-foot-tall sculpture on the museum's plaza, which rotates 360 degrees, is the largest kinetic sculpture London-based artist Martin Creed has created. The exhibit is part of Creed's yearlong residency at the MCA, "Martin Creed Plays Chicago," and the latest show in the MCA Chicago Plaza Project series.

National Museum of Mexican Art
1852 W. 19th St.; 312-738-1503, nationalmuseumofmexicanart.org
Located in Chicago's Pilsen/Little Village communities, the museum exhibits traditional and contemporary Mexican art: prints and drawings, papier-mache, ceramics, photographs and avant-garde installations from local and international artists. NMMA also brings children in by the busload to see art demonstrations and hear storytellers. Each year around Halloween, it hosts the city's most-visited Day of the Dead exhibit.
Through Dec. 16: "Dia de los Muertos": Now in its 26th year, the largest annual U.S. Day of the Dead exhibit includes ofrendas and Hanal Pixan representations in installations and other works from a diverse group of Mexican artists.

Logan Center for the Arts
University of Chicago, 915 E. 60th St.; 773-702-2787, logancenter.uchicago.edu
The facility's mission is to be a creative hub for students, artists and the public through programming in cinema, media studies, creative writing, music, theater and performance art. The 11-story, 184,000-square-foot cultural arts center, designed by Tod Williams and Billie Tsien, houses classrooms, studios, rehearsal rooms and exhibition and performance spaces.
Ongoing: "2011 Artists-in-Residence": The exhibit from Arts & Public Life and the Center for the Study of Race, Politics and Culture features new works by Faheem Majeed, Cathy Alva Mooses and Eliza Myrie.

Smart Museum of Art, University of Chicago
5550 S. Greenwood Ave.; 773-702-0200, smartmuseum.uchicago.edu
The museum is home to special exhibitions and a collection that spans 5,000 years of artistic creation. Working in close collaboration with scholars from the University of Chicago, the museum has established itself as a leading academic art museum and an engine of adventurous thinking about the visual arts and their place in society.
Through Dec. 16: "Chris Vorhees and SIMPARCH: Uppers and Downers": The latest installation in the Threshold series is an abstract landscape that fills the museum's reception hall. A kitchen cabinetry, countertop and sink formation is reworked into a large-scale rainbow arching over a waterfall, playing on the utopian promise that restraint yields bliss.

SAIC Sullivan Galleries
33 S. State St., Seventh Floor; 312-629-6635, saic.edu/exhibitions
The teaching gallery brings to Chicago audiences the work of acclaimed and emerging artists while providing the School of the Art Institute of Chicago (SAIC) and the public opportunities for direct involvement and exchange with the discourses of art today, with shows and projects often led by faculty or student curators.
Through Jan. 5: "Detroit, USA: Material, Site, Narrative": Detroit has become an emblem of the American city in decline, but there is more happening there than meets the eye. The Motor City's cultural scene is thriving, in part because Detroit's artists like to collaborate, and their projects are often centered on community development and urban rejuvenation. This show surveys a range of such works alongside responses to the city from student artists at the School of the Art Institute.

Submit information to [email protected].
https://www.chicagotribune.com/entertainment/ct-xpm-2012-10-24-ct-ent-1025-list-museum-art-20121024-story.html
On Thursday, May 18, the Yuchengco Museum offers a day of free admission to all its exhibitions and galleries as part of the worldwide celebration of International Museum Day (IMD). Every May 18 since 1977, the International Council of Museums has organized IMD to highlight the importance of museums as institutions that serve society and its development.

Museums are hubs for promoting peaceful relationships between people, and their collections offer reflections of memories and representations of history. IMD gives museums an opportunity to show how they display and depict traumatic experiences and to encourage visitors to think beyond their own individual experiences. This year, IMD focuses on “Museums and contested histories: Saying the unspeakable in museums,” encouraging institutions to play an active role in peacefully addressing traumatic histories through mediation and multiple points of view.

Each gallery at the museum showcases art and creativity in a wide variety of forms, from paintings and photographs to installation art, jewelry, couture, and home furnishings. See the retrospective of Burmese jewelry designer Wynn Wynn Ong, Ryan Arbilo’s photographs of overseas Filipino workers in France, and paintings by National Artists for Visual Arts from the museum collection.

The museum is located at RCBC Plaza, corner of Ayala and Sen. Gil J. Puyat Avenues, Makati. Museum hours are Monday to Saturday, 10 a.m. to 6 p.m. For more information, call (632) 889-1234, visit yuchengcomuseum.org, or follow @YuchengcoMuseum on social media.
https://www.manilarepublic.com/free-admission-yuchengco-museum-may-18-international-museum-day/4555/
Announcing “La Dérive”, an art walk and reception to be held on May 1, 2011, from 4 to 6pm in Pont-Aven. “La Dérive” showcases four exhibitions organized by students of the Pont-Aven School of Contemporary Art in collaboration with three Pont-Aven galleries. Participating galleries and artists include:

Galerie IzArt, 13 Rue du Port
“Image/Imago”: Danielle Dillon, Christopher Lee, Sarupa Sidaarth, Ray Zarnowitz, Nathan Zeidman.
http://www.galerie-izart.com/en/imageimago/
The MA in Exhibition + Museum Studies challenges students to consider the shifting and expanding role of visual culture in society and to scrutinize how methods of display alter, inhibit, or promote the work of artists. Students focus their questions and research on museums, galleries, and other forums for display, including alternative sites, communities, borders, and places. Exhibition and Museum Studies considers how socioeconomic, political, and cultural contexts affect creative production, and how exhibitions become, in and of themselves, contemporary art.

2 years | 42 units | 2 full-time semesters | 2 semesters of thesis and option for offsite work

Curriculum

With only 6 units required per semester in the second and final year (36 units total), students can take advantage of boundless opportunities to deepen their individual practice and create networks in the broader art world. MA scholars work alongside and in collaboration with artists in SFAI’s renowned MFA program. The program's structure allows students to:
https://sfai.edu/degree-programs/graduate/ma/exhibition-and-museum-studies-ma
Finding your favorite art blog or an exhibition-opening calendar is not easy, especially when the city offers 500 galleries, more than 150 independent project spaces, and 175 museums. That is why we decided to put together a reliable listing of 35 links to art event calendars, online magazines, art blogs, social networks for creatives and mobile apps that will guide you through the Berlin art scene. For easier orientation we created tags: #events #calendar #editorial #network #profiles #newsletter. This way, you can quickly navigate and narrow your choice. If we missed something important, write to us.

Art Event Calendars

INDEX Berlin
http://www.indexberlin.de/
Index Berlin informs about a selection of exhibitions and activities of contemporary art in Berlin.
#events #calendar #newsletter

Landesverband Berliner Galerien e.V.
https://www.berliner-galerien.de/
LVBG provides regular information on the current program of galleries and exhibition venues in Berlin, and offers a variety of platforms for members and interested parties, such as guided gallery tours and international trade fair presences.
#events #calendar #newsletter

Galerien Berlin Mitte
http://www.galerien-berlin-mitte.de/dates.html
A twice-yearly flyer and website that keeps you up to date on the galleries, exhibitions, and events around the Berlin Mitte area.
#events #calendar

Berlin Art Grid
http://berlinartgrid.com/
A platform providing information about ongoing and upcoming art events and exhibitions in galleries and venues. You can filter by area and genre of activity, as well as submit an exhibition.
#events #calendar #newsletter

Museumsportal Berlin
https://www.museumsportal-berlin.de/en/
Museumsportal Berlin is a comprehensive website covering the museums, memorials, palaces and art collections of Berlin. It offers not only an overview of Berlin’s unique selection of museums but also planning guidance, and it encourages exploration.
#events #calendar

The Official Events Calendar of Berlin
https://www.berlin.de/ausstellungen/
Events calendar of activities held at government buildings in Berlin. You can search by categories and keywords.
#events #calendar

Projektraumkalender
http://www.projektraeume-berlin.net/termine/
A listing of art events organized and curated by artist-run and project spaces.
#events #calendar

Berlin Gallery District
http://www.berlingallerydistrict.com/
Art in the city center: galleries around Checkpoint Charlie. More than fifty international galleries and five major museums within a square kilometer at the heart of the capital: this is the Berlin Gallery District.
#events #calendar

Online Magazines

Art-in-berlin
http://www.art-in-berlin.de/
Art-in-berlin provides up-to-date information about contemporary art events from museums, galleries, art associations and more in Berlin.
#events #calendar #newsletter #editorial

Berlin Art Link
http://www.berlinartlink.com/
Berlin Art Link is a platform offering all kinds of information about the local (Berlin) and international art and culture communities. They present the latest in art, design, music, film, fashion and architecture.
#events #calendar #newsletter #editorial

Bpigs
http://bpigs.com/
Bpigs is an artist-run communications platform. Their focus is on highlighting the contemporary art scene from the producer’s point of view. The exhibition guide comes out every two months and is distributed for free in project spaces, galleries, institutions, and bookstores.
#events #calendar #editorial #newsletter

Occulto
http://www.occultomagazine.com/
An independent magazine bringing together science, the humanities and the arts, mostly focused on the natural and formal sciences, their history and cultural relevance, and featuring original artists’ projects.
#events #editorial

Contemporary And (C&)
http://www.contemporaryand.com/
Contemporary And (C&) is a space for reflection on, and the linking together of, ideas, discourse and information on contemporary art practice from diverse African perspectives.
#events #editorial #newsletter

artmagazine
http://artmagazine.cc/
An online art newspaper: art criticism, exhibition reviews, reports on auctions and art fairs, events, commentaries and glosses on the art scene, an exhibition database, and tips for art collectors. artmagazine offers all readers the opportunity to give their opinion on articles, exhibitions and topics.
#events #calendar #newsletter #editorial

Kunsttexte
http://www.kunsttexte.de/
kunsttexte is a platform for academic writing in the areas of art history and visual history.
#editorial #newsletter

ASK HELMUT
https://askhelmut.com/berlin
Recommendations for Berlin concerts, exhibitions, films, dance, theater and everything in between and beyond.
#events #calendar

Mit Vergnügen
http://mitvergnuegen.com/
An online magazine from Berlin. They recommend concerts, exhibitions, parties and restaurants every day from a personal point of view.
#events #newsletter #editorial

Freunde von Freunden
http://www.freundevonfreunden.com/city/berlin/
An international network formed by individuals from diverse creative and cultural backgrounds, collectively interested in art, urban living, food, mobility and design.
#editorial #newsletter

ARTBerlin
http://www.artberlin.de/
ARTberlin.de is an online magazine for art in Berlin and around the world, featuring artists, collectors, and gallerists: the personalities behind the art. Here you can discover established and young artists at their favorite Berlin spots.
#events #calendar #editorial

Social Networks

Creative City Berlin
http://www.creative-city-berlin.de
A platform for artists, cultural producers and the creative industry in Berlin. They have the latest on funding programmes, workshops, events and job vacancies in town. In their CCB Magazine they speak to Berlin‘s creatives, connect with other partners and platforms and present key players from the industry.
#events #newsletter #network #profiles #editorial

Artconnect Berlin
http://www.artconnectberlin.com
A global and local social network for creatives, creative businesses and art lovers. The website makes it easy to engage and connect with your local creative scene, and to find and share job offers, events, projects and creative spaces.
#events #newsletter #network #profiles #editorial

Art Blogs

Art Fridge
http://www.artfridge.de/
A collection of essays, dialogues and interviews between emerging art historians, artists and curators.
#editorial

Berlin Poche
http://berlinpoche.de/
A platform reporting local cultural news from the perspective of the Francophile community.
#events #calendar

Berlin Global
http://www.berlinglobal.org/
An online news outlet providing a platform to report on cultural diplomacy news and practice.
#events #editorial

Berlin Loves You
http://berlinlovesyou.com/
A blog for news, events, startups, performers, food, restaurants, beer, live music, love, shopping, fashion, lifestyle and anything else to do in Berlin.
#events #calendar

Axel Daniel Reinert
https://axeldanielreinert.wordpress.com/
A blog covering art news and exhibitions in Berlin.
#events #calendar #newsletter #editorial

Mobile Apps

Exhibitionary
http://www.exhibitionary.com/
A mobile gallery guide covering global art destinations, from galleries to major institutions and experimental project spaces.
#events #calendar #newsletter

Art Rabbit
https://www.artrabbit.com/places/germany/berlin
A guide to the contemporary art scene, connecting thousands of art spaces, exhibitions and events to artists, art professionals, collectors, students and art-interested people alike.
http://berlinsessions.org/find-exhibitions-berlin-art-events-calendars/
This exhibition presents three newly acquired sculpture-based video works by Pfeiffer that focus on the history of sports culture and spectatorship.

REMIX: Sol LeWitt
August 7, 2010–February 27, 2011
Although wall drawings represent the foundation of his practice, Sol LeWitt’s works on paper, sculptures, artist's books, and writings on Conceptual art were equally important to his oeuvre.

Making a Connection: Language and Imagery (Education Exhibition)
January 15–February 9, 2011
The Exploring the Arts program at Shea’s Performing Arts Center introduced students to a number of art forms, including the visual, literary, and performing arts. The theme of this exhibition is writing inspired by visual art.

Beyond/In Western New York 2010: Alternating Currents
September 24, 2010–January 16, 2011
This international contemporary art exhibition—the product of a unique curatorial collaboration between twelve of Western New York’s museums and galleries—showcases the work of more than 100 extraordinary artists from the region and beyond.

Art, Through the Eyes of Young Children (Education Exhibition)
December 8, 2010–January 12, 2011
This exhibition features original works of art by infants and school-aged children from the Buffalo State Child Care Center. The children have used different materials and techniques to create distinctive works, both individually and as a group.

Forty: The Sabres in the NHL
November 7, 2010–January 9, 2011
Forty: The Sabres in the NHL—featuring more than two hundred photographs by Ron Moscati, Robert Shaver, and Bill Wippert—celebrates forty illustrious years of the National Hockey League in Buffalo.

Celebrating Disability History Week
October 1–December 6, 2010
Artists from The Arts Experience, Starlight Studio and Art Gallery, St. Mary's School for the Deaf, and the Albright-Knox Art Gallery's Matter at Hand program come together to celebrate Disability History Week.

ECHO: Sampling Visual Culture
June 25–October 10, 2010
ECHO: Sampling Visual Culture explores a selection of contemporary artists from the Gallery's Permanent Collection who incorporate humor and appropriation into their artmaking.

Clyfford Still
June 25–August 29, 2010
The Albright-Knox Art Gallery owns the largest public collection of paintings by the American Abstract Expressionist Clyfford Still—an ensemble of thirty-three abstract works that span the most critical developments of his career from 1937 to 1963.

Fletcher Benton: The Alphabet
July 30, 2009–July 5, 2010
Renowned American sculptor Fletcher Benton is best known for cutting, folding, and realigning two-dimensional sheets of steel into three-dimensional objects that seem to defy gravity.

Admission Fees
Adults: $12
Seniors: $8
Students (ages 13 and up): $8
Children 6-12: $5
Members and children 5 and under: FREE

More Past Exhibitions
Learn more about the Albright-Knox's past exhibitions with the following timelines:
https://www.albrightknox.org/exhibitions/past-exhibitions/p:14/r:10/
Here are some general tips for attending arts events: - Arrive a few minutes early to find a place to sit if you are attending a lecture or performance. In some cases (such as certain theatre performances), late admittance is not permitted. - Silence your cell phone before the lecture or performance begins. - Double-check the hours. Some museums or galleries are closed to the public on Mondays (or a different weekday). - Look up more information about the guest speaker or artist before attending an event by Google searching the name and visiting any relevant websites. - Pick up programs, handouts, and flyers when available for helpful information about the nature of the performance, guest speaker, or artist. - Draw on your knowledge and life experience and try to make connections with content of the lecture, performance, exhibition, etc. - Be open to what the experience has to offer you. Event Guidelines Most arts events offered at the University of Kansas will count toward the Arts Engagement Certificate. Here is a general list of arts venues and types of arts events that can be count toward your certificate: - All KU theatre performances - All performances and art-related talks at the Lied Center & master classes - All exhibitions, performances, and art related talks at Spencer Museum of Art - All exhibitions, performances, and art related talks at Nelson Atkins - All School of Music events - Senior recitals, thesis exhibitions, and student showcases - Visiting artists and art-related lectures on campus (e.g. at The Commons, Kansas Union, Hall Center, departmental guest artists, etc.) 
- Optional, hands-on workshops offered by participating Arts Engagement Departments
- Film screenings with discussions
- Artist talks, exhibitions, and theatre, music, and dance performances at the Lawrence Arts Center
- Art lectures at the Lawrence Public Library
- Artist talks and exhibitions at the Student Union Gallery in the Kansas Union
- All exhibitions at the Art and Design Building Gallery
- Final Fridays (Lawrence) or First Fridays (Kansas City) count for visual art; however, you need to write about a specific work of art or exhibition.
https://experience.ku.edu/arts-and-engagement-events
Extra-curricular activities are vitally important at King James’s School and students have the opportunity to take part in a wide range of trips, creative and sporting activities. Learning is enhanced by fieldwork trips, visits to museums, art galleries, the theatre and sporting venues, the Duke of Edinburgh Award, annual ski and watersports trips to Europe, and exchange visits abroad. Our sports facilities are excellent and students can choose from a wide range of sport and fitness-related activities, both recreational and competitive. Our highly successful school teams regularly compete against other schools. Expressive Arts are a particular strength of the school. Music and Drama productions are performed annually and the Art department showcases students’ work through a number of exhibitions each year. The school also encourages participation for all through our active house system. Events run in most subject areas, the most spectacular being House Drama, which is organised by the senior students.
https://www.king-james.co.uk/extra-curriculum/
Exhibitions for art-lovers, the curious, or those simply looking to fill a spare afternoon. Our galleries are open to all - everyone's welcome. At Cockington Court we proudly showcase and support a diverse range of art exhibitions. Throughout the year you'll discover art, craft, and sculpture exhibitions within the Kitchen Gallery - celebrating creativity and the community. From showcases of local degree students' work to curated collections, there's something to suit everyone. Come and experience beautiful, interesting pieces of work and explore our ever-changing seasonal exhibitions. And if you spot something you like, many of the pieces in our exhibitions are available to purchase and take home on the day of sale - just speak to a member of the team.
https://cockingtoncourt.org/whats-on/seasonal-exhibitions/
The bustling city of Taipei has an equally lively art scene, made up of tremendous museums, galleries and non-profit spaces. The National Palace Museum sits at the helm with over 600,000 artefacts spanning 5,000 years of history. Originally founded in Beijing in 1925, the Museum was relocated to Taipei during the Chinese Civil War in 1948 to safeguard its collection, which includes the famous jade cabbage—a detailed reproduction of a Chinese cabbage head reputed to have been part of the dowry of the Guangxu Emperor's consort Jin Fei in the Qing Dynasty (1644–1911)—along with ancient objects that include Song dynasty paintings and calligraphy, ritual bronzes, ceramics, early printed books and more. Founded in 1983, the Taipei Fine Arts Museum showcases artwork by Taiwanese and international artists from the 19th century to today, and acts as the host for one of the region's best-loved biennials: the Taipei Biennial. The Biennial was born of two exhibition projects conceived by the Museum to celebrate contemporary art: Contemporary Art Trends in the ROC and An Exhibition of Contemporary Chinese Sculpture in the Republic of China, held in alternating years between 1984 and 1991 until they were merged in 1992 to form the Biennial. With a solid institutional backbone, the city is also home to over 30 galleries. These spaces move between supporting contemporary artists and showcasing modern masters, such as Asia Art Center, which represents Li Chen, Chu Weibor and Yang Chihung, among others. The artful balance between modern and contemporary is a defining characteristic of many of the city's galleries, including Tina Keng Gallery, which has been instrumental in forging the careers of many Asian masters, including Zao Wou-Ki, Sanyu, Lin Fengmian and Yun Gee. TKG+—the gallery's experimental sister space—continues this legacy by working with emerging artists and providing a platform for experimentation across different media.
Taipei's galleries and art spaces are spread across the city. Those looking to avoid travel can spend time in Huashan 1914 Creative Park, which possesses a similar structure to Beijing's 798 Art Zone. The complex originated in 1914 as a wine factory and camphor refinery. It was vacated in 1987, and artists advocated for it to be used as an art space in 1997. The red brick buildings now host a number of exhibitions throughout the year, interspersed with cafés and knick-knack shops. The city's more experimental spaces include TheCube Project Space, located in an alleyway in the southern part of the city. Founded by independent curator Amy Cheng and music critic Jeph Lo, TheCube takes pride in being one of few art spaces in the city that has the capacity to organise 'quality international exhibitions on a non-profit basis'. As such, TheCube has become a site of lively cultural exchange—a position that has been harnessed by the city as a whole. At the freshly opened Winsing Art Place in Taipei, works by Vietnamese-Danish artist Danh Vo are being presented in Taiwan for the first time. In this video, the founder of Winsing Arts Foundation, Jenny Yeh, introduces Vo's exhibition. As Taipei Dangdai returns for its second edition between 17 and 19 January 2020 at the Nangang Exhibition Center, a selection of exhibitions across the city confirm Taipei as one of the region's most exciting art hubs. In Taipei, the artwork that said most about the contemporary art market's fraught situation in East Asia was not at the 26th Art Taipei (18–21 October 2019), but across town at the Taiwan Contemporary Culture Lab, a publicly funded art park established in Taiwan's former Air Force Command Headquarters in 2018. Chin Cheng-Te's Tender Soul –... Taiwanese artist Charwei Tsai's mesmerising and compulsive writing of the Heart Sutra—a Buddhist scripture that distills the wisdom of impermanence—is at the heart of her practice.
Over the past ten years, Tsai has moved from writing to drawing, photography, and film—a selection of which is being presented at the Centre for...
https://ocula.com/cities/taiwan/taipei-art-galleries/
The RMCAD campus is home to multiple galleries featuring dynamic and innovative work from contemporary artists and designers, RMCAD alumni, current students and faculty. Open to the public, these galleries serve as a place to foster critical discourse around art and design for RMCAD and the broader community, by presenting exhibitions of challenging, educational and significant work and projects. Galleries on Campus Philip J. Steele Gallery Named in memory of RMCAD’s founder, the Philip J. Steele Gallery is the largest and most prestigious exhibition space on campus. Exhibitions include semesterly Graduation Exhibitions, Biannual Faculty + Staff Exhibitions, the Annual Student Exhibition and a variety of nationally renowned visiting artists and designers. Gallery Hours: Monday–Friday, 11 a.m.–4 p.m. Rotunda Gallery The Rotunda Gallery focuses on exhibitions featuring work by the RMCAD faculty, alumni and local artists and designers in one of the college’s most unique buildings. Gallery Hours: Monday–Friday, 11 a.m.–4 p.m. Rude Gallery The Rude Gallery showcases work and projects proposed by RMCAD students and features the Annual Student Symposium Exhibition. This intimate gallery offers close interactions with works and encourages experimental projects and installations. Gallery Hours: Monday–Friday, 11 a.m.–4 p.m. Never Miss an Event To receive periodic updates about new exhibitions on our campus, please fill out the form below and select Galleries / Exhibitions. If you have questions about our galleries, email [email protected] or call 800.888.ARTS.
https://rmcad.celsiusmarketing.net/art-events/galleries/
Taste buds are small structures on the upper surface of the tongue, soft palate, and epiglottis that provide information about the taste of food being eaten. The human tongue has about 10,000 taste buds.

Types of papillae

The majority of taste buds on the tongue sit on raised protrusions of the tongue surface called papillae. There are four types of papillae present in the human tongue:
- Fungiform papillae - as the name suggests, these are slightly mushroom-shaped if looked at in section. These are present mostly at the apex (tip) of the tongue.
- Filiform papillae - these are thin, long papillae that don't contain taste buds but are the most numerous. These papillae are mechanical and not involved in gustation.
- Foliate papillae - these are ridges and grooves towards the posterior part of the tongue.
- Circumvallate papillae - there are only about 3-14 of these papillae in most people, and they are present at the back of the oral part of the tongue. They are arranged in a circular row just in front of the sulcus terminalis of the tongue.

It is known that there are five taste sensations:
- Sweet, bitter, and umami, which signal through G-protein-coupled receptors.
- Salty and sour, which work through ion channels.

Localization of taste and the human "tongue map"

Contrary to the popular understanding that different tastes map to different areas of the tongue, all taste qualities are found in all areas of the tongue. The original "tongue map" was based on a Harvard psychologist's mistranslation of a German paper written in 1901. Sensitivity to all tastes occurs across the whole tongue, and indeed in other regions of the mouth where there are taste buds (epiglottis, soft palate).

Structure of taste buds

Each taste bud is flask-like in shape, its broad base resting on the corium, and its neck opening by an orifice, the gustatory pore, between the cells of the epithelium.
The bud is formed by two kinds of cells: supporting cells and gustatory cells.
- The supporting cells are mostly arranged like the staves of a cask, and form an outer envelope for the bud. Some, however, are found in the interior of the bud between the gustatory cells.
- The gustatory cells occupy the central portion of the bud; they are spindle-shaped, and each possesses a large spherical nucleus near the middle of the cell.

The peripheral end of the cell terminates at the gustatory pore in a fine hair-like filament, the gustatory hair. The central process passes toward the deep extremity of the bud, and there ends in single or bifurcated varicosities. The nerve fibrils, after losing their medullary sheaths, enter the taste bud and end in fine extremities between the gustatory cells; other nerve fibrils ramify between the supporting cells and terminate in fine extremities; these, however, are believed to be nerves of ordinary sensation and not gustatory.

References
- ↑ Huang, A. L., et al. (2006). "The cells and logic for mammalian sour taste detection". Nature, 442: 934-938.
- ↑ Scenta. "How sour taste buds grow". URL accessed on August 28, 2006.
- ↑ Roberts, David (2002). Signals and Perception. Palgrave Macmillan.
- ↑ Hänig, D.P. (1901). "Zur Psychophysik des Geschmackssinnes". Philosophische Studien, 17: 576-623.
- ↑ Collings, V.B. (1974). "Human Taste Response as a Function of Locus of Stimulation on the Tongue and Soft Palate". Perception & Psychophysics, 16: 169-174.
https://psychology.fandom.com/wiki/Taste_buds
The inner ear contains parts (the nonauditory labyrinth or vestibular organ) that are sensitive to acceleration in space, rotation, and orientation in the gravitational field. Rotation is signaled by way of the semicircular canals, three bony tubes in each ear that lie embedded in the skull roughly at right angles to each other. These canals are filled with fluid called endolymph; in the ampulla of each canal are fine hairs equipped with mechanosensing stereocilia and a kinocilium that project into the cupula, a gelatinous component of the ampulla. When rotation begins, the cupula is displaced as the endolymph lags behind, causing the stereocilia to bend toward the kinocilium and thereby transmit signals to the brain. When rotation is maintained at a steady velocity, the fluid catches up, and stimulation of the hair cells no longer occurs until rotation suddenly stops, again circulating the endolymph. Whenever the hair cells are thus stimulated, one normally experiences a sensation of rotation in space. During rotation one exhibits reflex nystagmus (back-and-forth movement) of the eyes. Slow displacement of the eye occurs against the direction of rotation and serves to maintain the gaze at a fixed point in space; this is followed by a quick return to the initial eye position in the direction of the rotation. Stimulation of the hair cells in the absence of actual rotation tends to produce an apparent “swimming” of the visual field, often associated with dizziness and nausea. Two sacs or enlargements of the vestibule (the saccule and utricle) react to steady (static) pressures (e.g., those of gravitational forces). Hair cells within these structures, similar to those of the semicircular canal, possess stereocilia and a kinocilium. They also are covered by a gelatinous cap in which are embedded small granular particles of calcium carbonate, called otoliths, that weigh against the hairs. 
Unusual stimulation of the vestibular receptors and semicircular canals can cause sensory distortions in visual and motor activity. The resulting discord between visual and motor responses and the external space (as aboard a ship in rough waters) often leads to nausea and disorientation (e.g., seasickness). In space flight abnormal gravitational and acceleratory forces may contribute to nausea or disequilibrium. In some diseases (e.g., ear infections), irritation of vestibular nerve endings may cause the affected individual to be subject to falling as well as to spells of disorientation and vertigo. Similar symptoms may be induced by flushing hot and cold water into the outer opening of the ear, since the temperature changes produce currents in the endolymph of the semicircular canals. This effect is used in clinical tests for vestibular functions and in physiological experiments. Externally applied electrical currents may also stimulate the nerve endings of the vestibule. When a current is applied to the right mastoid bone (just behind the ear), nystagmus to the right tends to occur with a reflex right movement of the head; movement tends to the left for the opposite mastoid. Destruction of the labyrinth in only one ear causes vertigo and other vestibular symptoms, such as nystagmus, inaccurate pointing, and tendency to fall. Taste (gustatory) sense The sensory structures for taste are the taste buds, clusters of cells contained in goblet-shaped structures called papillae that open by a small pore to the mouth cavity. A single taste bud contains about 50 to 75 slender taste receptor cells, all arranged in a banana-like cluster pointed toward the gustatory pore. Taste receptor cells, which differentiate from the surrounding epithelium, are replaced by new cells in a turnover period as short as 7 to 10 days. The various types of cells in the taste bud appear to be different stages in this turnover process. 
Slender nerve fibres entwine among and make contact usually with many cells. Taste buds are located primarily in fungiform (mushroom-shaped), foliate, and circumvallate (walled-around) papillae of the tongue or in adjacent structures of the palate and throat. Many gustatory receptors in small papillae on the soft palate and back roof of the mouth in adults are particularly sensitive to sour and bitter tastes, whereas the tongue receptors are relatively more sensitive to sweet and salty tastes. Some loss of taste sensitivity suffered among denture wearers may occur because of mechanical interference of the dentures with taste receptors on the roof of the mouth. Nerve supply There is no single sensory nerve for taste. The anterior (front) two-thirds of the tongue is supplied by one nerve (the lingual nerve), the back of the tongue by another (the glossopharyngeal nerve), and the throat and larynx by certain branches of a third (the vagus nerve), all of which subserve touch, temperature, and pain sensitivity in the tongue, as well as taste. The gustatory fibres of the anterior tongue leave the lingual nerve to form the chorda tympani, a slender nerve that traverses the eardrum on the way to the brainstem. When the chorda tympani at one ear is cut or damaged (by injury to the eardrum), taste buds begin to disappear and gustatory sensitivity is lost on the anterior two-thirds of the tongue on the same side. The taste fibres from all the sensory nerves from the mouth come together in the medulla oblongata. Here and at all levels of the brain, gustatory fibres run in distinct and separate pathways, lying close to the pathways for other modalities from the tongue and mouth cavity. From the medulla, the gustatory fibres ascend by a pathway to a small cluster of cells in the thalamus and then to a taste-receiving area in the anterior cerebral cortex.
https://www.britannica.com/science/human-sensory-reception/Vestibular-sense-equilibrium
Taste information travels from the taste end organ, the taste buds, to gustatory sensory neurons before being integrated in central gustatory regions. In taste buds, receptor cells transduce sweet, bitter, or umami taste. Presynaptic cells directly transduce sour taste, although they also respond to multiple tastes via cell-cell communication. Salty taste is believed to be sensed by ENaC-expressing taste cells. Despite this consensus about coding in taste buds, whether information in gustatory sensory neurons is organized following the labeled-line theory or more complex coding mechanisms remains controversial. The geniculate ganglion is a major sensory ganglion of the gustatory system; it innervates fungiform and palatal taste buds. I have developed a novel approach to directly record in vivo calcium activities from neuron ensembles in the geniculate ganglion. This technique employed pirt-GCaMP3 transgenic mice, which express the genetically encoded calcium indicator GCaMP3 in all sensory neurons, including geniculate neurons. To investigate how geniculate neurons encode information from taste buds, I examined neuron responses to a panel of 5 prototypical taste stimuli (representing the five basic tastes) at three different concentrations: low (~ ½ EC50), mid-range (~ EC50) and high (saturating). I recorded 101 neurons at low concentrations and found that 72% (N=73) of neurons responded to one of the five taste qualities ("specialists") while 28% (N=28) responded to multiple taste qualities ("generalists"). The proportion of generalist neurons increased significantly at mid-range concentrations (51%; 79/155, p<0.0001). Consistently, the breadth of tuning of neurons at mid-range concentrations was significantly higher than at low concentrations (unpaired t test, p=0.0002). Furthermore, I recorded neurons in response to taste stimuli at both low and high concentrations.
I found that individual neurons frequently increased their breadth of tuning, and specialists at low concentrations could convert to generalists at high concentrations. My observations suggest a more complex coding scheme than the labeled-line theory. In addition, I found no apparent topographical mapping of taste qualities onto the geniculate ganglion. I also examined salty and sour taste transmission in the taste periphery. Although dilute salt (< ~150 mM) and high salt (> 200 mM) solutions evoke contrasting behaviors in mammals, I found that there were no separate representations for low and high salt in geniculate neurons. As for sour taste, my study suggested different mechanisms in presynaptic cells for sensing weak acids and strong acids (extracellular protons). This differs from the current understanding that both extracellular protons and weak acids depolarize and evoke calcium influx in presynaptic cells. In addition, I found that GABA inhibited citric acid-evoked responses in geniculate neurons, possibly by targeting presynaptic cells. In short, my study substantiates and extends the current knowledge of taste representations in the geniculate ganglion and taste transmission in the taste periphery.

Keywords: taste; in vivo calcium imaging; geniculate ganglion

Recommended Citation
Wu, An, "In Vivo Calcium Imaging Study of the Geniculate Ganglion and the Taste Transmission in the Periphery" (2016). Open Access Dissertations. 1740.
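As a quick sanity check on the specialist/generalist figures reported in the abstract, the percentages follow directly from the stated counts. The snippet below is purely illustrative (the `proportion` helper is hypothetical, not part of the study):

```python
# Illustrative check of the specialist/generalist proportions reported above.
# Counts (73/101, 28/101, 79/155) are taken directly from the abstract.

def proportion(count, total):
    """Return count/total as a percentage rounded to the nearest whole number."""
    return round(100 * count / total)

# Low concentrations: 101 recorded neurons
low_specialists = proportion(73, 101)   # responded to one taste quality -> 72%
low_generalists = proportion(28, 101)   # responded to multiple qualities -> 28%

# Mid-range concentrations: 155 recorded neurons
mid_generalists = proportion(79, 155)   # -> 51%

print(low_specialists, low_generalists, mid_generalists)
```

The rounded values (72%, 28%, 51%) match the percentages quoted in the text.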
https://scholarlyrepository.miami.edu/oa_dissertations/1740/
Gustatory neurons transmit chemical information from taste receptor cells, which reside in taste buds in the oral cavity, to the brain. As adult taste receptor cells are renewed at a constant rate, nerve fibers must reconnect with new taste receptor cells as they arise. Therefore, the maintenance of gustatory innervation to the taste bud is an active process. Understanding how this process is regulated is a fundamental concern of gustatory system biology. We speculated that because brain-derived neurotrophic factor (BDNF) is required for taste bud innervation during development, it might function to maintain innervation during adulthood. If so, taste buds should lose innervation when Bdnf is deleted in adult mice. To test this idea, we first removed Bdnf from all cells in adulthood using transgenic mice with inducible CreERT2 under the control of the Ubiquitin promoter. When Bdnf was removed, approximately one-half of the innervation to taste buds was lost, and taste buds became smaller because of the loss of taste bud cells. Individual taste buds varied in the amount of innervation each lost, and those that lost the most innervation also lost the most taste bud cells. We then tested the idea that the taste bud was the source of this BDNF by reducing Bdnf levels specifically in the lingual epithelium and taste buds. Taste buds were confirmed as the source of BDNF regulating innervation. We conclude that BDNF expressed in taste receptor cells is required to maintain normal levels of innervation in adulthood.

Footnotes
↵1 The authors declare no competing financial interests.
↵3 This work was supported by National Institutes of Health Grants DC007176 (R.F.K.) and DC006938 (David L. Hill). The statistical core facility, used for data analysis, is supported by NIH Grant 8P30GM103507. We thank Darlene Burke for statistical support, and Dr. David L. Hill and Dr. Chengsan Sun for providing us with some tongue tissue.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.
https://www.eneuro.org/content/2/6/ENEURO.0097-15.2015.abstract
Please use this identifier to cite or link to this item: http://hdl.handle.net/10348/5661

Title: Is wine savory? Umami taste in wine
Authors: Alice Vilela; António Inês; Fernanda Cosme
Keywords: grape and wine amino acids; L-glutamate; 5'-ribonucleotides; savory compounds; umami taste perception; sensorial properties
Issue Date: 10-Mar-2016
Publisher: Sift Desk
Abstract: Umami is an important taste element in natural products like wine. The umami taste has distinctive properties that differentiate it from other tastes, including a taste-enhancing synergism between two umami compounds, L-glutamate and 5'-ribonucleotides, and a prolonged aftertaste. In human taste cells, taste buds transduce the chemicals that elicit the umami taste into membrane depolarization, which triggers release of transmitter to activate gustatory afferent nerve fibers. Umami taste stimuli are primarily received by type II receptor cells, which contain the T1R and T2R families of G protein-coupled taste receptors. The taste sensation of umami requires protein hydrolysis, which renders free glutamic acid. The main components of the nitrogen fraction of musts and wines are amino acids, peptides, proteins and ammonium ion. Their presence in wine derives from the amino acids of grapes, enzymatic degradation of grape proteins, excretion by living yeasts at the end of fermentation and proteolysis during yeast autolysis. Thus, amino acids are important contributors to the savory taste and flavor of wine.
Peer Reviewed: yes
URI: http://hdl.handle.net/10348/5661
Publisher version: www.siftdesk.org
Document Type: Article
Appears in Collections: DEBA - Artigo publicado em Revista Científica Indexada
Files in This Item: Is-wine-savory-Umami-taste-in-wine20160310103439.pdf (598.31 kB, Adobe PDF)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
https://repositorio.utad.pt/handle/10348/5661
Sensory Reception and Processing

Our senses make us aware of changes that occur in our surroundings and also within our body. Sensation [awareness of the stimulus] and perception [interpretation of the meaning of the stimulus] occur in the brain.

Receptors are classified based on their location:
1. Exteroceptors are located at or near the surface of the body. These are sensitive to external stimuli and receive sensory inputs for hearing, vision, touch, taste and smell.
2. Interoceptors are located in the visceral organs and blood vessels. They are sensitive to internal stimuli. Proprioceptors are also a kind of interoceptor. They provide information about position and movements of the body. These are located in the skeletal muscles, tendons, joints, ligaments and in connective tissue coverings of bones and muscles.

Receptors based on the type of stimulus are shown in Table 10.3.

Photoreceptor – Eye

The eye is the organ of vision; it is located in the orbit of the skull and held in position with the help of six extrinsic muscles: the superior, inferior, lateral and medial rectus muscles, and the superior and inferior oblique muscles. These muscles aid in the movement of the eyes and receive their innervation from the III, IV and VI cranial nerves. Eyelids, eyelashes and eyebrows are accessory structures useful in protecting the eyes. The eyelids protect the eyes from excessive light and foreign objects and spread lubricating secretions over the eyeballs. Eyelashes and eyebrows help to protect the eyeballs from foreign objects, perspiration and also from the direct rays of sunlight. Sebaceous glands at the base of the eyelashes, called ciliary glands, secrete a lubricating fluid into the hair follicles. Lacrimal glands, located in the upper lateral region of each orbit, secrete tears.
Tears are secreted at the rate of 1 mL/day and contain salts, mucus and the enzyme lysozyme to destroy bacteria. The conjunctiva is a thin, protective mucous membrane lining the outer surface of the eyeball (Figure 10.13). The eye has two compartments, anterior and posterior. The anterior compartment has two chambers: the first lies between the cornea and iris, and the second between the iris and lens. These two chambers are filled with a watery fluid called aqueous humor. The posterior compartment lies between the lens and retina and is filled with a jelly-like fluid called vitreous humor, which helps to retain the spherical shape of the eye. The eye lens is transparent and biconvex, made up of long columnar epithelial cells called lens fibres. These cells are packed with proteins called crystallins.

The Eye Ball

The eyeball is spherical in shape. The anterior one-sixth of the eyeball is exposed; the remaining region is fitted well into the orbit. The wall of the eyeball consists of three layers: the fibrous sclera, the vascular choroid and the sensory retina (Figure 10.14). The outer coat is composed of dense non-vascular connective tissue. It has two regions: the anterior cornea and the posterior sclera. The cornea is a non-vascular, transparent coat formed of stratified squamous epithelium, which allows the cornea to renew continuously, as it is very vulnerable to damage from dust. The sclera forms the white of the eye and protects the eyeball. Posteriorly the sclera is pierced by the optic nerve. At the junction of the sclera and the cornea is a channel called the canal of Schlemm, which continuously drains out the excess aqueous humor. The choroid is a highly vascularized pigmented layer that nourishes all the eye layers, and its pigments absorb light to prevent internal reflection. Anteriorly the choroid thickens to form the ciliary body and iris. The iris is the coloured portion of the eye lying between the cornea and lens.
The aperture at the centre of the iris is the pupil, through which light enters the inner chamber. The iris is made of two types of muscles: the dilator pupillae (the radial muscle) and the sphincter pupillae (the circular muscle). In bright light, the circular muscle of the iris contracts, so the size of the pupil decreases and less light enters the eye. In dim light, the radial muscle of the iris contracts, so the pupil size increases and more light enters the eye. The smooth muscle present in the ciliary body is called the ciliary muscle, which alters the convexity of the lens for near and far vision. The ability of the eyes to focus objects at varying distances is called accommodation, which is achieved by the suspensory ligament, ciliary muscle and ciliary body. The suspensory ligament extends from the ciliary body and helps to hold the lens in its upright position. The ciliary body is provided with blood capillaries that secrete a watery fluid called aqueous humor, which fills the anterior chamber.

Retina

The retina forms the innermost layer of the eye and contains two regions: a sheet of pigmented epithelium (the non-visual part) and the neural visual region. The neural retina layer contains three types of cells: photoreceptor cells – cones and rods (Figure 10.15 and Table 10.4), bipolar cells and ganglion cells. The yellow flat spot at the centre of the posterior region of the retina is called the macula lutea, which is responsible for sharp, detailed vision. A small depression in the centre of the yellow spot is called the fovea centralis, which contains only cones. The optic nerve and the retinal blood vessels enter the eye slightly below the posterior pole, which is devoid of photoreceptors; hence this region is called the blind spot. Differences between rod and cone cells are shown in Table 10.4.

Mechanism of Vision

When light enters the eyes, it is refracted by the cornea, aqueous humor and lens, focused on the retina, and excites the rod and cone cells.
The photopigment consists of opsin, the protein part, and retinal, a derivative of vitamin A. Light induces dissociation of retinal from opsin and causes structural changes in opsin. This generates an action potential in the photoreceptor cells, which is transmitted via bipolar cells, ganglion cells and the optic nerves to the visual cortex of the brain for the perception of vision.

Refractive Errors of the Eye

Myopia (near-sightedness): the affected person can see nearby objects but not distant objects. This condition may result from an elongated eyeball or a thickened lens, so that the image of a distant object is formed in front of the yellow spot. This error can be corrected using a concave lens, which diverges the entering light rays and focuses them on the retina.

Hypermetropia (long-sightedness): the affected person can see distant objects clearly but not nearby objects. This condition results from a shortened eyeball or a thin lens, so that the image of a close object converges behind the retina. This defect can be overcome by using a convex lens, which converges the entering light rays on the retina.

Presbyopia: due to ageing, the lens loses elasticity and the power of accommodation. Convex lenses are used to correct this defect.

Astigmatism: due to the rough (irregular) curvature of the cornea or lens. Cylindrical glasses are used to correct this error (Figure 10.16).

Cataract: due to changes in the nature of its proteins, the lens becomes opaque. It can be corrected by surgical procedures.

Phonoreceptor – Ear

The ear is the site of reception of two senses, namely hearing and equilibrium. Anatomically, the ear is divided into three regions: the external ear, the middle ear and the internal ear. The external ear consists of the pinna, the external auditory meatus and the ear drum. The pinna is a flap of elastic cartilage covered by skin. It collects sound waves. The external auditory meatus is a curved tube that extends up to the tympanic membrane [the ear drum].
The tympanic membrane is composed of connective tissue, covered with skin on the outside and with mucous membrane on the inside. There are very fine hairs and wax-producing sebaceous glands called ceruminous glands in the external auditory meatus. The combination of hair and ear wax (cerumen) helps prevent dust and foreign particles from entering the ear. The middle ear is a small air-filled cavity in the temporal bone. It is separated from the external ear by the ear drum and from the internal ear by a thin bony partition; this partition contains two small membrane-covered openings called the oval window and the round window. The Middle Ear Contains Three Ossicles: the malleus (hammer), incus (anvil) and stapes (stirrup), which are attached to one another. The malleus is attached to the tympanic membrane, and its head articulates with the incus, the intermediate bone lying between the malleus and stapes. The stapes is attached to the oval window of the inner ear. The ear ossicles transmit sound waves to the inner ear. A tube called the Eustachian tube connects the middle ear cavity with the pharynx; it helps equalize the air pressure on either side of the ear drum. The inner ear is a fluid-filled cavity consisting of two parts, the bony labyrinth and the membranous labyrinth. The bony labyrinth consists of three areas: the cochlea, the vestibule and the semicircular canals. The cochlea is a coiled portion consisting of three chambers: the scala vestibuli and scala tympani, which are filled with perilymph, and the scala media, which is filled with endolymph. At the base of the cochlea, the scala vestibuli ends at the oval window, whereas the scala tympani ends at the round window of the middle ear. 
The scala vestibuli and scala media are separated by a membrane called Reissner's membrane, whereas the scala media and scala tympani are separated by the basilar membrane (Figure 10.17). Organ of Corti: The organ of Corti (Figure 10.18) is a sensory ridge located on top of the basilar membrane; it contains numerous hair cells arranged in four rows along the length of the basilar membrane. Protruding from the apical part of each hair cell are hair-like structures known as stereocilia. During the conduction of a sound wave, the stereocilia make contact with a stiff gel membrane called the tectorial membrane, a roof-like structure overhanging the organ of Corti throughout its length. Mechanism of Hearing: Sound waves entering the external auditory meatus fall on the tympanic membrane, causing the ear drum to vibrate; these vibrations are transmitted to the oval window through the three auditory ossicles. Since the tympanic membrane is 17-20 times larger than the oval window, the pressure exerted on the oval window is about 20 times greater than that on the tympanic membrane. This increased pressure generates pressure waves in the perilymph. These waves cause the round window to bulge alternately outward and inward, while the basilar membrane, along with the organ of Corti, moves up and down. These movements of the hairs alternately open and close the mechanically gated ion channels at the base of the hair cells, and the resulting action potential is propagated to the brain as a sound sensation through the cochlear nerve. Defects of the Ear: Deafness may be temporary or permanent, and can be classified into conductive deafness and sensorineural deafness. Possible causes of conductive deafness include: - blockage of the ear canal with ear wax - rupture of the ear drum - middle ear infection with fluid accumulation - restriction of ossicular movement. 
In sensorineural deafness, the defect may lie in the organ of Corti, the auditory nerve, the ascending auditory pathways or the auditory cortex. Organ of Equilibrium: Balance is part of a sense called proprioception, the ability to sense the position, orientation and movement of the body. The organ of balance is the vestibular system, located in the inner ear next to the cochlea. The vestibular system is composed of a series of fluid-filled sacs and tubules. These sacs and tubules contain endolymph and are surrounded by perilymph (Figure 10.19). These two fluids, perilymph and endolymph, respond to the mechanical forces that accompany changes in body position and acceleration. The utricle and saccule are two membranous sacs found nearest the cochlea; they contain equilibrium receptor regions called maculae, which detect linear movement of the head. The maculae contain hair cells that act as mechanoreceptors. These hair cells are embedded in a gelatinous otolithic membrane that contains small calcareous particles called otoliths. This membrane adds weight to the top of the hair cells and increases their inertia. The canals lying posterior and lateral to the vestibule are the semicircular canals; the anterior, posterior and lateral canals are oriented at right angles to each other. The lower end of each semicircular canal has a swollen area called the ampulla. Each ampulla has a sensory area known as the crista ampullaris, formed of sensory hair cells and supporting cells. The function of these canals is to detect rotational movement of the head. Olfactory Receptors: The receptors for taste and smell are chemoreceptors. The smell receptors are excited by airborne chemicals that dissolve in fluids. The yellow-coloured patches of olfactory epithelium form the olfactory organs, located on the roof of the nasal cavity. 
The olfactory epithelium is covered by a thin coat of mucus and is bounded by connective tissue containing olfactory glands. It contains three types of cells: supporting cells, basal cells and millions of pin-shaped olfactory receptor cells (which are unusual bipolar cells). The olfactory glands and the supporting cells secrete the mucus. The unmyelinated axons of the olfactory receptor cells gather to form the filaments of the olfactory nerve (cranial nerve I), which synapse with cells of the olfactory bulb. The impulse is transmitted through the olfactory nerves to the frontal lobe of the brain for identification of the smell, and to the limbic system for the emotional response to the odour. Gustatory Receptors: The sense of taste is considered to be the most pleasurable of all the senses. The tongue is provided with many small projections called papillae, which give the tongue an abrasive feel. Taste buds are located mainly on the papillae, which are scattered over the entire tongue surface. Most taste buds are found on the tongue (Figure 10.20); a few are scattered on the soft palate, the inner surface of the cheeks, the pharynx and the epiglottis of the larynx. Taste buds are flask-shaped and consist of 50-100 epithelial cells of two major types: gustatory epithelial cells (taste cells) and basal epithelial cells (repairing cells). Long microvilli called gustatory hairs project from the tips of the gustatory cells and extend through a taste pore to the surface of the epithelium, where they are bathed by saliva. Gustatory hairs are the sensitive portion of the gustatory cells, and sensory dendrites send the signal to the brain. The basal cells act as stem cells, dividing and differentiating into new gustatory cells (Figure 10.20). Skin - Sense of Touch: The skin is the sensory organ of touch and is also the largest sense organ. 
This sensation comes from millions of microscopic sensory receptors located all over the skin, associated with the general sensations of contact, pressure, heat, cold and pain. Some parts of the body, such as the fingertips, have a large number of these receptors, making them more sensitive. Some of the sensory receptors present in the skin (Figure 10.21) are: Tactile (Merkel) Discs are light touch receptors lying in the deeper layers of the epidermis. Hair Follicle Receptors are light touch receptors lying around the hair follicles. Meissner's Corpuscles are small light-pressure receptors found just beneath the epidermis in the dermal papillae; they are numerous in hairless skin areas such as the fingertips and the soles of the feet. Pacinian Corpuscles are large egg-shaped receptors found scattered deep in the dermis; they monitor vibration due to pressure and allow us to detect different textures, temperature, hardness and pain. Ruffini Endings lie in the dermis and respond to continuous pressure. Krause End Bulbs are thermoreceptors that sense temperature.
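The middle-ear pressure amplification described in the Mechanism of Hearing section above follows from pressure being force per unit area: the force collected over the large tympanic membrane is delivered to the much smaller oval window. A minimal sketch of that arithmetic (the specific areas are illustrative values consistent with the 17-20 times ratio quoted in the text, not figures from this source):

```python
def pressure_amplification(tympanic_area_mm2: float, oval_window_area_mm2: float) -> float:
    """Pressure gain when the same force is concentrated onto a smaller area.

    Since P = F / A and the ossicles conserve the force,
    P_oval / P_tympanic = A_tympanic / A_oval.
    """
    return tympanic_area_mm2 / oval_window_area_mm2

# Illustrative (assumed) areas: tympanic membrane ~55 mm^2, oval window ~3.2 mm^2
gain = pressure_amplification(55.0, 3.2)
print(f"pressure at the oval window is about {gain:.1f}x that at the ear drum")
```

With an area ratio of about 17, sound pressure at the ear drum is concentrated roughly 17-fold at the oval window, which is what allows airborne vibrations to drive the denser cochlear fluid.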
https://ncertmcq.com/sensory-reception-and-processing/
Solutions of table salt (NaCl) elicit several tastes, including of course saltiness, but also sweet, sour, and bitter. This brief review touches on some of the mileposts concerning what is known about taste transduction for the Na+ ion, the main contributor to saltiness. Electrophysiological recordings, initially from single gustatory nerve fibers and later of integrated impulse activity from gustatory nerves, led researchers to predict that Na+ ions interact with a surface molecule. Subsequent studies have resolved that this molecule is likely to be an epithelial sodium channel, ENaC. Other Na+ transduction mechanisms are also present in taste buds but have not yet been identified. The specific type(s) of taste cells responsible for salt taste also remain unknown.
https://miami.pure.elsevier.com/en/publications/the-taste-of-table-salt
- Author: Kathy Keatley Garvey Some linger quite a while before they buzz off. Have you ever thought about this: do they have taste buds? A colleague asked that question. In fact, it was his friend's nine-year-old son who asked: "Do bees have taste buds, and if so, where?" "No," says Extension apiculturist emeritus Eric Mussen of the UC Davis Department of Entomology and Nematology, who retired in 2014 after 38 years of service. That's the short answer. But wait, there's more. "Honey bees and other insects do not have taste buds, as such," Mussen said. "They have specialized, enlarged hairs, chaetic and basiconic sensillae, that protrude from the cuticle (exoskeleton). The sensillae have gustatory receptor cells in them that sense the chemicals contacted by the tips of the antennae, the mouthparts, or the tarsi (feet) of the front legs. The interpretation of the chemicals takes place in the subesophageal ganglion of the bee, not in the brain. The subesophageal ganglion is a very large nerve cell cluster attached beneath the brain." It's good to see youngsters so interested in insects!
https://ucanr.edu/blogs/blogcore/postdetail.cfm?postnum=22124&
Gustducin is a G protein associated with taste and the gustatory system, found in some taste receptor cells. Research on the discovery and isolation of gustducin is recent. It is known to play a large role in the transduction of bitter, sweet and umami stimuli, and its pathways (especially for detecting bitter stimuli) are many and diverse. An intriguing feature of gustducin is its similarity to transducin: the two G proteins have been shown to be structurally and functionally similar, leading researchers to believe that the sense of taste evolved in a fashion similar to the sense of sight. Gustducin is a heterotrimeric protein composed of the products of the GNAT3 (α-subunit), GNB1 (β-subunit) and GNG13 (γ-subunit) genes. Gustducin was discovered in 1992, when degenerate oligonucleotide primers were synthesized and mixed with a taste tissue cDNA library. The DNA products were amplified by the polymerase chain reaction, and eight positive clones were shown to encode α subunits of G proteins (which interact with G-protein-coupled receptors). Of these eight, two had previously been shown to encode rod and cone α-transducin. The eighth clone, α-gustducin, was unique to the gustatory tissue. Upon analysis of the amino-acid sequence of α-gustducin, it was discovered that α-gustducin and α-transducin are closely related: α-gustducin's protein sequence shows 80% identity to both rod and cone α-transducin. Despite the structural similarities, the two proteins operate in very different sensory contexts; however, they have similar mechanisms and capabilities. Transducin removes the inhibition from cGMP phosphodiesterase, which leads to the breakdown of cGMP. Similarly, α-gustducin binds the inhibitory subunits of taste cell cAMP phosphodiesterase, which causes a decrease in cAMP levels. Also, the terminal 38 amino acids of α-gustducin and α-transducin are identical. 
This suggests that gustducin can interact with opsin and opsin-like G-protein-coupled receptors; conversely, it also suggests that transducin can interact with taste receptors. The structural similarities between gustducin and transducin are so great that comparisons with transducin were used to propose a model of gustducin's role and functionality in taste transduction. Other G protein α-subunits have been identified in TRCs (e.g. Gαi-2, Gαi-3, Gα14, Gα15, Gαq, Gαs) whose functions have not yet been determined. While gustducin was known to be expressed in some taste receptor cells (TRCs), studies with rats showed that gustducin was also present in a limited subset of cells lining the stomach and intestine; these cells appear to share several features of TRCs. Another study, with humans, brought to light two immunoreactive patterns for α-gustducin in human circumvallate and foliate taste cells: plasmalemmal and cytosolic. Together, these studies showed that gustducin is distributed through gustatory tissue and some gastric and intestinal tissue, and that it is present either in the cytoplasm or in the apical membranes of TRC surfaces. Research showed that bitter-stimulated type 2 taste receptors (T2R/TRB) are found only in taste receptor cells positive for the expression of gustducin. α-Gustducin is selectively expressed in ∼25-30% of TRCs. Due to its structural similarity to transducin, gustducin was predicted to activate a phosphodiesterase (PDE). Phosphodiesterases were found in taste tissues, and their activation was tested in vitro with both gustducin and transducin. This experiment revealed that transducin and gustducin were both expressed in taste tissue (in a 1:25 ratio) and that both G proteins are capable of activating retinal PDE. Furthermore, in the presence of denatonium and quinine, both G proteins can activate taste-specific PDEs. This indicated that both gustducin and transducin are important in the signal transduction of denatonium and quinine. 
Subsequent research investigated the role of gustducin in bitter taste reception by using "knock-out" mice lacking the gene for α-gustducin. A taste test with knock-out and control mice revealed that the knock-out mice showed no preference between bitter and regular food in most cases. When the α-gustducin gene was re-inserted into the knock-out mice, the original taste ability returned. However, the loss of the α-gustducin gene did not completely remove the ability of the knock-out mice to taste bitter food, indicating that α-gustducin is not the only mechanism for tasting bitter food. It was thought at the time that an alternative mechanism of bitter taste detection could be associated with the βγ subunits of gustducin. This theory was later validated when it was discovered that both peripheral and central gustatory neurons typically respond to more than one type of taste stimulant, although a neuron typically favors one specific stimulant over others. This suggests that, while many neurons favor bitter taste stimuli, neurons that favor other stimuli such as sweet and umami may be capable of detecting bitter stimuli in the absence of bitter stimulant receptors, as in the knock-out mice. Until recently, the nature of gustducin and its second messengers was unclear; it was clear, however, that gustducin transduced intracellular signals. Spielman was one of the first to look at the speed of taste reception, utilizing the quenched-flow technique. When taste cells were exposed to the bitter stimulants denatonium and sucrose octaacetate, the intracellular response, a transient increase in IP3, occurred within 50-100 milliseconds of stimulation. This was not unexpected, as it was known that transducin was capable of sending signals within rod and cone cells at similar speeds. This indicated that IP3 is one of the second messengers used in bitter taste transduction. 
It was later discovered that cAMP also causes an influx of cations during bitter and some sweet taste transduction, leading to the conclusion that it too acts as a second messenger to gustducin. When bitter-stimulated T2R/TRB receptors activate gustducin heterotrimers, gustducin acts to mediate two responses in taste receptor cells: a decrease in cAMP triggered by α-gustducin, and a rise in IP3 (inositol trisphosphate) and diacylglycerol (DAG) from βγ-gustducin. Although the subsequent steps of the α-gustducin pathway are unconfirmed, it is suspected that the decrease in cAMP may act on protein kinases, which would regulate taste receptor cell ion channel activity. It is also possible that cNMP levels directly regulate the activity of cNMP-gated channels and cNMP-inhibited ion channels expressed in taste receptor cells. The βγ-gustducin pathway continues with the activation of IP3 receptors and the release of Ca2+, followed by neurotransmitter release. Bitter taste transduction models: Several models have been suggested for the mechanisms of bitter taste signal transduction. It is thought that these five diverse mechanisms developed as defense mechanisms: many different poisonous or harmful bitter agents exist, and these mechanisms exist to prevent humans from eating or drinking them. It is also possible that some mechanisms act as backups should a primary mechanism fail. One example of this could be quinine, which has been shown to both inhibit and activate PDE in bovine taste tissue. There are currently two models proposed for sweet taste transduction. The first pathway is a GPCR-Gs-cAMP pathway. This pathway starts with sucrose and other sugars activating Gs inside the cell through a membrane-bound GPCR. The activated Gαs activates adenylyl cyclase to generate cAMP. From this point, one of two pathways can be taken. 
cAMP may act directly to cause an influx of cations through cAMP-gated channels, or cAMP can activate protein kinase A, which phosphorylates K+ channels and closes them, allowing for depolarization of the taste cell, subsequent opening of voltage-gated Ca2+ channels, and neurotransmitter release. The second pathway is a GPCR-Gq/Gβγ-IP3 pathway, which is used by artificial sweeteners. Artificial sweeteners bind and activate GPCRs coupled to PLCβ2 by either Gαq or Gβγ. The activated subunits activate PLCβ2 to generate IP3 and DAG. IP3 and DAG elicit Ca2+ release from the endoplasmic reticulum and cause cellular depolarization; the influx of Ca2+ triggers neurotransmitter release. While these two pathways coexist in the same TRCs, it is unclear how the receptors selectively mediate cAMP responses to sugars and IP3 responses to artificial sweeteners. Of the five basic tastes, three (sweet, bitter and umami) are mediated by receptors from the G protein-coupled receptor family. Mammalian bitter taste receptors (T2Rs) are encoded by a gene family of only a few dozen members. It is believed that bitter taste receptors evolved as a mechanism to avoid ingesting poisonous and harmful substances. If this is the case, one might expect different species to have developed different bitter taste receptors based on dietary and geographical constraints. With the exception of T2R1 (which lies on chromosome 5), all human bitter taste receptor genes are found clustered on chromosome 7 and chromosome 12. Analysis of the relationships between bitter taste receptor genes shows that genes on the same chromosome are more closely related to each other than genes on different chromosomes. Furthermore, the genes on chromosome 12 have higher sequence similarity to one another than the genes found on chromosome 7. 
This indicates that these genes evolved via tandem gene duplications, and that chromosome 12, given the higher sequence similarity among its genes, underwent these tandem duplications more recently than chromosome 7. Recent work by Enrique Rozengurt has shed some light on the presence of gustducin in the stomach and gastrointestinal tract. His work suggests that gustducin is present in these areas as a defense mechanism. It is widely known that some drugs and toxins can cause harm and even be lethal if ingested. It has already been theorized that multiple bitter taste reception pathways exist to prevent harmful substances from being ingested, but a person can choose to ignore the taste of a substance. Rozengurt suggests that the presence of gustducin in epithelial cells of the stomach and gastrointestinal tract is indicative of another line of defense against ingested toxins: whereas taste cells in the mouth are designed to compel a person to spit out a toxin, these stomach cells may act to force a person to expel the toxin in the form of vomit.
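The duplication argument above, that T2R genes on the same chromosome show higher sequence identity than genes on different chromosomes, rests on pairwise percent-identity comparisons of the same kind that yielded the 80% gustducin/transducin figure. A toy sketch of that computation (the sequences below are invented for illustration and are not real T2R or gustducin data):

```python
def percent_identity(a: str, b: str) -> float:
    """Percent of matching positions between two pre-aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Hypothetical peptide fragments, made up for illustration only
recent_duplicate_1 = "MLAVITRLLG"   # imagine two neighbouring genes on the same chromosome
recent_duplicate_2 = "MLAVISRLLG"   # one substitution -> high identity (recent duplication)
older_paralog      = "MKTVFSQLIG"   # a more diverged gene on another chromosome

print(percent_identity(recent_duplicate_1, recent_duplicate_2))  # 90.0
print(percent_identity(recent_duplicate_1, older_paralog))       # 40.0
```

Higher identity between neighbouring genes is what supports the inference that they arose by recent tandem duplication; real analyses use proper alignment and phylogenetic methods rather than this naive position-by-position count.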
https://db0nus869y26v.cloudfront.net/en/Gustducin
Background: Chemosensory disorders affect approximately 15% of the U.S. population, and an estimated 200,000 individuals visit a doctor each year for problems with their ability to taste or smell (NIDCD). Among the common causes of taste problems are radiation therapy, chemotherapy, exposure to certain chemicals and medications, head trauma and surgical injuries. Tastants are detected by taste buds, which are specialized collections of cells. Taste bud development and innervation has been an active research front, and several key molecules involved in these processes have been elucidated. Neurotrophins, in particular brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NT-3), were among the first to be identified as playing a role in taste buds. BDNF and NT-3 are expressed in developing and adult rodent tongues in a temporospatially specific manner: BDNF mRNA is found in the gustatory epithelium during development and in adult taste buds, while NT-3 mRNA is found in the surrounding epithelium in rodents. Neurotrophins are also expressed in a temporospatially specific manner during tooth morphogenesis. Nerve growth factor (NGF), BDNF and glial cell line-derived neurotrophic factor (GDNF) are expressed in developing rodent teeth. Aims: To examine the expression of mRNA encoding neurotrophic factors in the developing human taste system and teeth, to assess the role of neurotrophic factors in the formation and innervation of taste buds and teeth, and to explore possible consequences of neurotrophic factor expression in cultured dental pulp cells (DPCs). Results: Neurotrophic factor expression patterns are described in the developing human tongue and compared to those of rodents. BDNF was found in the first trimester in the same areas as in rodents, the developing gustatory epithelium and taste buds, and in additional areas such as the subepithelial mesenchyme. 
Human NT-3 mRNA expression patterns were largely similar to those of rodents, except that taste buds also expressed NT-3 mRNA during development and in adults. In both rodents and humans, BDNF was expressed prior to innervation of the gustatory papillae, and thus serves as a very early marker of the gustatory epithelium. Our study showed wider expression patterns of both BDNF and NT-3 in the human gustatory system (paper I) compared to rodents. Next, we showed that taste papillae in BDNF/NT-3 double KO mice were smaller and less innervated compared to BDNF-/- mice, indicating specific gustatory roles for both neurotrophins (paper V). Studies of developing human teeth showed that NGF, BDNF, NT-3, neurotrophin-4 (NT-4), GDNF and neurturin (NTN) were expressed in the tooth organ and surrounding mesenchyme (paper III). Interactions of neurotrophic factors from the dental pulp with trigeminal, motor and dopamine (DA) neurons were analyzed. DPCs promoted survival and neurite outgrowth from trigeminal neurons in cocultures, and prolonged neural survival in vitro. DPCs also promoted motoneuron survival in a rodent model of spinal cord injury (paper II), as well as the survival of embryonic DA neurons in vitro (paper IV). BDNF is the main neurotrophic factor in the gustatory system, but NT-3 plays a role as well in both humans and rodents, as knockout studies were able to detect. The tooth provides an excellent model to study molecular events in cells during organ formation, and to examine how neurotrophic factors promote innervation during development. I. Nosrat IV, Lindskog S, Seiger A, Nosrat CA. Lingual BDNF and NT-3 mRNA expression patterns and their relation to innervation in the human tongue: Similarities and differences compared with rodents. Journal of Comparative Neurology 2000, 417:133-52. II. Nosrat IV, Widenfalk J, Olson L, Nosrat CA. 
Dental pulp cells produce neurotrophic factors, interact with trigeminal neurons in vitro, and rescue motoneurons after spinal cord injury. Developmental Biology 2001, 238:120-32. III. Nosrat I, Seiger A, Olson L, Nosrat CA. Expression patterns of neurotrophic factor mRNAs in developing human teeth. Cell & Tissue Research 2002, 310:177-87. IV. Nosrat IV, Smith CA, Mullally P, Olson L, Nosrat CA. Dental pulp cells provide neurotrophic support for dopaminergic neurons and differentiate into neurons in vitro; implications for tissue engineering and repair in the nervous system. European Journal of Neuroscience 2004, 19:2388-98. V. Nosrat IV, Agerman K, Marinescu A, Ernfors P, Nosrat CA. Lingual deficits in neurotrophin double knockout mice. Journal of Neurocytology 2004, 33:607-15.
https://openarchive.ki.se/xmlui/handle/10616/46086
Ahead of the upcoming G7 summit at Schloss Elmau, governments, international and regional organizations, multilateral development banks, non-governmental organizations and philanthropists gathered in Berlin today to unite for food security: to take stock of progress made in joint efforts to overcome the global food security crisis, and to join forces to move forward in this common endeavour. Reports from the United Nations Secretary-General's Global Food, Fuel and Financial Crisis Task Force paint a dramatic picture: of the 1.7 billion people in 107 states affected by this crisis, 1.2 billion will be exposed to a perfect storm of all three dimensions of the crisis, namely limited finances, sharply rising food prices and rising energy prices. This comes on top of intense droughts in places like the Horn of Africa and the use of hunger as a weapon of war in various conflict zones around the world. Participants noted with grave concern that Russia's invasion of Ukraine is endangering the food security and nutrition of millions of women, children and men, further aggravating an already dire global food security situation caused, among other things, by armed conflicts, climate change and the consequences of the COVID-19 pandemic. The participants shared the conviction that this multidimensional crisis requires a joint and effective global response combining diplomacy, humanitarian aid, development cooperation, and agricultural and food policies. Discussions were guided by the belief that short- and medium-term support must be programmed in a way that leads to a long-term sustainable transformation of agricultural and food systems: it must strengthen resilience and thus reduce humanitarian needs, stimulate sustainable local production, and diversify crops, thereby reducing dependence on imports. The participants expressed their readiness to assume their responsibilities and continue to cooperate closely to achieve a common goal. I. 
Stocktaking of progress made: Participants took stock of progress made since February 2022 in efforts to alleviate the global food security crisis. They welcomed the leadership of the UN Secretary-General in coordinating efforts to overcome the crisis through the Global Crisis Response Group on Food, Energy and Finance. They also welcomed the G7's response to the crisis, including the establishment of the Global Alliance for Food Security and substantive preparations for the upcoming G7 summit. The Global Alliance for Food Security is designed to be a key platform for fostering cooperation, guided by the shared belief that governments, international organizations, multilateral development banks, civil society, the private sector, and scientific and philanthropic organizations must work together to weather this storm. They commended the initiatives taken by the African Union for the eradication of hunger and food insecurity in Africa under the Senegalese presidency, recalling the African Union's theme for 2022, "Strengthening Resilience in Nutrition and Food Security on the African Continent", and the Comprehensive Africa Agriculture Development Programme. Participants renewed their commitment to the roadmap agreed under the Call to Action for Global Food Security hosted by the United States in New York on May 18, 2022. Participants called on other countries to sign the roadmap and continue to implement its commitments. They took note of the importance of the Food and Agricultural Resilience Mission (FARM) announced by France, and recalled the sensitization of Mediterranean countries through the Mediterranean Ministerial Dialogue on the food crisis organized by Italy on June 8, 2022. Participants looked forward to addressing the issue of food security as an essential component of agricultural, social, economic and environmental development under Indonesia's G20 Presidency. 
They recalled the G20 Matera Declaration on Food Security, Nutrition and Food Systems promoted under the previous Italian G20 Presidency. Participants welcomed the Action Plan of the international financial institutions to combat food insecurity and the commitment of the multilateral development banks to increase and accelerate political and financial support to countries and households vulnerable to the food crisis. They stressed the need to sustainably increase local agricultural production in affected countries, in line with a transition to sustainable agricultural and food systems. Referring to the 2021 UN Food Systems Summit, participants highlighted the need to continue the transformation of agriculture and food systems with a greater emphasis on sustainability. The main objective remains to achieve the Sustainable Development Goals by 2030. II. Moving forward: Participants pledged to support the UN Secretary-General's efforts to alleviate the global food security crisis through the Global Crisis Response Group on Food, Energy and Finance. As a global crisis requires a global response, they committed to forging strong partnerships within the Global Alliance for Food Security and beyond to ensure no one is left behind. The Global Alliance for Food Security and its working groups will help ensure a cohesive international response to the food security crisis and follow up on commitments made by Global Alliance participants. The participants called on Russia to immediately end the war in Ukraine, to end its threats to and blockade of Ukrainian ports, and to cease all other activities that hamper Ukrainian food production and exports and endanger the lives of millions of people around the world. In the short term, participants pledged to support the humanitarian system wherever possible by providing emergency humanitarian assistance to people at risk of food insecurity, including by strengthening contributions to the World Food Programme and other humanitarian actors. 
They also pledged to ensure respect for humanitarian principles in all measures taken in response to Russian aggression against Ukraine. Furthermore, participants agreed on the need to strike an appropriate balance between humanitarian and development activities, depending on the operational context and needs, and in line with the Humanitarian-Development-Peace Nexus. In addition, participants agreed on the key role played by the Rome-based UN agencies, FAO, IFAD and WFP, in leading the international community's efforts to address food insecurity. Participants stressed the importance of refraining from inappropriate measures that restrict trade and of avoiding unjustified measures, such as export bans on foodstuffs or fertilizers, which increase market volatility and threaten global food security and nutrition. They pledged to continue their support for Ukraine to maintain its agricultural production, storage, transportation and processing, and to help Ukraine and its neighbours rapidly develop additional export routes for agricultural products. They recognized the need, while doing this, to work on additional and new solutions to prevent grain from being wasted. Participants committed to continue their work on the necessary transformation towards sustainable agriculture and food systems and to support improvements in the global governance of agriculture and food systems, strengthening the role of the Committee on World Food Security as an inclusive, intergovernmental global forum for ensuring food security and nutrition for all. The Global Agriculture and Food Security Program is an inclusive, flexible and demand-driven multilateral financing instrument with a proven track record of coordinating country-level development initiatives to support these efforts. They highlighted the importance of the progressive realization of the human right to adequate food as well as Sustainable Development Goal 2 (zero hunger by 2030). 
Everyone should have the opportunity to realize this right. Civil society organizations have expressed their willingness to put their experience to use in developing adequate long-term solutions to achieve this objective. Participants pledged to promote sustainable consumption and increased local production in line with the 2030 Agenda for Sustainable Development, including the reduction of food loss and waste. Participants shared the view that farmers need to adapt to climate change to maintain food security. Moreover, sustainable agricultural production should also contribute to global climate protection and to biodiversity, avoid negative impacts on the environment and strengthen the implementation of agroecological and regenerative practices. They highlighted the need for better-quality, locally adapted seeds and more efficient use of fertilizers, including non-fossil fertilizers, as well as access to digital tools for farmers. In order to be better prepared for and to mitigate the implications of the next crisis, the participants expressed their readiness to strengthen information sharing and early-warning capacities, including through the provision of additional means. They pledged to focus on the goal of sustainable agricultural and food systems transformation. A strong and effective multilateral system will be essential to achieving our goals.
https://oupsie.info/berlin-ministerial-conference-united-for-world-food-security-conclusions-of-the-presidents-world/
The UN has called for a stronger response by governments, aid organizations and the private sector to address the devastating impact the El Niño climate event is having on the food security, livelihoods, nutrition and health of some 60 million people around the world. The appeal came at a meeting organized in Rome by four UN agencies: the Food and Agriculture Organization of the United Nations (FAO), the International Fund for Agricultural Development (IFAD), the Office for the Coordination of Humanitarian Affairs (OCHA) and the World Food Programme (WFP). Participants, including representatives from governments, non-governmental organizations and other UN agencies, took stock of the growing impacts of the current El Niño, which is considered one of the strongest on record. They noted that more than $2.4 billion is needed for current El Niño emergency and recovery responses, and that there is currently a $1.5 billion gap in funding. El Niño-related impacts have been felt across the globe since mid-2015. These include severe or record droughts in Central America, the Pacific region, East Timor, Vietnam, Ethiopia, and Southern Africa. In addition, floods have affected parts of Somalia and Tanzania, devastating forest fires have resurfaced in Indonesia, and some regions have witnessed storms, as in the case of Fiji with Tropical Cyclone Winston. These disasters have cumulatively resulted in a wide range of consequences, most importantly severe increases in hunger, malnutrition, water- and vector-borne diseases, and the prevalence of animal and plant pests and diseases. Increasingly, populations are on the move: families across the globe are being forced into distress migration, both within and across borders, as their sources of livelihood disappear. 
The meeting underscored the fact that although the 2015-2016 El Niño has peaked, it will continue to influence temperature and rainfall patterns, causing extreme events in different parts of the world and posing continuing risks to health, water supply and food security, while the number of people threatened by hunger as a result is expected to grow. These effects could last long after the El Niño phenomenon has subsided. Long-term impacts include higher malnutrition rates (some 1 million children are currently in need of treatment for severe malnutrition in Eastern and Southern Africa) and an increase in poverty, rendering vulnerable households less resilient to future shocks and stalling countries' progress in achieving the Sustainable Development Goals. People relying on livestock for their livelihoods are particularly vulnerable given the long time frame required to rebuild herd numbers decimated by drought. Sparse or absent rains also result in a loss of soil productivity and greater land degradation, factors that contribute to desertification. The meeting ended with a series of commitments by FAO, IFAD, OCHA and WFP aimed at urgently scaling up responses to the current El Niño crisis while also ensuring a more effective response to similar events in the future. These agencies committed to working closely with resource partners to help address the funding gap, including by prioritizing geographical areas requiring urgent attention. They also pledged to work with governments, aid organizations, other development partners and the private sector to assist the worst-hit populations, including by scaling up existing social protection schemes. They also agreed on the need to build the capacity of national governments to mitigate and respond to future El Niño and other climate-related events, and to work with development partners to ensure that disaster risk reduction projects are stepped up in the most vulnerable areas. 
This article was published by FAO.
https://www.unocha.org/story/un-agencies-urge-stronger-coordinated-international-response-el-ni%C3%B1o-fao
Abstract

After the breakup of the Soviet Union, the republics of Central Asia began to restructure their agricultural sectors to achieve food security and to adjust to the requirements of a market economy. Although they encountered many common challenges, their agricultural policies differed significantly. For this reason, it is important to see the results of these policies and to learn lessons from them. This paper discusses the role of and the challenges facing rangelands and livestock production systems in achieving food security among the pastoral communities of Central Asia. It analyzes the trends in livestock development during the economic transition in Kazakhstan, the Kyrgyz Republic, and Uzbekistan, and derives policy directions for the sustainable use of rangelands and for the growth of the livestock sector in Central Asia.

Subject keywords: animal production.
http://ebrary.ifpri.org/cdm/ref/collection/p15738coll5/id/2093/
Working for food security and sustainable development in the face of crises and overlapping challenges

The past two years have been a watershed, profoundly transforming all spheres of our lives. Fortunately, science has helped us better understand and cope with the challenges brought about by COVID-19. Meanwhile, we also witnessed how the pandemic affected production, trade, logistics and the consumption of goods – including food and other agricultural products. The United Nations and its agencies have worked hard to protect the health and safety of people and the planet, encouraging governments to find ways to build back better. In particular, the Food and Agriculture Organization of the United Nations (FAO) has advocated for transformed agrifood systems that are more efficient, more inclusive, more resilient and more sustainable, to achieve the Four Betters: better production, better nutrition, a better environment and a better life for all, leaving no one behind. This call for the transformation of our agrifood systems has echoed around the world. The United Nations Food Systems Summit in September 2021 was a key step on the path towards this transformation, encouraging all countries to innovate to ensure resilience to the climate crisis, natural disasters and conflicts. Also in 2021, FAO Members agreed on the FAO Strategic Framework for 2022–31, which articulates the Organization’s vision for a sustainable and food-secure world for all in the context of the 2030 Agenda and the Sustainable Development Goals (SDGs). This strategic document became even more important in early 2022, when global food security was impacted by yet another crisis. With each passing day, the war in Ukraine further undermines global food security. Ukraine and the Russian Federation are key pillars of global markets. They are important suppliers of agricultural commodities (wheat, maize, barley and sunflower) and other staple inputs, including fertilizers. 
Combined, the Russian Federation and Ukraine account for around 30 percent of global wheat exports and 20 percent of maize exports. Shortages will likely extend into next year. According to FAO estimates, at least 20 percent of Ukraine’s winter crops – wheat, most notably – may not be harvested, and farmers in Ukraine will likely miss the May planting season. This will further reduce the global food supply, with serious implications for the Europe and Central Asia region and beyond. Nearly 50 low-income, food-deficit countries in Africa and the Near East depend heavily on Ukrainian and Russian grain supplies. Food prices were already on the rise due to concerns over crop conditions, export availabilities and price inflation in the energy, fertilizer and feed sectors. As the war in Ukraine sent shocks through markets for staple grains and vegetable oils, food prices soared even higher, reaching a historic peak in March. Immediate – and, above all, joint and coordinated – actions and policy responses are needed to mitigate the impacts of ongoing food security challenges, and FAO has a critical role to play in this regard. It is crucial that food and fertilizers flow uninterrupted. Agricultural production and trade should continue to supply domestic and global markets, and standing crops, livestock, food processing infrastructure and logistical systems should be protected along supply chains. FAO strongly advises that the Agricultural Market Information System (AMIS) be strengthened as an existing platform for food market transparency and coordinated policy action in times of market uncertainty. Furthermore, countries in Europe and Central Asia – and throughout the world – should improve their efficiency and productivity in managing natural resources, not only to lower the costs of agricultural production but also to strengthen innovation capacity. This is especially crucial when it comes to exported goods. 
Better management of natural resources is a cornerstone of sustainable development. Achieving the SDGs, as outlined in the Organization’s Strategies on Climate Change and on Science and Innovation, is at the core of the FAO Strategic Framework 2022–31. To support the achievement of these goals and to respond to these interconnected challenges, FAO has launched the Regional Technical Platform on Green Agriculture, which provides a digital and user-friendly gateway for sharing information on mainstreaming the green agenda. An international conference to be held on 6 May in Baku, Azerbaijan, will focus on these topics. Finally, we must increase the resilience of livelihoods. The most vulnerable depend on agriculture and natural resources for their livelihoods, and they are usually the hardest hit by shocks and disasters. By working together with governments, partners and communities – before, during and after disasters – FAO is in a unique position to support Members in building more resilient and food-secure futures by linking prevention, preparedness and rehabilitation for sustainable development, and by helping farmers and rural economies become more agile, efficient and innovative. Without losing focus on our strategic goals, FAO actively responds to emergencies to alleviate the effects of conflicts on human lives and livelihoods. The world has never been more interconnected. Conflicts in one region echo in all corners of the globe, and their ramifications are grave for food security and all other development aspirations.
https://www.fao.org/georgia/news/detail-events/en/c/1507007/
Sheikh Mohammed noted that further boosting the UAE's ranking in the global food security indices is also a key requirement for comprehensive sustainable development, and would turn the country into a global hub for food security based on innovation. His Highness’s comments came as he reviewed the results of the Government Accelerators Working Groups to adopt modern agricultural technology. "Achieving food security is key for the UAE Government. We want the UAE to be a pioneer in food security through the development of a comprehensive and sustainable work process for food security that uses future technology to find solutions to the challenges of securing food sources. We are aware that achieving food security for our society is a key pillar of our comprehensive sustainable development journey," His Highness said. He added: "We want a better future for the next generations, in which every member of society has access to secure, healthy food. We want to develop the tools and solutions to overcome the challenges related to food security, create solutions, models and processes to improve the country's agriculture sector, and provide trusted, sustainable resources." Sheikh Mohammed was briefed on the 10 strategic initiatives developed by the Government Accelerators team to adopt modern agricultural technology, which aim to develop creative solutions to increase the efficiency and competitiveness of locally produced food products by providing support and attracting investment to set up sustainable agriculture projects that ensure food security for the UAE and support economic growth. Accompanying Sheikh Mohammed were Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai; Sheikh Maktoum bin Mohammed bin Rashid Al Maktoum, Deputy Ruler of Dubai; and Sheikh Mansour bin Mohammed bin Rashid Al Maktoum.
https://www.sharjah24.ae/en/uae/387319-mohammed-bin-rashid-uae-is-keen-to-achieve-food-security-
By Ronald Joshua ROME (IDN) - The United States Agency for International Development (USAID) and the Food and Agriculture Organization of the United Nations (FAO) have signed a $15 million agreement aimed at boosting the capacity of developing countries to track key agricultural data – information that is essential to good policymaking and that will help track progress toward achieving the Sustainable Development Goals (SDGs). FAO said in a news release on September 7, the USAID donation will cover the first phase of an FAO-led project that will run from 2016 to 2021, starting with pilot efforts in four developing countries – two in sub-Saharan Africa, one in Latin America and one in Asia. A dialogue is under way with eligible countries. The goal of the project is to design and implement a new and cost-effective approach to agricultural data collection in developing world contexts, known as agricultural integrated surveys (AGRIS). FAO said that the AGRIS methodology will not only capture improved annual data on agricultural production, but also broader and more detailed structural information relating to farms, including employment, machinery use, production costs, farming practices, and environmental impacts. It will incorporate recent innovations like remote sensing, GPS, mobile technology and various uses of "big data". These tools will introduce more objective approaches to measuring agricultural performance, in some cases replacing traditional, more expensive methods. In addition to better and more detailed data, AGRIS will also promote the integration of disparate data sources, improve data timeliness and usability, and cut data collection costs. 
“The end result,” according to FAO, “will be high-quality data on a wide range of technical, economic, environmental and social dimensions of agriculture that will help governments analyse and understand the impacts of agricultural policies, assess progress toward the SDGs and other goals, and shape better policies.” The need for better, cost-effective and timely statistical data for agricultural and rural areas is widely recognized. Critical gaps in data production and dissemination persist in several countries – a consequence of long-standing issues such as shortages of financial and human resources, and the resulting limitations in technical capacities, the UN agricultural agency said. FAO has already been addressing such issues through the "Global Strategy to improve agricultural and rural statistics" (GSARS) programme, an umbrella effort working to enhance the capacity of developing countries to produce and use agricultural and rural statistics and to strengthen statistical governance mechanisms. AGRIS is a spin-off of the Global Strategy's research programme. "With efforts like our Global Strategy and 'next generation' tools like AGRIS, we're engaging with partners to spark what we hope will be a new era in agricultural data collection," FAO Chief Statistician Pietro Gennari said. AGRIS is being implemented by FAO within the context of the multi-agency Global Rural and Agricultural Integrated Surveys (GRAInS) Partnership, which is currently establishing a Global Survey Hub in Rome. “Strong national data systems are critical for governments and private sector actors to make informed and smart decisions that foster food security and economic prosperity,” the Assistant to the Administrator for USAID’s Bureau for Food Security, Beth Dunford, said in the FAO news release. 
FAO Director-General José Graziano da Silva said: "In the decades to come, humanity will need to produce more food for a growing population using natural resources such as water, land and biodiversity in a sustainable way – while coping with the challenges imposed by climate change." He added: "Our ability to boost food yields sustainably and meet the SDG hunger eradication target will hinge on the availability of better, cost-effective and timely statistical data for agriculture and rural areas." The 17 SDGs of the 2030 Agenda for Sustainable Development – adopted by world leaders in September 2015 – officially came into force on January 1, 2016. Over the next fifteen years, with the aim of achieving the SDGs, countries will mobilize efforts to end all forms of poverty, fight inequalities and tackle climate change, while ensuring that no one is left behind. In particular, Goal 2 of the SDGs is centred on ending hunger, achieving food security, improving nutrition, and promoting sustainable agriculture. According to FAO, Goal 2 recognizes the interlinkages among supporting sustainable agriculture, empowering small farmers, promoting gender equality, ending rural poverty, ensuring healthy lifestyles, tackling climate change, and other issues. [IDN-InDepthNews – 07 September 2016] Photo: Agro-forestry farmers tend to their crops in Kigoma, Tanzania. Forests are an integral part of the national agriculture policy with the aim of protecting arable land from erosion and increasing agricultural production. Photo: FAO/Simon Maina IDN is the flagship of International Press Syndicate.
https://archive-2016-2017.indepthnews.net/index.php/global-governance/un-insider/662-usaid-helps-fao-track-progress-in-development-goal-2
Lessons from modern history show that visionary policies and rapid technological change can achieve agricultural productivity growth and poverty reduction. However, lessons from the past are not sufficient to motivate action in today’s world. Grand modern challenges, such as climate change, soil degradation, and growing competition for land and other resources, add more layers of complexity to decision making and require rigorous analysis of the decisions’ potential outcomes. PIM’s Flagship 1, Technological Innovation and Sustainable Intensification, assesses alternative scenarios for future food security to identify their inherent challenges, analyzes technological solutions that could address those challenges, and examines the associated public policies and investments in science and innovation required to implement the solutions. The flagship’s research questions include:

What are the key socioeconomic and biophysical drivers of change in agrifood systems? What challenges do these drivers present to the achievement of sustainable food and nutrition security at global, regional, and national scales?

How can agricultural technologies, natural resource management practices, and infrastructure investments address these challenges in ways that manage trade-offs, protect natural capital, and sustain the provision of ecosystem services?

How do investments by governments, the private sector, and other nongovernmental actors in agricultural research and development affect agricultural productivity growth and poverty reduction in developing countries? What are the implications of these investments for outcomes in developing countries and for global agrifood systems?

What alternative policies, investments, institutional mechanisms, and market-based incentives can accelerate innovation, and specifically the discovery, development and delivery of new technology products and services for agriculture in developing countries?
How should extension and other programs be designed to include women and young people as service providers and clients, and will greater inclusiveness accelerate the diffusion and adoption of technology?

This flagship takes a global perspective that transcends a single crop, commodity, technology, or agroecological system. Applications of the scenario analysis and work on innovation systems are regional and national, with current engagement in Africa south of the Sahara (Benin, Botswana, Burkina Faso, Cote D’Ivoire, Ethiopia, Ghana, Kenya, Malawi, Mali, Niger, Nigeria, Senegal, Tanzania, Uganda, Zambia, and Zimbabwe), Asia (Bangladesh, China, India, Indonesia, Laos, Myanmar, Nepal, Pakistan, Philippines, Thailand, Vietnam), Latin America and the Caribbean (Brazil, Colombia, Nicaragua, Peru), and the Middle East and North Africa (Egypt and Tunisia). The global perspective of Flagship 1 allows CGIAR and global partners to assess priorities over a horizon of several decades, and to position work accordingly. National applications allow national leaders to see scenarios for their own countries in a regional and global context, to identify priorities for national agricultural research, to assess the level of investment in agricultural research required, and to consider institutional reforms that will allow investments in science to earn high returns. A few well-established projects and programs led by IFPRI and external partners are part of PIM within Flagship 1, including Agricultural Science and Technology Indicators (ASTI), Global Futures and Strategic Foresight (GFSF), and HarvestChoice.
https://pim.cgiar.org/research/f1/
Lipper L. [2004]
One major step towards achieving food security in developing countries is to improve their ability to achieve seed security. While seed supply channels of commercial agriculture are usually operational even during emergencies, the seed system of subsistence crops, although resilient, [...]

Emerging challenges for food and nutrition policy in developing countries
Kostas G. Stamoulis, Prabhu Pingali, Prakash Shetty [2004]
As the income and the average caloric intake of developing country populations increase, a relative shift in diets is taking place. The general pattern of change can be described as a shift towards more “westernized” diets and away from traditional [...]

Agricultural policy indicators
Timothy Josling, Alberto Valdés [2004]
This paper outlines a methodological approach for use by FAO to collect, analyze and monitor agricultural policy indicators (API) for developing countries. The aim is to establish a consistent and comparable set of policy indicators, allowing analysts to examine whether [...]

Resource abundance, poverty and development
Erwin H. Bulte, Richard Damania, Robert T. Deacon [2004]
The negative correlation between resource endowments and GDP growth remains one of the most robust findings in the empirical growth literature, and has been coined the “resource curse hypothesis”. The policy consequences of this result are potentially far reaching. If [...]

Conflicts, rural development and food security in West Africa
Margarita Flores [2004]
This paper examines food security in the context of conflict in West Africa. The analysis developed in the paper recognises the importance of defining conflict type and the trends in conflict so that conflict and post-conflict policies may be [...]
The food security role of agriculture in Ethiopia
Berhanu Adenew [2004]
This study analyses income, expenditure and food consumption data in Ethiopia to help explain the country’s high probability of national food consumption shortfalls. The study argues that to reach the goal of increased national food security, it is necessary to [...]

Positive externalities of agriculture on mountain tourism in Morocco
Khalil Allali [2004]
This study uses hedonic pricing techniques to estimate the value of agricultural amenities in Morocco’s High Atlas Mountains. The analysis is limited to positive externalities related to land use, providing indicators to better inform policy decisions affecting rural and agricultural [...]

Valuation methods for environmental benefits in forestry and watershed investment projects
Romina Cavatassi [2004]
The understatement or omission of the environmental costs and benefits associated with forest management options results in project evaluations and policy prescriptions that are less than socially optimal. The aim of this paper is to examine the full range of [...]
http://www.fao.org/economic/esa/publications/by-type/en/?page=49&ipp=10&tx_dynalist_pi1%5Bpar%5D=YToxOntzOjE6IkwiO3M6MToiMCI7fQ==
In partnership with the Technical Centre for Agricultural and Rural Cooperation (CTA) and the Pan African Farmers’ Organization (PAFO), Global Open Data for Agriculture & Nutrition (GODAN) has published a new discussion paper, the Data Revolution for Agriculture, on how open data can empower small-scale farmers. "With the emergence of low-cost, readily available open data, knowledge is no longer the privilege of a few, but a right for everyone… a tool that can enable everyone to tap into global intellectual capital" (Foreword by Andre Laperriere, GODAN Executive Director). "In one way or another the concept of open data – especially in agriculture – is going to be answering some big questions about the future of global food security, and undoubtedly the benefits of data-sharing in agriculture are most keenly felt by farmers and rural communities" (Namibian media picks up on Forum for Open Data). "However, the real challenge lies in ensuring that access to quality data is widely available and linked to local solutions for improving food security and nutrition" (Data Revolution for Agriculture). The discussion paper was produced jointly by CTA and GODAN, with contributions from the Pan African Farmers’ Organization. In particular, the paper is based on CTA’s extensive hands-on experience working with small- and medium-scale farmers in the most remote corners of the world, and encompasses a series of studies, reports, and background documents previously produced by CTA on the potential impact and use of agricultural open-data policies and practices across the ACP region. 
The paper stresses that open data can transform the lives of rural populations, stimulate economic growth and, in turn, help the world meet the food security challenges ahead of it. The executive summary defines the following key concepts: open data, closed data, shared data, data devolution, real-time digital data, and big data (often characterized by the "Five Vs"). It then introduces the common features of big data for development: such data could or should be digitally generated, passively produced, automatically collected, geographically or temporally (time-related) trackable, and continuously analyzed. The executive summary also explains the concept of data blending, a method used to extract value from multiple data sources. This process can also help discover correlations between different data sets without the time and expense of traditional data warehouse processes. For instance, at CTA, the documents’ collection is linked to the Food and Agriculture Organization (FAO) AGROVOC system for keywords, and to geocodes for places, to help people navigate CTA’s collection of nearly 50,000 documents online. One chapter introduces the reader to the open data concept and explains why data (and different data types) matter. The data revolution concept is framed with a view both to the Sustainable Development Goals (SDGs) and to private, open and big data for agriculture. According to FAO (2009) estimates, to feed a world population of 9 billion in 2050, food production will need to increase by 70%. In this scenario, accurate agricultural information is essential for achieving sustainable agriculture and food security. Another chapter provides an overview of data for policy, for agricultural development (the knowledge problem in agricultural statistics, knowledge and governance problems, the data revolution in African agriculture) and for investment. 
Moreover, this chapter focuses on the potential impact of open data on the smallholder, the connection between agricultural open data and ICT tools, the implications of big data in agriculture (the leading role of the private sector and potential benefits for small-scale farmers), and precision agriculture. When promoting the use of open data in agriculture and rural development, it is important to adopt an inclusive process allowing the full engagement of local communities and the creation of new businesses for a broad range of stakeholders. Finally, the chapter presents some examples of sources of agricultural data (agricultural census enumeration areas, farm registers from the agricultural census, farm registers based on administrative sources such as business registrations or tax collections, and area sample frames), and explains how participatory data can help communities build datasets. A further chapter describes the following issues and challenges around data: data quality, governance, protection and privacy, data disaggregation, cyber security risks, the digital divide and capacity development, data rights, timing, and data usability. “The quest for open data described in these chapters demonstrates not simply how the world will better feed itself, but how everyone’s quality of life will be improved in the process. CTA’s approach to open data is one of an advocate, but a critical one, emphasizing the need for wisdom and discernment. For a sustainable and most beneficial development of our collective agricultural future, we must together strive to make sound and relevant data available, understood and put to use. That is the challenge proposed in the Data Revolution for Agriculture." (Foreword by Andre Laperriere, GODAN Executive Director). “the data revolution can be a revolution for equality ... mobilizing the data revolution for achieving sustainable development urgently requires such a standard setting, building on existing initiatives in various domains .... 
open data and digital rights management and licensing” (Data Revolution Report, 2014). “Only 2 out of 44 countries in sub-Saharan Africa are considered to have high standards in data collection… Instead of writing large grants, spending days travelling to remote field sites, hiring and training enumerators, and dealing with inevitable survey hiccups, what if instead you could sit at home … and, with a few clicks of a mouse, download the data you needed to study the impacts of a particular program or intervention?” (The Data Revolution for Agriculture paper, 2016). Who really benefits from the opening of data, also with regard to sustainable development (in agriculture)? How do we ensure that people, organizations and even governments are not left behind or excluded from the data revolution? If you have a story to tell or experience to share about improving nutrition and agriculture with open data, register on the AIMS portal and submit your idea, add your comment or write your thoughts!
http://aims.fao.org/es/activity/blog/data-revolution-agriculture-putting-data-work-farmers-and-delivering-benefits
“In the Name of Allah, the Most Compassionate, the Most Merciful, Mr. Antonio Guterres, Secretary-General of the United Nations, Ladies and Gentlemen, At the outset, I would like to express my sincere thanks to UN Secretary-General Antonio Guterres for his initiative to call for this important summit and his keenness to hold it as scheduled virtually without delay despite all the COVID-19 related difficulties and challenges we grapple with daily. Ladies and Gentlemen, Today’s summit comes at a delicate and pivotal moment for our world; thus, we need to exert more efforts to address the complex challenges we have to face. You might agree that the creation of sustainable food systems that achieve food security for our societies is a top-priority issue for us all, particularly amid the exacerbation of climate change as well as the resulting rise in temperatures and water scarcity. This is in addition to climate change-induced soil degradation and desertification of significant areas of agricultural land and the resulting economic, social and food security consequences as well as the complex political situation. Consequently, many world regions, particularly Africa, are grappling with the threat of famine. Thus, quick and effective solutions are needed to save millions of people — the vast majority of whom are women and children — from this existential threat that limits the ability of countries and governments to fully implement the Sustainable Development Goals (SDGs). Egypt hence realized early on that this summit represents a favorable opportunity for coming up with ideas and solutions to these challenges. It can also play a role in promoting international cooperation and mobilizing the necessary funds. Accordingly, Egypt has stepped up efforts to play its role, both nationally and regionally, in the preparatory process for the summit. 
At the national level, Egypt launched a comprehensive national dialogue in December 2020, a dialogue that includes all concerned government institutions, representatives of the private sector, and civil society organizations. As a result, they agreed to adopt a national document to transform into a healthy and sustainable food system. Egypt also joined the global “School Meals Coalition”, believing in the importance of providing healthy food for female and male students and the centrality of ensuring that international partnerships contribute to achieving this goal. Egypt has hence become one of the top countries that have moved forward in implementing this program in the region. At the regional level, Egypt has engaged in formulating a unified African position that reflects the priorities of the peoples of the continent and the particularities of their food security challenges during the summit. We intend to continue working with our African brothers to face these challenges, in an effort to expedite the implementation of the African Union's Agenda 2063. Ladies and Gentlemen, I am confident that the rich deliberations and discussions that will take place at our summit will contribute to supporting our work toward achieving food security for our citizens as well as ensuring the realization of the right to food as one of the fundamental rights of peoples. Hence, I see that our success today depends on our ability to come up with results that contribute to formulating a feasible, sustainable and ambitious food system – one that takes into account the peculiarities and priorities of each country without imposing specific visions or models. Results should also provide the required support through the development of creative financing mechanisms and effective international cooperation that brings countries together with the UN parties and development partners. 
Furthermore, effective and flexible follow-up mechanisms shall be devised nationally and internationally in furtherance of our desired goals and in fulfillment of our legitimate aspirations to meet the needs of our peoples.
https://sis.gov.eg/Story/159231/Speech-by-President-Abdel-Fattah-El-Sisi-at-the-United-Nations-Food-Systems-Summit-2021?lang=en-us
Sahel Consulting and the Syngenta Foundation for Sustainable Agriculture Host Convening on Reorienting Public Agriculture Research and Development in Nigeria

On Wednesday, October 27th, 2021, Sahel Consulting Agriculture and Nutrition Limited and the Syngenta Foundation for Sustainable Agriculture (SFSA) hosted a convening on Reorienting Future Public Agriculture and Food Research and Development in Nigeria for Achieving Sustainable, Nutritious and Climate-Resilient Food Systems. The convening was organized as a hybrid event, with physical participants in Abuja, Nigeria and virtual participants from other parts of Nigeria and the world. The convening was hosted as part of the efforts under a country-level policy study on public agriculture research and development in Nigeria, commissioned by the Syngenta Foundation for Sustainable Agriculture, headquartered in Switzerland, and led by Sahel Consulting in Nigeria. The convening brought together stakeholders within the public and private sectors and development and donor landscape to disseminate the findings from the study and discuss key strategies on how public research activities and funding can be reoriented and supported to be more closely focused on innovations that can tackle gaps in the agri-food system. In her welcome address, Yuan Zhou, the Head of Agricultural Policy at the Syngenta Foundation for Sustainable Agriculture, emphasized the crucial role of demand-driven agricultural research in supporting the innovation process to transform food systems to address climate change, nutrition, and sustainability-related issues to achieve food and nutrition security for Nigeria’s growing population. Representing the Executive Secretary of the Agricultural Research Council of Nigeria (ARCN), Prof. Garba Sharubutu, Dr. Umar Umar, Technical Adviser to the Executive Secretary, delivered the goodwill message. 
He applauded the timely intervention of the study, as it coincides with the assent to the ARCN Amendment Bill by President Muhammadu Buhari on the 8th of October 2021. He also mentioned that the ARCN would review the recommendations of the study to identify areas of implementation, ensuring stronger coordination of agricultural research efforts in Nigeria to achieve food and nutrition security. Representing the Permanent Secretary of the Federal Ministry of Agriculture and Rural Development (FMARD), Mrs. Patience Yamah, Deputy Director of Appointment, Promotions and Discipline at FMARD, delivered the keynote address. She stressed the importance of agricultural research and development for national food security and emphasized the need for future agriculture R&D to focus on themes that guarantee food safety and security, improved nutrition, and health of the population, while also providing jobs, income, and revenue for both the citizens of the country and the government. She highlighted that the reorientation of agriculture research and development must focus on areas such as capacity building of actors to conduct innovative research; increased research focus on high-yielding, climate-resilient seed varieties to withstand climate change variability; veterinary services to guarantee the quality of food available for consumption; use of technology to build resilient food systems; mechanization; and support for supply chain management in the sector to reduce food loss and waste and improve food supply. She called for collaboration and synergy among agricultural research institutes and colleges of agriculture, increased support for extension services, and engagement with end-users during the research process, to obtain end-user preferences and ensure that research is demand-driven. 
The convening also featured a panel discussion facilitated by Lord Paul Boateng, a Board Member of the Syngenta Foundation for Sustainable Agriculture and included key stakeholders across the agriculture research and development landscape in Nigeria as panelists. Discussing the greatest challenge to be addressed in the agricultural research and development landscape, Prof. Lucky Omoigui, a Seed System Specialist at International Institute of Tropical Agriculture (IITA), Kano, highlighted factors such as the low adoption of varieties, low crop yield, lack of technology and mechanization, inadequate extension services, unavailability of funding, misuse of available funds, and lack of a clear policy direction for holistically addressing system challenges as key areas of concern. Dr. Anthony Job, the Group Head, Technical at Value Seeds Limited, Zaria highlighted the low adoption rates of improved varieties as a key challenge and suggested the need for improved extension services in the sector to ensure the delivery of research technologies to farmers. Dr. Audu Grema, Senior Program Officer, Agriculture at the Bill and Melinda Gates Foundation Nigeria stated the misalignment of governance in the research institutions as a challenge and recommended increased collaboration between research institutes and the industry to address the current needs and challenges of end-users. Regarding funding for agricultural research and development, he suggested the establishment of a similar scheme to the Tertiary Education Trust Fund (TETFUND) in the agricultural research landscape, where agricultural companies are required by law to support agricultural research with a percentage of their profits. Dr. 
Ubi Ikpi, the Head of Partnerships and Donor Projects Unit at the Agricultural Research Council of Nigeria, highlighted the gap between research, extension, and the end-users as a major challenge and recommended increased and intentional collaboration among actors in the research system to bridge the existing gap and avoid duplication of research efforts. Prof. Happiness Oselebe, a Professor of Plant Genetics and Breeding and Deputy Vice-Chancellor (Administration) at Ebonyi State University, highlighted low funding for research and poor collaboration between research institutions and educational institutions as challenges. She recommended the establishment of a central database for agriculture research in the country, to enable access to information on research results and findings to identify research gaps, improve collaboration among researchers and avoid duplication of efforts. Panelists and participants also agreed on the need for participatory and demand-driven research, with greater involvement of the private sector, both through funding efforts and technical support for agriculture research, to ensure the development of sustainable research solutions. In her closing remarks, Mrs. Ndidi Nwuneli (MFR), Managing Partner, Sahel Consulting Agriculture and Nutrition Ltd, reemphasized the critical role of agriculture research and development for data-driven decision making by the government, the important role of the private sector to drive investment and for non-profit organizations to engage in the process. She stated the need for collaboration among actors across the public and private sectors, civil society, and development landscape to foster agriculture research and development in Nigeria and charged actors to act urgently to deliver impact. Mr. 
Simon Winter, Executive Director of the Syngenta Foundation for Sustainable Agriculture, in his final remarks, restated the need for research to be driven by end-users and emphasized the need for a linkage between funding for agriculture research and the current need of end-users to achieve sustainable impact. In conclusion, he urged stakeholders to continue to generate awareness for this important topic and act urgently to advocate for the reorientation of future agricultural research and development in Nigeria in a manner that addresses gaps within the agriculture and food system and supports sustainable agriculture.
https://sahelconsult.com/reorienting-public-agriculture-research-and-development-in-nigeria/
New approaches and sustainable partnerships are required to ensure food security for Africa’s rapidly urbanising population, says financial services provider Absa senior agricultural economist Wessel Lemmer. According to a January 2017 report by The Sustainable Development Goals Centre for Africa, the continent boasts 65% of the world’s arable land and food demand is expected to rise by more than 60% by 2050, owing to population growth. Lemmer points out that the continent’s expected population growth might cause its food security challenges to become a chronic issue, particularly owing to challenges of insufficient property rights, agricultural investment and policy, as well as regulatory uncertainty that leads to escalating food prices. “Achieving sustainable food security for urban and rural citizens remains an important priority for governments across the continent.” Food security requires a balance between availability and affordability, as well as coordinated partnerships among stakeholders in the agriculture sector, he notes, adding that exploring innovative approaches to farming is becoming increasingly important. One such innovation, urban agriculture, is “considered to be on the cusp of advancements within the sector” and might be a significant contributor to sustainable access to nutritious food sources in the future, says Lemmer. “Simple shifts can result in better efficiencies and environment-friendly produce that is less prone to climatic changes and ultimately has a positive influence on production yields.” He adds that community gardens and cooperatives can also play a key role in supplementing household budgets and, more importantly, in adding a wider range of vitamins and minerals to consumers’ diets. 
With half of Africa’s population under the age of 25, of whom 72% will seek employment opportunities, Lemmer asserts that agriculture can be a key contributor to not only job creation for unemployed youth but also to economic development on the continent. “These statistics are only set to grow,” he says, noting that, in the next 20 years, more than 330-million Africans are expected to enter the job market. Meanwhile, he points out that countries such as Nigeria have moved towards a more intensified and commercialised product system to cope with growing food demand. Lemmer says the shift from traditional to modern farming methods has been assisted through the increased adoption of modern systems and irrigation, as well as genetically modified seed and fertiliser inputs. Production research is another avenue through which African nations can alleviate crop and livestock vulnerability, which needs greater investment from many countries on the continent, he notes. South Africa is the continent’s most developed and profitable agricultural producer, boasting a capital-intensive and export-orientated agriculture sector typified by large-scale commercial operations that cover about 86% of the cropland. “However, South Africa’s agriculture sector is also taking strain from insufficient investment in agricultural infrastructure, research and development, as well as education and training programmes, particularly for farmers who are starting out,” he says. Lemmer notes that, while Africa has made significant strides through innovations in financial payments and telecommunications, the continent is yet to realise its potential to provide grains and proteins for its citizens and those in other parts of the world. 
“If agriculture is to improve its contribution to economic development and the achievement of sustainable and secure food supply systems on the continent, key stakeholders have to be much more coordinated in their partnerships within not only countries but also the various regions,” he concludes.
http://www.engineeringnews.co.za/article/new-approaches-and-sustainable-partnerships-needed-to-ensure-africas-food-security-agricultural-economist-2017-08-18
Decision to register agricultural land leases in Abu Dhabi ABU DHABI - H.H. Sheikh Mansour bin Zayed Al Nahyan, Deputy Prime Minister, Minister of Presidential Affairs and Chairman of Abu Dhabi Agriculture and Food Safety Authority, has highlighted the importance of regulating agricultural practices and standard operating procedures as a key step towards ensuring the sector’s sustainability, enhancing the food ecosystem and achieving food security strategic goals. Mansour bin Zayed said, "The decision to register agricultural land lease contracts in Abu Dhabi complements the sector's legislative infrastructure, ensuring the optimal use of farms, enhancing agricultural and livestock production, and boosting the income of farm owners." Sheikh Mansour added, "Supporting the sector's legislative infrastructure will encourage investment in agriculture and food production and, therefore, develop the food ecosystem and supply chains by supporting local agricultural production and enhancing its competitiveness." He commended the efforts of both Department of Municipalities and Transport (DMT) and Abu Dhabi Agriculture and Food Safety Authority (ADAFSA), in addition to all relevant local and federal stakeholders, in supporting the development of agriculture and livestock production in Abu Dhabi. He emphasised the importance of collaboration among the competent authorities to support comprehensive development processes in Abu Dhabi as a key milestone to build a knowledge-based sustainable economy and optimise non-oil revenues and contribution to gross domestic product (GDP). DMT has issued decision No.85 of 2021 to register agricultural land leases in Abu Dhabi. Each respective municipality will register farm leases, following applicable laws and regulations, after meeting the requirements. The lessee should be a legal entity and the farm should be used in line with approved agricultural activities, specified by ADAFSA. 
The lease should also be approved by ADAFSA after settling the applicable fees. Falah Al Ahbabi, Chairman of DMT, extended his gratitude to the UAE leadership for their continuous support and follow-up to develop a solid and comprehensive national ecosystem, saying, "The UAE leadership’s food security vision is setting an example in contributing to a sustainable agricultural sector through introducing innovative solutions to sustain food production using advanced technologies that will develop local agricultural production and supply chains and therefore support our national strategic food stockpile." Al Ahbabi said developing further investment mechanisms in agriculture would contribute to increasing operational effectiveness and ensuring resources sustainability. He added that this approach was a key component of DMT’s strategy to optimise the use of agricultural assets, in cooperation with all relevant stakeholders, and regulate farming investment and agricultural land leases as per sustainable development international best practices. Saeed Al Bahri Salem Al Ameri, Director-General of ADAFSA, said, "The decision to register farm leases is an important milestone to diversify revenues generated from agriculture, increase the income of farm owners and livestock breeders and optimise the use of agricultural lands." He added, "The decision will also contribute to attracting more investments in the agricultural sector and enabling owners to outsource development of farms to qualified investors through legally registered and certified contracts that will protect the rights of contracting parties." "This decision is a key step to optimise the agricultural resources and serve the goals of ensuring food security and a sustainable agricultural sector," he concluded. © Copyright Emirates News Agency (WAM) 2021.
https://www.zawya.com/mena/en/legal/story/Decision_to_register_agricultural_land_leases_in_Abu_Dhabi-WAM20210921064134021/
Telomeres are nucleoprotein structures that cap the chromosomal ends, conferring genomic stability. Alterations in telomere maintenance and function are associated with tumorigenesis. In chronic lymphocytic leukemia (CLL), telomere length is an independent prognostic factor and short telomeres are associated with adverse outcome. Though telomere length associations have been suggested to be only a passive reflection of the cell’s replication history, here, based on published findings, we suggest a more dynamic role of telomere dysfunction in shaping the disease course. Different members of the shelterin complex, which forms the telomere structure, show deregulated expression, and POT1 is recurrently mutated in about 3.5% of CLL cases. In addition, cases with short telomeres have higher telomerase (TERT) expression and activity. TERT activation and shelterin deregulation may thus be pivotal in maintaining the minimal telomere length necessary to sustain survival and proliferation of CLL cells. On the other hand, activation of DNA damage response and repair signaling at dysfunctional telomeres, coupled with checkpoint deregulation, leads to terminal fusions and genomic complexity. In summary, multiple components of the telomere system are affected and play an important role in CLL pathogenesis, progression, and clonal evolution. However, the processes leading to shelterin deregulation, as well as the cell-intrinsic and microenvironmental factors underlying TERT activation, are poorly understood. The present review comprehensively summarizes the complex interplay of telomere dysfunction in CLL and underlines the mechanisms that are yet to be deciphered.

Introduction

Telomeres are repetitive DNA sequences at the ends of the chromosomes that play a pivotal role in maintaining genomic stability by capping and protecting the ends from degradation and fusions. Maintenance of telomere length is key to immortalization in cancers. 
In chronic lymphocytic leukemia (CLL), telomere length has been identified as an independent prognostic factor in various studies. In addition, the deregulation of different telomere components has a profound influence on the CLL pathomechanisms. The present review is thus aimed at summarizing the clinical and biological aspects of telomere shortening, mutations and deregulated expression of telomere-associated genes, and mechanisms that are important for telomerase activation in CLL, to pave the way for a deeper understanding of telomere dysfunction in CLL pathogenesis.

Telomeres—Structure and Function

All eukaryotic chromosomes have specialized nucleoprotein structures called telomeres which cap the ends. The nucleic acid component of the telomeres comprises long tracts of DNA repeat sequences, ending with a 3’ single stranded DNA overhang. In mammals, the telomere sequences consist of TTAGGG hexamers, repeated over many kilobases (1). In somatic cells, a part of the DNA sequence is lost at the ends of the chromosomes during each cell division due to the end-replication problem (2, 3). The telomeres at the chromosomal ends thus serve as a buffer preventing loss of vital genetic information. The telomeric repeats are associated with a six-subunit protein complex called shelterin, consisting of TRF1, TRF2, TIN2, TPP1, POT1, and RAP1. TRF1 and TRF2 bind directly to the double stranded telomere sequence and POT1 binds to the 3’ single stranded overhang. TIN2 and TPP1 link TRF1 and TRF2 with POT1, while RAP1 binds solely to TRF2 (4). The 3’ telomere overhang at the chromosomal ends loops to form the T-loop by strand invasion. The T-loop structure, along with the shelterin complex, prevents the chromosomal ends from being recognized as DNA damage, conferring genomic stability (5). In stem cells, germ cells, and in various cancers, the telomere length is maintained, most commonly by the reverse transcriptase enzyme, telomerase (TERT). 
It is an RNA-dependent DNA polymerase that uses the telomerase RNA component (TERC) as a template to synthesize the telomeric DNA (1). Thus, in somatic cells that lack telomerase expression, telomere shortening beyond a critical length activates the senescence checkpoints, beyond which the cells cannot proliferate in the absence of an active telomere length maintenance mechanism. Activation of telomerase is considered one of the hallmarks of malignant transformation (6). In addition, certain neoplasms undergo telomerase-independent alternative lengthening of telomeres (ALT), a recombination-dependent pathway that utilizes telomeres of adjacent chromosomes as a template for elongation and maintenance of critical telomere length (7, 8). In CLL, deregulation of various components of the telomere machinery, such as telomere length, telomerase and shelterin expression, and recurrent, activating POT1 mutations, points to a global telomere dysfunction that plays an important role in disease pathogenesis and evolution.

Telomere Dysfunction and Tumorigenesis

The primary role of telomeres is to confer genomic stability. The shelterin complex shields the telomeres from activation of the DNA damage response signaling at the telomeres. In particular, TRF2 of the shelterin complex is important to prevent activation of ATM (9) and subsequently non-homologous end joining (NHEJ) (10, 11), while POT1 suppresses activation of ATR signaling at the telomeres (12). Critical telomere shortening leads to uncapping of the ends and activation of senescence checkpoints. This is an important tumor suppressor mechanism that functions to eliminate potentially harmful, pre-malignant clones. Progressive shortening of telomeres in Terc knockout mice, crossed through generations G1 to G6, led to increased incidences of spontaneous malignancies and decreased stress response and survival (13). 
Dysfunctional telomeres lead to intra- or inter-chromosomal end fusions resulting in the formation of dicentric chromosomes that undergo breakage at anaphase. This phenomenon is known as the breakage-fusion-bridge (BFB) cycle, which leads to genomic complexity. Evidence of such BFB events was found in many different cancer types (14, 15). Using murine models, it was further demonstrated that loss of checkpoint genes such as TP53, along with telomere dysfunction, led to the development of cancers due to non-reciprocal translocations caused by BFB events (16). Of note, the length of telomeres within a cell varies substantially between the different chromosomes, and it was identified that the presence of one or more critically short telomeres, and not the average telomere length, dictates cellular senescence versus proliferation (17). Though the activation of telomerase or ALT-mediated telomere maintenance is important for cellular immortalization and cancer, a large study with 18,430 samples from tumor and normal tissues from 31 different cancer types identified the telomere length of the tumor tissue to be shorter than that of the corresponding normal tissue in the majority of cancer types (18). In line with this, numerous studies on telomere length associations have shown that CLL tumors have significantly shorter telomere length but higher telomerase expression and activity compared to normal B-cells. Thus, in cancers, the genomic instability associated with telomere dysfunction may promote selection of fit clones which bypass the senescence checkpoints, promoting tumorigenesis, while activation of telomerase or ALT serves to maintain the minimal telomere length to overcome senescence and sustain cell survival.

Methodology for Analysis of Telomere Length in Clinical Samples

Various techniques have been used for the assessment of telomere length in CLL. Telomere length analyzed by telomere restriction fragment (TRF) analysis is considered to be the gold standard. 
The method uses a restriction enzyme that does not cut within the telomere repeat sequence to digest the non-telomeric DNA, followed by resolution on a gel and Southern hybridization (19, 20). Even though the method is highly reproducible, TRF analysis of telomere length has many limitations. Telomere length analyzed using TRF may vary substantially depending on the restriction enzymes used to digest the non-telomeric DNA (21). Additionally, the TRF method is not capable of reliably analyzing very short telomeres due to the requirement of hybridization with a probe. The method is low throughput and requires micrograms of DNA. Since the restriction enzymes might not effectively digest the telomere-associated sequences (TAS) that are adjacent to the telomeres, the method usually overestimates the telomere length of a sample (22). Over the years, newer and high-throughput methods for estimation of telomere length were developed, which made analysis of larger patient cohorts easier. Fluorescence in situ hybridization (FISH) using fluorescently labelled (CCCTAA)n telomere-binding probes is used for analyzing telomere length, where the intensity of the signal directly corresponds to the length of the telomere sequence in a given sample. The method, when coupled with chromosomal banding, is a valuable tool for analyzing the telomere length of individual chromosomes. FISH-based telomere length measurements can be made high-throughput by using flow cytometry (flow-FISH) (23). Another advantage of flow-FISH is that it can be used to analyze the telomere lengths of different cell sub-populations within a given sample by using cell-type specific antibodies. However, the most widely used technique for telomere length measurement is qPCR, based on a method devised by Cawthon et al. (24). 
In brief, qPCR technology is used to detect the amount of telomere sequence per sample (T) by using a telomere-specific primer and normalizing it with a single copy gene (S) to obtain the average telomere length per cell. The method can be used for relative estimation (the T/S ratio) or for absolute telomere length analysis when used with telomere and single copy gene standards (22). The drawback of the TRF, flow-FISH, and qPCR based methods is that they provide a mean telomere length of the sample under analysis and not the chromosome-specific telomere length. Therefore, to understand the telomere length of specific chromosomes with high resolution, the single telomere length amplification (STELA) assay was developed (25). This PCR-based method includes ligation of a linker sequence called a telorette to the 5’ end of the complementary C-rich strand, followed by amplification of the telomere of a specific strand using telorette and chromosome- or allele-specific primers. The PCR products are analyzed by Southern blotting or qPCR. In addition to the above methods that were used for telomere length analysis in CLL, newer techniques have been developed for analyzing different aspects of telomere length. The STELA PCR is capable of analyzing critically short telomeres only on a subset of chromosomes, such as XpYp, that have unique subtelomeric sequences suitable for designing chromosome-specific primers. This limitation was overcome by the universal STELA method (U-STELA) (26). The technique involves digesting the DNA using the enzymes MseI and NdeI, which do not digest the telomeric repeats, followed by ligating adapters complementary to the overhangs created by these enzymes. The non-telomeric parts of the genome that have these adapters on both ends form a pan-handle-like structure due to complementarity between the ends, suppressing PCR amplification. 
On the other hand, the telomeric sequences have a digested 5’ end and a 3’ G-rich overhang that is not processed by the enzymes. Ligation of the telorette to the 3’ overhang allows specific amplification of the telomeres of all chromosomes. This method is useful for genome-wide analysis of the distribution of critically short telomeres. STELA and U-STELA, though highly sensitive, are biased towards detection of short telomeres (<8 kb). The method was further improved and the telomere shortest length assay (TeSLA) was developed (27). In TeSLA, an adapter (TeSLA-T) is first added to the G-rich 3’ overhang, followed by the use of the restriction enzymes BfaI, CviAII, MseI, and NdeI to digest the non-telomeric DNA as well as the non-canonical sub-telomeric DNA and to generate 5’ AT and TA overhangs. The 5’ ends of the digested DNA are then dephosphorylated to prevent re-ligation of the ends. Double stranded DNA adapters with phosphorylated 5’ AT and TA overhangs containing C3 spacers are tagged to the digested ends. Telomeres are then amplified using a primer pair specific for the TeSLA-T and 5’ AT/TA adapters. TeSLA allows high resolution analysis of the distribution of <1 to 18 kb long telomeres. Novel approaches for telomere assessment, such as using a CRISPR/Cas9 RNA-directed nickase system to specifically label telomeres followed by high throughput imaging using a nanochannel array, have also been developed. This technique permits mapping and analysis of individual telomeres based on subtelomere repeat elements (SRE) and unique sequences in the chromosomes. Recently, another method for telomere length measurement by molecular combing or DNA fiber analysis was reported (28), where cells were embedded in agarose plugs followed by protein digestion to obtain unsheared DNA. The DNA was then solubilized and stretched on cover slips with a constant stretching factor of 2 kb/µm. 
Telomeres were analyzed using a telomere specific PNA probe and the DNA was counterstained to validate the terminal location of the telomeres on the chromosomes. Fluorescence microscopy is used to obtain the distribution of telomere lengths within a sample. The method is reported to be sensitive for estimation of telomere lengths from <1 to >80 kb. In CLL, the dynamics of telomere length distribution in cases with stable and progressive disease are not well defined. The above mentioned novel methods may be valuable in monitoring changes in the telomere length landscape within a given case over time and its contribution to clonal diversification, genomic complexity, and disease evolution. Due to the wide range of methods used for telomere length analysis, the comparability of telomere lengths across different CLL studies is limited. Moreover, while TRF and STELA based methods have greater reproducibility, qPCR and FISH based methods need to be very carefully and extensively optimized to limit batch effects (29). One way to improve the use of telomere length as a comparable biomarker would be to include a standardized set of control samples, with telomere length estimated by TRF, in every batch of FISH or qPCR based analyses to detect and normalize for batch variations and to convert the measured relative telomere lengths (T/S ratios or relative fluorescence units) into absolute (TRF) values in kilobases (kb). Telomere Length Associations and Prognostic Impact of Telomere Length in Chronic Lymphocytic Leukemia Early studies on telomere length associations in CLL using TRF analysis of relatively small patient cohorts (n = 58 and n = 61) (30, 31) suggested an association of short telomere length with advanced disease stages, presence of the poor prognostic unmutated IGHV and inferior overall survival (30).
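Returning to the standardization proposed in the methods discussion above: carrying TRF-measured control samples in every qPCR or FISH batch amounts to a simple linear calibration of relative units against absolute kb. The sketch below is illustrative only; all values are hypothetical, and a linear relation between the relative measure and TRF length is assumed.

```python
# Illustrative batch calibration: convert relative telomere measurements
# (e.g., qPCR T/S ratios) to absolute lengths in kb using control samples
# whose telomere length was determined by TRF. All numbers are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Controls run in this batch: (measured T/S ratio, known TRF length in kb)
controls = [(0.6, 4.1), (1.0, 6.0), (1.5, 8.4), (2.0, 10.9)]
slope, intercept = fit_line([c[0] for c in controls], [c[1] for c in controls])

def to_kb(ts_ratio):
    """Convert a relative T/S ratio to an absolute telomere length (kb)."""
    return slope * ts_ratio + intercept

print(round(to_kb(1.2), 1))  # hypothetical patient sample with T/S = 1.2
```

Because the fitted slope and intercept are batch-specific, re-fitting them for every run absorbs batch-to-batch variation and yields comparable absolute values.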
Subsequent studies using TRF (31, 32), flow-FISH (33), and qPCR (34) identified associations of short telomere length with other adverse disease features such as CD38 and ZAP70 expression (35) or lymphocyte doubling time (33). Analysis of telomere length associations with genomic aberration subgroups consistently showed a significant association of short telomeres with the poor prognostic deletion 17p (17p-) and deletion 11q (11q-), while long telomere length was found in cases with deletion 13q (13q-) (36–42). Of note, TP53 and ATM, which are critical checkpoint genes activated upon telomere shortening and dysfunction, are located in the minimally deleted regions of 17p- and 11q-, respectively. Deletion of these genes therefore permits these tumor cell clones to undergo further telomere shortening compared to non-17p-/11q- cases, without activating cell death pathways. In line with this, short telomere length was found to be associated with the presence of mutations in TP53 (37, 40, 41, 43) and ATM (41, 43, 44). Cases with 17p- or TP53 mutation but long telomere length were found to have mutated IGHV (40, 43). Among the recurrently mutated genes in CLL, SF3B1 was found to be associated with short telomere length across different studies (37, 40, 43, 45). For NOTCH1 mutations, some reports suggested an association (37) while others found no association (40, 43) with telomere length. Additionally, beta-2 microglobulin (β2M) and serum thymidine kinase (s-TK) levels were also found to be significantly associated with telomere length in CLL. Overall, the presence of short telomere length was found to be significantly associated with various other poor prognostic clinical and genetic characteristics in CLL, which translates into an inferior survival compared to those with longer telomere length. Despite this strong association with other disease features, telomere length was found to be an independent prognostic factor in different patient cohorts (35, 36, 39, 40, 42, 43, 46).
Accordingly, telomere length was shown to identify poor or favorable risk patients within established prognostic subgroups defined by, e.g., IGHV status, 17p- and 11q-. Overall, the findings suggest telomere length to be a very important prognostic factor in CLL that could be instrumental for risk stratification as well as monitoring and early detection of changes in clonality. The prognostic impact of telomere length in CLL has so far been established only in chemotherapy or chemoimmunotherapy based trials and it would be interesting to study the telomere length associations in the context of novel therapies. Telomere Length and Genomic Complexity Critical shortening of telomere length and de-protection at telomeres, along with loss of checkpoint genes, lead to the development of genetic lesions and tumorigenesis (16). In CLL, various studies have analyzed the impact of telomere dysfunction on genomic complexity. An early indicator of telomere dysfunction is the formation of DNA damage foci at the telomeres called telomere dysfunction induced foci (TIF) (47). CLL cells were found to exhibit TIF as detected by the localization of γH2AX and 53BP1 at the telomeres. In addition, an increase in abnormalities such as telomere deletions/doublets and terminal duplications was observed in TIF+ CLLs (48). Activation of DNA damage response and DNA repair signaling at the telomeres leads to telomeric fusions. In CLL, using the STELA method, the frequency of telomeric fusion events was found to increase with advancing disease stage, and 58% of Binet stage C cases had critically eroded telomeres and fusions. Cases with telomeric fusions also showed large scale genomic rearrangements at the telomeric regions (49), reminiscent of the genomic complexity caused by breakage-fusion-bridge (BFB) cycles during telomere crisis (16). Subsequently, by analysis of a large patient cohort (n = 321), the XpYp telomere length of 2.26 kb was defined as the mean length at which fusions occur (50).
Different studies analyzed the correlation of telomere length with genomic complexity, either by conventional FISH or by SNP array analysis. The analyses showed a significant association of short telomeres with the presence of two or more aberrations (FISH) (36, 38, 51) or with a higher number of copy number alterations (CNAs) (37, 40). Of interest, we observed progressive shortening of telomere length with an increasing number of copy number variations (CNVs) (40). Additionally, short telomeres in CLL were also found to be associated with an increase in uniparental disomy (UPD) and chromothripsis (52). The strong association of telomere shortening with terminal fusions and genomic complexity highlights the central role played by telomere dysfunction in clonal diversification and disease evolution in CLL. Telomere Length Associations—Cause or Consequence? The associations of short telomeres with various adverse prognostic markers such as unmutated IGHV, TP53/ATM mutations, 17p- and 11q- could be explained as a direct outcome or “consequence” of the increased proliferation of the cells harboring these high risk features (53). This is supported by the fact that telomere length in serially sampled CLL cases shows progressive shortening, despite the presence of active telomerase (33, 40). In addition, Röth et al. identified shorter telomere length of naïve and memory T-cells from patients with more aggressive ZAP-70+/CD38+ CLL, which may be due to increased proliferation and expansion of T-cells in this CLL subtype (54). These findings show that, at least in part, the distribution of telomere length among the different CLL subgroups is a direct consequence of their proliferation capacity (Figure 1).
Figure 1 Telomere dysfunction as a consequence: In CLL, poor risk disease features such as unmutated IGHV, deletion 17p (17p-) and deletion 11q (11q-) are shown to be associated with short telomere length, while favorable prognostic subgroups such as mutated IGHV and deletion 13q (13q-) are associated with longer telomeres. It could therefore be considered that telomere length associations are a direct outcome of the proliferation capacity of the different CLL subgroups. On the other hand, telomere length could be considered to play a more active biological role in CLL by being a “cause” of clonal diversification and disease progression. The strong association of telomere length with IGHV mutation status has been documented across all studies, owing to differences in the cell of origin. Mutated IGHV CLLs are considered to develop from CD5+, CD27+, post-germinal center (GC) B-cell subsets (55), where robust telomerase activation and elongation of telomeres are known to occur during the GC reaction (56). The non-GC origin of unmutated IGHV CLL may thus explain the strong association of this subtype with short telomere length. Telomere shortening has been shown to be a tumor suppressive mechanism, whereby cells with telomere length shorter than a threshold undergo DNA damage checkpoint activation, stalling further telomere shortening and controlling cell proliferation (17). In CLL cells with unmutated IGHV, the presence of short telomere length may exert a strong selection pressure for loss of checkpoint genes such as TP53 or ATM, which would eventually allow for further telomere shortening and cell proliferation. This notion is supported by a study on the temporal association of genomic alterations in CLL, where 17p-/TP53 mutations and 11q-/ATM mutations were found to be later events in CLL pathogenesis (57).
Moreover, we observed in a large clinical trial cohort (n = 620) that cases with 17p- and 11q- had the shortest telomere length across the different genomic aberration subgroups and, interestingly, these cases had very short telomeres even when these aberrations were observed in only a small fraction of the tumor bulk. The finding suggested that critical telomere shortening in these cases could precede acquisition of these high-risk aberrations (40). High resolution analysis of genomic fusions in cases with dysfunctional telomeres showed complex inter/intra chromosomal and terminal fusions involving the telomere loci in all of the samples analyzed (n = 9). Strikingly, the telomere fusions also included the loci recurrently altered in CLL (58). Therefore, even though telomere shortening and its association with poor prognostic features could be a consequence or outcome of these poor risk characteristics, recent findings indicate a dynamic role of dysfunctional telomeres in shaping the disease course. Critical telomere shortening confers selection pressure to acquire poor-risk variants and increases disease heterogeneity due to genomic fusion events involving dysfunctional telomeres, thereby promoting disease progression and treatment resistance in conjunction with clonal evolution (Figure 2). Figure 2 Telomere dysfunction as a cause: CLL with mutated IGHV undergo telomerase activation during the germinal center (GC) reaction leading to telomere elongation. These cases with long telomeres follow an indolent disease course and rarely acquire poor-risk features. On the contrary, unmutated IGHV CLL, which have poly-reactive BCRs, undergo progressive telomere shortening with increasing cell proliferation. Critical telomere shortening leads to activation of DNA damage signaling at the telomeres, indicated by the presence of telomere dysfunction induced foci (TIF). Persistent DNA damage at the telomeres may lead to selection of clones with dysfunctional checkpoints (e.g.
TP53 or ATM loss). The presence of very short telomeres together with the loss of checkpoint genes causes telomere fusions and breakage-fusion-bridge (BFB) cycles, eventually leading to heterogeneity and clonal evolution. Thus, according to this hypothesis, telomere length, which is defined very early in pathogenesis based on the cell of origin, plays an active role in disease evolution and progression. Telomerase Expression and its Relation to Disease Features Activation of the enzyme telomerase is considered one of the hallmarks of malignant transformation (6) and is pivotal for sustaining cell proliferation. The predominant mechanism of TERT activation in human cancers is the acquisition of TERT promoter mutations. In contrast, such mutations are rarely reported in CLL. About ten percent of cancers do not depend on telomerase and instead rely on the alternative lengthening of telomeres (ALT) mechanism (59). However, a study on the presence of C-circles and extrachromosomal telomeric repeats (ECTR), which are hallmarks of ALT, did not reveal ALT driven telomere maintenance in CLL (60). Telomerase activity and/or expression in CLL has been studied across various cohorts. Initially, higher telomerase activity was found to be associated with advanced disease stages and progressive disease (30, 61). Telomerase activity was found to have an inverse correlation with telomere length (33, 62), and higher telomerase expression was associated with other poor-risk disease features and was described as a prognostic factor in CLL (42, 63, 64). Intriguingly, unmutated IGHV CLLs, despite the absence of GC mediated TERT activation and telomere lengthening, have short telomeres but high telomerase expression and activity (65). This indicates that the high TERT expression in unmutated IGHV CLL is crucial for maintaining the critical telomere length to ensure cell survival and proliferation.
However, in contrast to mutated IGHV CLL, the processes underlying the high telomerase expression and activity in unmutated IGHV CLL are not well defined. Tumor Microenvironment and Telomere Dysfunction In the absence of the classical oncogenic TERT promoter mutations in CLL, the mechanisms underlying its activation are poorly understood. Genome wide association studies repeatedly identified TERT as one of the susceptibility loci for risk of CLL (66, 67). Studies to identify SNPs in TERT and TERC associated with CLL identified the minor rs35033501 TERT variant (68), as well as the SNPs rs10936599 in TERC and rs2736100 in TERT (69), and the presence of longer telomere length to be associated with CLL. Though telomere shortening in CLL is well characterized as an adverse prognostic factor, it should be noted that telomerase activation and telomere lengthening constitute an important phase in malignant transformation. Also, in cases with poor risk features and rapid disease progression, constant lengthening of telomeres by telomerase is key to sustaining cell survival by counteracting telomere loss due to proliferation. CLLs with unmutated IGHV are known to have a poly-reactive/auto-reactive BCR in contrast to those with mutated IGHV. Apart from this, CLL BCRs can also signal through cell-autonomous signaling (70, 71). These findings, along with the clinical success of BCR signaling inhibitors such as ibrutinib and acalabrutinib (72, 73), highlight the importance of BCR signaling for survival and proliferation of CLL cells. The BCR, along with activation of co-receptors, drives various downstream mechanisms such as activation of PI3K/AKT, NF-κB (74) and MAPK (75) signaling that dictate proliferation and homing and guide interaction with other cells in the microenvironment. Of importance, Damle et al.
showed (76) that stimulation of the BCR using a multivalent BCR ligand, dextran conjugated anti-μ mAb HB57 (HB57-dex), or bivalent F(ab′)2 goat anti-μ antibody led to an increase in telomerase activity, predominantly in CLLs with unmutated IGHV. This BCR driven activation of TERT was accompanied by an induction of cell proliferation. They also identified that the TERT activation was mediated by PI3K/AKT signaling, as the use of a PI3K inhibitor abrogated the BCR mediated TERT activation. Another study identified higher TERT and TERC expression and activity in SF3B1 mutated CLL; however, the underlying mechanism is not well understood (77). Tumor microenvironment mediated signaling is known to contribute to the activation of TERT in different cancers. In breast cancer, STAT3 was found to activate telomerase expression by binding to the TERT promoter (78). In CLL, constitutive activation of JAK2/STAT3 signaling has been reported (79) and it would therefore be interesting to understand its role in the regulation of TERT in CLL. Another factor that may be of interest for driving TERT activation in CLL is hypoxia. HIF-1α plays an important role in the interaction of CLL cells with the microenvironment (80). HIF-1α (81) as well as the level of hypoxia (82) are known to regulate the expression and activity of telomerase and to impact telomere length. Similarly, the Wnt/β-catenin pathway is a direct regulator of TERT (83), which could be of relevance in the context of CLL. Overall, various pathways that are active in CLL are described to play a role in TERT activation, and investigating these mechanisms in the regulation of telomerase in CLL may therefore be of therapeutic interest. Mutations and Deregulated Expression of Telomere-Related Genes in Chronic Lymphocytic Leukemia Different components of the telomere system are found to be mutated or deregulated in CLL. Among the recurrently mutated genes, POT1 mutations have been reported in about 3.5% of the cases.
POT1 was the first telomere structural component known to be mutated in human cancers. POT1 mutations in CLL occur in the OB1 and OB2 domains, altering its binding to the 3’ telomeric tail and leading to de-protection of the ends and genomic instability. In cell line models, loss of POT1 function led to aberrant lengthening of telomeres (84). Accordingly, POT1 mutations were associated with a complex karyotype and are an independent prognostic factor for overall survival in CLL (85). Whole exome sequencing of 66 familial CLLs revealed the presence of germline inactivating mutations of POT1 in four families, as well as of the shelterin components adrenocortical dysplasia homolog (ACD, in two families) and telomeric repeat binding factor 2 interacting protein (TERF2IP, in three families) (86). These telomere component mutations are therefore important predisposing factors for CLL, highlighting the important role of telomere dysfunction in CLL pathogenesis. In addition, expression analysis of telomere related genes in different CLL cohorts has identified deregulation of various telomere components. One study identified a significant downregulation of dyskerin, TRF1, hRAP1, POT1, hEST1A, MRE11, RAD50, and KU80, while TPP1 and RPA1 were upregulated compared to normal B-cells (87). Another study reported a downregulation of TIN2 and ACD in a subset of CLLs, which correlated with an increase in TIF, indicating telomere dysfunction (88). Also, downregulation of the telomere components POT1, TIN2, and TPP1, and high TERT expression were found to be associated with adverse outcome (89). The shelterin components play a very important role by tightly regulating the access of telomerase to the telomeres. Though the mechanisms underlying deregulation of the shelterin components in CLL are unknown, it could be presumed that the downregulation of these genes promotes access of TERT to the telomeres, which would be crucial in maintaining the critical telomere length to sustain cell survival.
However, this deregulated expression of the shelterin components also results in uncapping of the ends and an increase in DNA damage signaling and DNA repair, leading to fusions and genomic complexity. Telomeres and Telomerase Targeted Cancer Therapies Since telomere maintenance is one of the key features of cancers, the telomere system has been considered an attractive target for cancer therapy. Accordingly, therapeutic agents targeting various components of telomeres and the different maintenance mechanisms have been developed and studied across cancers. One of the first inhibitors of telomerase to have progressed to clinical trials is imetelstat. It is a synthetic lipid conjugated 13-mer oligonucleotide that competitively binds to hTR, thereby inhibiting telomerase function (90). In vitro analysis showed that the drug sensitized primary CLL cells to fludarabine (91). Imetelstat is currently being investigated in phase 2 and 3 trials for various solid tumors and hematological malignancies as a single agent or in combination therapies. Small molecule inhibitors of telomerase such as BIBR1532 are currently under pre-clinical evaluation (92). Recently, a covalent telomerase inhibitor (NU-1) that targets the catalytic active site of telomerase has been developed (93). The main disadvantage of telomerase inhibitors is the necessity for continuous long-term treatment to impede telomere maintenance and critically shorten the telomere length. Moreover, long-term treatment with telomerase inhibitors may additionally affect the function of germ cells and stem cells that express telomerase. Another class of molecules that affect telomerase activity includes nucleoside analogs such as 6-thio-2’-deoxyguanosine (6dG), didanosine (ddITP), azidothymidine (AZT-TP), and 5-fluoro-2’-deoxyuridine (5-FdU). These compounds, when incorporated at the telomeric ends by telomerase, lead to chain termination and uncapping of the telomeric ends (94).
Uncapping by nucleoside analogs prevents binding of the shelterin complex, thereby activating the DNA damage response (DDR). Unlike telomerase inhibitors, treatment with nucleoside analogs leads to rapid induction of cell death irrespective of the telomere length. Similarly, compounds such as telomestatin, which are G-quadruplex stabilizers, lead to impaired telomere maintenance by telomerase, thereby inducing DDR and cell death (95, 96). Though limited clinical progress has been achieved with inhibitors of telomerase, various telomere based immunotherapies are being evaluated in clinical trials for different malignancies. Since telomerase is one of the most commonly expressed tumor associated antigens, different methods are being employed to activate adaptive immune responses against telomerase. TERT peptide vaccines such as INO-1400 (NCT02960594—solid tumors), GV1001 (NCT04032067—benign prostatic hyperplasia), UCPVax (NCT04263051—non-small cell lung cancer), and GX301 (97) are currently being tested in clinical trials for cancer therapy. Of note, a DNA vaccine encoding hTERT is being evaluated in a phase 2 study for CLL (NCT03265717). Additionally, adoptive transfer of dendritic cells expressing TERT mRNA (GRNVAC1—NCT00510133) is being studied for the treatment of AML. Another interesting therapeutic approach includes the use of an oncolytic adenovirus that replicates under the control of the hTERT promoter, thereby specifically targeting the tumor cells. The oncolytic adenovirus based therapy telomelysin (OBP-301) is currently being studied for the treatment of a wide range of solid cancers across six different clinical trials. In summary, though direct inhibition of telomerase has shown limited success, hTERT based immunotherapies are rapidly gaining importance for the treatment of a wide range of tumor entities.
In CLL, novel agents such as ibrutinib and venetoclax have achieved tremendous clinical success; however, treatment of Richter transformation has proved challenging. Since Richter syndrome is a highly proliferative tumor type, it might have a greater dependency on telomerase than CLL, and hence the novel TERT based immunotherapies, either as single agents or in combination with checkpoint inhibitors, may be of interest. Conclusion The relation between telomeres and CLL is complex. Though much effort has been put into understanding the prognostic relevance of telomere length and telomerase, various other aspects, such as the mechanisms underlying telomerase activation and the molecular alterations leading to deregulation of the telomere maintenance system, still need to be understood. In summary, deregulation of the different components of the telomere system plays important roles at specific phases of CLL pathogenesis and progression. A deeper understanding of these mechanisms is vital for the development of therapeutic options targeting these disease features, especially in patients who become refractory to novel agents, as combination treatments to improve efficacy, or in the treatment of Richter transformation. Author Contributions The authors BMCJ and SS wrote the manuscript. Both authors contributed to the article and approved the submitted version. Funding This study is supported by Deutsche Forschungsgemeinschaft (DFG) (SFB 1074 projects B1 and B2). Conflict of Interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. References 1. Blackburn EH, Greider CW, Szostak JW. Telomeres and telomerase: the path from maize, Tetrahymena and yeast to human cancer and aging. Nat Med (2006) 12:1133–8. doi: 10.1038/nm1006-1133 2. Wynford-Thomas D, Kipling D. The end-replication problem. Nature (1997) 389:551.
doi: 10.1038/39210 3. Makarov VL, Hirose Y, Langmore JP. Long G Tails at Both Ends of Human Chromosomes Suggest a C Strand Degradation Mechanism for Telomere Shortening. Cell (1997) 88:657–66. doi: 10.1016/S0092-8674(00)81908-X 4. de Lange T. Shelterin-Mediated Telomere Protection. Annu Rev Genet (2018) 52:223–47. doi: 10.1146/annurev-genet-032918-021921 5. Griffith JD, Comeau L, Rosenfield S, Stansel RM, Bianchi A, Moss H, et al. Mammalian Telomeres End in a Large Duplex Loop. Cell (1999) 97:503–14. doi: 10.1016/S0092-8674(00)80760-6 6. Hanahan D, Weinberg RA. Hallmarks of cancer: the next generation. Cell (2011) 144:646–74. doi: 10.1016/j.cell.2011.02.013 7. Lundblad V, Blackburn EH. An alternative pathway for yeast telomere maintenance rescues est1- senescence. Cell (1993) 73:347–60. doi: 10.1016/0092-8674(93)90234-h 8. Bryan TM, Englezou A, Dalla-Pozza L, Dunham MA, Reddel RR. Evidence for an alternative mechanism for maintaining telomere length in human tumors and tumor-derived cell lines. Nat Med (1997) 3:1271–4. doi: 10.1038/nm1197-1271 9. Karlseder J, Broccoli D, Dai Y, Hardy S, de Lange T. p53- and ATM-dependent apoptosis induced by telomeres lacking TRF2. Science (1999) 283:1321–25. doi: 10.1126/SCIENCE.283.5406.1321 10. van Steensel B, Smogorzewska A, de Lange T. TRF2 protects human telomeres from end-to-end fusions. Cell (1998) 92:401–13. doi: 10.1016/S0092-8674(00)80932-0 11. Smogorzewska A, Karlseder J, Holtgreve-Grez H, Jauch A, de Lange T. DNA ligase IV-dependent NHEJ of deprotected mammalian telomeres in G1 and G2. Curr Biol (2002) 12:1635–44. doi: 10.1016/S0960-9822(02)01179-X 12. Denchi EL, de Lange T. Protection of telomeres through independent control of ATM and ATR by TRF2 and POT1. Nature (2007) 448:1068–71. doi: 10.1038/nature06065 13. Rudolph KL, Chang S, Lee HW, Blasco M, Gottlieb GJ, Greider C, et al. Longevity, stress response, and cancer in aging telomerase-deficient mice. Cell (1999) 96:701–12. doi: 10.1016/S0092-8674(00)80580-2 14. 
Gisselsson D, Pettersson L, Höglund M, Heidenblad M, Gorunova L, Wiegant J, et al. Chromosomal breakage-fusion-bridge events cause genetic intratumor heterogeneity. Proc Natl Acad Sci U.S.A. (2000) 97:5357–62. doi: 10.1073/pnas.090013497 15. Gisselsson D, Jonson T, Petersén A, Strömbeck B, Dal Cin P, Höglund M, et al. Telomere dysfunction triggers extensive DNA fragmentation and evolution of complex chromosome abnormalities in human malignant tumors. Proc Natl Acad Sci U.S.A. (2001) 98:12683–8. doi: 10.1073/pnas.211357798 16. Artandi SE, Chang S, Lee SL, Alson S, Gottlieb GJ, Chin L, et al. Telomere dysfunction promotes non-reciprocal translocations and epithelial cancers in mice. Nature (2000) 406:641–5. doi: 10.1038/35020592 17. Hemann MT, Strong MA, Hao LY, Greider CW. The shortest telomere, not average telomere length, is critical for cell viability and chromosome stability. Cell (2001) 107:67–77. doi: 10.1016/S0092-8674(01)00504-9 18. Barthel FP, Wei W, Tang M, Martinez-Ledesma E, Hu X, Amin SB, et al. Systematic analysis of telomere length and somatic alterations in 31 cancer types. Nat Genet (2017) 49:349–57. doi: 10.1038/ng.3781 19. Allshire RC, Dempster M, Hastie ND. Human telomeres contain at least three types of G-rich repeat distributed non-randomly. Nucleic Acids Res (1989) 17:4611–27. doi: 10.1093/nar/17.12.4611 20. Kimura M, Stone RC, Hunt SC, Skurnick J, Lu X, Cao X, et al. Measurement of telomere length by the Southern blot analysis of terminal restriction fragment lengths. Nat Protoc (2010) 5:1596–607. doi: 10.1038/nprot.2010.124 21. Lai T-P, Wright WE, Shay JW. Comparison of telomere length measurement methods. Philos Trans R Soc B Biol Sci (2018) 373:20160451. doi: 10.1098/rstb.2016.0451 22. O’Callaghan N, Dhillon V, Thomas P, Fenech M. A quantitative real-time PCR method for absolute telomere length. Biotechniques (2008) 44:807–9. doi: 10.2144/000112761 23. Hultdin M, Grönlund E, Norrback K, Eriksson-Lindström E, Just T, Roos G.
Keywords: chronic lymphocytic leukemia, telomere dysfunction, telomerase activation, genomic complexity, prognostic factor, clonal evolution
Citation: Jebaraj BMC and Stilgenbauer S (2021) Telomere Dysfunction in Chronic Lymphocytic Leukemia. Front. Oncol. 10:612665. doi: 10.3389/fonc.2020.612665
Received: 30 September 2020; Accepted: 30 November 2020; Published: 15 January 2021.
Edited by: Etienne Moussay, Luxembourg Institute of Health, Luxembourg
Reviewed by: Chris Pepper, Brighton and Sussex Medical School, United Kingdom; Guru Prasad Maiti, Oklahoma Medical Research Foundation, United States
Copyright © 2021 Jebaraj and Stilgenbauer.
https://www.frontiersin.org/articles/10.3389/fonc.2020.612665/full
6.12: Telomeres and Telomerase

In eukaryotic DNA replication, a single-stranded DNA fragment remains at the end of a chromosome after removal of the final primer. This stretch cannot be replicated in the same manner as the rest of the strand because there is no 3’ end to which newly synthesized DNA can attach. The unreplicated fragment results in a gradual loss of chromosomal DNA with each cell duplication, and it can also induce a DNA damage response by enzymes that recognize single-stranded DNA. To avoid this, the ends of chromosomes carry a protective buffer zone called a telomere, composed of a repeating nucleotide sequence bound by a protein complex.

Telomerase, a ribonucleoprotein enzyme composed of both RNA and protein, can synthesize the lost DNA and elongate the telomere. The telomerase RNA component (TERC) contains a template sequence for synthesis of the telomeric repeats. TERC length and sequence vary between organisms: in ciliates it is around 150 nucleotides long, whereas in yeast it is approximately 1,150 nucleotides. The protein component, telomerase reverse transcriptase (TERT), synthesizes short telomeric repeats using the template present in TERC.

In mammals, the telomere is protected by shelterin, a complex of six proteins: telomeric repeat binding factor 1 (TRF1), telomeric repeat binding factor 2 (TRF2), protection of telomeres 1 (POT1), TRF1-interacting nuclear factor 2 (TIN2), TIN2- and POT1-organizing protein (TPP1), and repressor/activator protein 1 (RAP1). The shelterin proteins carry out essential functions such as telomerase recruitment, regulation of telomere length, and provision of binding sites for accessory proteins.

Telomerase expression can increase the lifespan of a cell and allow it to proliferate continuously, a characteristic feature of cancer cells.
Telomerase activity has been observed in almost 90% of cancers, which makes the enzyme a target of current research for new cancer treatments.
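The end-replication problem described above can be illustrated with a minimal simulation, not drawn from the source: each division shortens the telomere by a fixed amount, and the cell stops dividing once a critical length is reached, unless telomerase restores the lost repeats. All numbers below (starting length, loss per division, senescence threshold) are rough illustrative ballpark values, not measurements from any study.

```python
def divisions_until_senescence(start_bp=10_000, loss_per_division=75,
                               senescence_threshold_bp=4_000,
                               telomerase_added_bp=0):
    """Count cell divisions until telomeres reach a critical length.

    Each division shortens the telomere (the end-replication problem);
    telomerase, when active, adds repeats back and can offset the loss.
    Returns None if telomerase fully compensates (no replicative limit).
    """
    if telomerase_added_bp >= loss_per_division:
        return None  # net loss never occurs: the replicative limit is bypassed
    length = start_bp
    divisions = 0
    while length > senescence_threshold_bp:
        length -= loss_per_division      # incomplete lagging-strand synthesis
        length += telomerase_added_bp    # telomerase-mediated elongation
        divisions += 1
    return divisions

# A somatic cell without telomerase hits a finite, Hayflick-like limit;
# a telomerase-positive cell (e.g. a cancer cell) does not.
print(divisions_until_senescence())                        # prints 80
print(divisions_until_senescence(telomerase_added_bp=75))  # prints None
```

With these toy parameters the limit is (10,000 − 4,000) / 75 = 80 divisions; any telomerase activity that matches or exceeds the per-division loss removes the limit entirely, which is the behavior exploited by most cancers.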
https://www.jove.com/science-education/11555/telomeres-and-telomerase
Telomeres are linear guanine-rich DNA structures at the ends of chromosomes. The length of telomeric DNA is actively regulated by a number of mechanisms in highly proliferative cells such as germ cells, cancer cells, and pluripotent stem cells. Telomeric DNA is synthesized by the ribonucleoprotein telomerase, which contains a reverse transcriptase (TERT) subunit and an RNA component (TERC). TERT is highly conserved across species and ubiquitously present in their respective pluripotent cells. Recent studies have uncovered intricate associations between telomeres and the self-renewal and differentiation properties of pluripotent stem cells. Interestingly, the past decade's work indicates that the TERT subunit also has the capacity to modulate mitochondrial function, to remodel chromatin structure, and to participate in key signaling pathways such as the Wnt/β-catenin pathway. Many of these non-canonical functions do not require TERT's catalytic activity, which hints at possible functions for the extensive number of alternatively spliced TERT isoforms that are highly expressed in pluripotent stem cells. In this review, some of the established and potential routes of pluripotency induction and maintenance are highlighted from the perspectives of telomere maintenance, known TERT isoform functions, and their complex regulation.

Keywords: Alternative splicing; TERT; embryonic stem cells; hESC; iPSC; pluripotency; telomerase; telomere
https://www.ncbi.nlm.nih.gov/pubmed/26786236
Telomerase is an enzyme specialized in maintaining telomere length in highly proliferative cells. Loss-of-function mutations cause critical telomere shortening and are associated with the bone marrow failure syndromes dyskeratosis congenita and aplastic anemia, and with idiopathic pulmonary fibrosis. Here, we sought to determine the spectrum of clinical manifestations associated with telomerase loss-of-function mutations.

Methodology/Principal Findings

Sixty-nine individuals from five unrelated families with a variety of hematologic, hepatic, and autoimmune disorders were screened for telomerase complex gene mutations; leukocyte telomere length was measured by flow fluorescence in situ hybridization in mutation carriers and some non-carriers; the effects of the identified mutations on telomerase activity were determined; and genetic and clinical data were correlated. In six generations of a large family, a loss-of-function mutation in the telomerase gene TERT was associated with severe telomere shortening, with a range of hematologic manifestations from macrocytosis to acute myeloid leukemia, and with severe liver disease marked by fibrosis and inflammation, as well as one case of idiopathic pulmonary fibrosis, but not with autoimmune disorders. Additionally, we identified four unrelated families in which loss-of-function TERC or TERT gene mutations tracked with marrow failure, pulmonary fibrosis, and a spectrum of liver disorders.

Conclusions/Significance

These results indicate that heterozygous telomerase loss-of-function mutations are associated with, but are not determinant of, a large spectrum of hematologic and liver abnormalities, with the latter sometimes occurring in the absence of marrow failure. Our findings, along with the link between pulmonary fibrosis and telomerase mutations, also suggest a common pathogenic mechanism for fibrotic diseases in which defective telomere repair plays an important role.
Citation: Calado RT, Regal JA, Kleiner DE, Schrump DS, Peterson NR, Pons V, et al. (2009) A Spectrum of Severe Familial Liver Disorders Associate with Telomerase Mutations. PLoS ONE 4(11): e7926. https://doi.org/10.1371/journal.pone.0007926
Editor: Robyn Klein, Washington University School of Medicine, United States of America
Received: August 7, 2009; Accepted: October 26, 2009; Published: November 20, 2009
Funding: This research was supported in part by the NIH Intramural Research Program (National Heart, Lung, and Blood Institute and National Cancer Institute). Work in the Lansdorp laboratory is supported by grants from the Canadian Institutes of Health Research (MOP38075 and GMH79042) and the National Cancer Institute of Canada (with support from the Terry Fox Run). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: Dr. Lansdorp reports being a founding shareholder in Repeat Diagnostics, a company that specializes in length measurement of leukocyte telomeres with the use of flow-FISH.

Introduction

Telomeres consist of tandem TTAGGG repeats and associated proteins located at the ends of chromosomes that serve to prevent recombination, end-to-end fusion, and activation of DNA damage responses. Telomere attrition occurs with each cell division as a result of DNA polymerase's inability to replicate the extreme 3′-end of template strands. Progressive telomere shortening signals proliferation arrest and cellular senescence via p53, p21, and PMS2.
In order to maintain proliferative capacity without compromising chromosome stability, embryonic and adult stem cells and certain somatic cells counter telomeric attrition by telomerase-catalyzed addition of TTAGGG repeats to the 3′ telomeric overhangs. Constitutional loss-of-function telomerase mutations result in rapid telomere shortening and premature cellular senescence in proliferative somatic tissues. Defective telomere repair has been causally associated with several human diseases. Genetic linkage analysis of the constitutional marrow failure syndrome dyskeratosis congenita led to the discovery of mutations in the genes DKC1 (which encodes dyskerin) and telomerase RNA component (TERC). We reported mutations in TERC and telomerase reverse transcriptase (TERT) to be risk factors for apparently acquired aplastic anemia, a marrow failure disease occurring in patients who lack the typical physical anomalies and family history of dyskeratosis congenita. Mutations in telomerase complex genes also appear in families with idiopathic pulmonary fibrosis, and pulmonary disease, esophageal stricture, malignancy, and liver disease have also been reported in twenty, seventeen, eight, and seven percent of dyskeratosis congenita patients, respectively.

Prior investigation of the clinical manifestations related to telomerase deficiency has been restricted to the screening of patient cohorts with specific diagnoses (bone marrow failure, pulmonary fibrosis). Using a different approach, we studied the family members of five unrelated patients with marrow failure and telomerase mutations; we genetically screened relatives who had a variety of hematologic, hepatic, and autoimmune disorders.
We found that mutations were associated with a wide range of hematologic abnormalities, from macrocytosis to acute myeloid leukemia, and with severe liver diseases characterized by fibrosis, inflammation, and regeneration occurring independently of marrow failure or other affected organs. Idiopathic pulmonary fibrosis was diagnosed in one mutation-carrying individual. No other investigated illnesses tracked with mutational status.

Results

Family A

The proband (Subject A-V-23; Fig. 1A) presented at age twenty-five with a ten-year history of progressive pancytopenia (Fig. 2A). We previously found that he was heterozygous for a loss-of-function TERT K570N mutation. Eighteen members of his immediate family (Fig. 1A, inset) were previously screened, and eight tested positive for the mutation. A long history of hematologic disease was well known in the family, back to the patient's paternal great-great-grandmother, Subject A-I-2, who died of an apparent blood disorder at the age of sixty-five. However, the great-grandmother and the grandfather, Subjects A-II-7 and A-III-16 (the latter previously found to be positive for the mutation), had not manifested hematologic symptoms. The proband's father (Subject A-IV-29) had thrombocytopenia and mild anemia from childhood. By age thirty-three, he developed myelodysplasia, which rapidly progressed to acute myeloid leukemia and death following an induction cycle of chemotherapy (Fig. 2B). The father was an obligatory carrier, as three of his sisters and his father also tested positive and his wife (the proband's mother) tested negative. Another of the proband's heterozygous paternal aunts (Subject A-IV-26) and the index patient's two heterozygous sisters, Subjects A-V-19 and A-V-20, ages forty-seven, twenty-two, and nineteen, respectively, have only macrocytosis, whereas his two wild-type sisters (Subjects A-V-21 and A-V-22) are healthy and without hematologic abnormalities.
The patient's eldest of three sons, now six years old, carries the mutation, but he is asymptomatic and has normal blood counts.

Figure 1 legend: (A) The TERT K570N mutation tracked with hematological disorders and severe liver disease (lower pedigree) in Family A. In the extended family (upper pedigree), several disorders are found, including autoimmune diseases, anemia, thyroid diseases, liver diseases, and multiple miscarriages; however, the mutation was only associated with liver disease and multiple miscarriages. Two consanguineous relationships are not shown: Subject A-IV-17 is a grand-daughter of Subjects A-II-7 and A-II-8, and Subject A-IV-7 is a grandson of Subjects A-III-14 and A-III-15. The genetic status of the immediate family (lower pedigree) and its association with bone marrow failure have been previously reported by us. In smaller pedigrees, (B) a TERC nucleotide 341–360 deletion tracked with liver disease in Family B, (C) liver disease occurred in a family with a TERC nucleotide 28–34 deletion, and (D) in a family with a TERC nucleotide 109–123 deletion. The following are denoted by their abbreviations: common variable immunodeficiency (CVID), aplastic anemia (AA), myelodysplastic syndrome (MDS), acute myeloid leukemia (AML), insulin-dependent diabetes mellitus (IDDM), systemic lupus erythematosus (SLE), idiopathic thrombocytopenic purpura (ITP), and non-alcoholic steatohepatitis (NASH).

Figure 2 legend: (A) The Family A proband's bone marrow was hypocellular with isolated regions of normal cellularity (hematoxylin and eosin [H&E] staining; low-power magnification). (B) The proband's father's bone marrow smear (Subject A-IV-29), illustrating dysplastic changes and an increased number of blasts (H&E, high-power magnification). (C) Subject A-IV-23's liver biopsy, revealing islands of liver surrounded by zones of necrosis and parenchymal collapse (H&E, low magnification). The necrosis was far enough in the past that hepatocytes have mostly disappeared.
In the inset, some of the areas where hepatocytes were preserved still show necrosis near the central veins. Little evidence of inflammation exists. (D) Subject A-IV-25's liver biopsy, showing small portal areas and poorly formed veins (H&E, low-power magnification). (E) The same liver biopsy exhibiting widened hepatocyte plates on the reticulin stain (high-power magnification), but without clear changes of nodular regenerative hyperplasia. (F) The CD34 stain was positive in sinusoidal endothelial cells, consistent with an abnormal proportion of arterial blood flow to the sinuses (immunohistochemistry, low-power magnification). (G) Liver biopsy of Subject A-III-11, in which the hepatic architecture is distorted by bridging fibrosis (low-power magnification); the inset gives a close-up of the fibrosis. The biopsy revealed moderate inflammation but not elevated levels of plasma cells relative to other inflammatory cells. Other changes included interface hepatitis and cholestasis. (H) Subject B-II-3's liver biopsy, demonstrating portal inflammation with interface hepatitis (H&E, low-power magnification). In the inset, Masson staining shows sclerosis around the central vein with perisinusoidal fibrosis. (I) Subject B-III-7's liver biopsy, with mild macrovesicular steatosis in a zone 3 distribution. The inset shows mild lymphocytic portal inflammation with focal interface hepatitis (H&E). (J) Subject C-III-3's liver biopsy shows mild hepatocellular iron accumulation in a pericanalicular pattern; the sinusoidal-lining cells show mild to moderate iron accumulation. The inset illustrates mild variation in hepatocyte nuclear size. (K) Subject C-III-3's reticulin staining, exemplifying several zones in which the hepatocyte plates were abnormally widened, consistent with regeneration. (L) Subject E-II-1's liver biopsy, revealing some portal areas with mild inflammation and all with poorly formed, slit-like veins (H&E).
(M) The reticulin stain showed evidence of nodular regenerative hyperplasia, with zones of plate widening alternating with areas of compression. (N) The CD34 stain was abnormally positive in the sinusoidal endothelial cells by immunohistochemistry, indicating an abnormal proportion of arterial blood flow to the sinuses.

The family has a history of severe liver disease. In our first report, only one paternal aunt (Subject A-IV-23) was found to carry the mutation and have liver disease. She underwent successful liver transplantation at age twenty for a non-A, non-B hepatitis that rapidly evolved to submassive hepatic necrosis with early fibrosis (Fig. 2C). Pathological examination of her liver uncovered massive necrosis without significant hepatitis, which is not specific for a particular etiology. Masson stain revealed early fibrosis in areas of parenchymal collapse, at the edges of portal areas, and around central veins. She experienced anemia during pregnancy but otherwise had no history of hematologic abnormalities. The paternal aunt with a twenty-year history of aplastic anemia (Subject A-IV-25), also heterozygous for the mutation, developed dyspnea and cough at the age of forty-six, after our first report. Spirometry revealed a moderately restrictive pattern and a very severe diffusion defect. Chest computed tomography showed heterogeneous bilateral peripheral lower-lung infiltrates, suggestive of pulmonary fibrosis. Prednisone therapy was initiated, but she developed rapidly accumulating ascites and was diagnosed with non-cirrhotic portal hypertension. She tested negative for hepatitis-associated viruses. Hepatocellular and canalicular enzymes were in the normal range, but her albumin was mildly low (3.6 g/dL) and total bilirubin mildly increased (1.6 mg/dL). Liver biopsy revealed no fibrosis connecting portal areas and no perisinusoidal fibrosis. However, portal areas were small, without a visible vein (Fig. 2D).
Hepatocytes showed variation in cell and nuclear size and varying plate width, consistent with regeneration on reticulin stain (Fig. 2E). CD34 stained abnormally in the sinusoidal endothelial cells around the portal areas and central veins, consistent with an abnormal proportion of arterial blood flow to the sinuses (Fig. 2F). Iron was heavily accumulated, mainly in hepatocytes in zone 1. Her pulmonary function rapidly deteriorated, and she died of respiratory insufficiency. We now expanded the genetic screening to an additional thirty-five relatives (Fig. 1A). Only one relative tested positive for the mutation: a first cousin twice removed (Subject A-III-11), who died at the age of forty-eight with a diagnosis of liver cirrhosis. Her mutational status was demonstrated by sequencing DNA extracted from a paraffin-embedded liver biopsy obtained from hospital archives. She had presented with pyoderma gangrenosum; upon workup, her liver enzymes and bilirubin were elevated (AST, 124 IU/L; alkaline phosphatase, 410 IU/L; albumin, 3.8 g/dL; total bilirubin, 3.8 mg/dL), and liver biopsy (Fig. 2G) demonstrated pre-cirrhotic chronic cholestatic liver disease with a differential diagnosis of primary sclerosing cholangitis or a sclerosing cholangitis secondary to autoimmune hepatitis. Serology for hepatitis A and B was negative at the time. She was treated with azathioprine and prednisone for one month, after which she became severely jaundiced. Her clinical state deteriorated, and she died of fungemia. Her mother, a great-great-aunt of the index patient (Subject A-II-4), died at forty-nine of liver cirrhosis. Unfortunately, no biopsy specimen or clinical records were available. Because her husband was not related to the proband, and her daughter (A-III-11) carried the mutation, she was an obligatory carrier (Fig. 1A).
None of the mutation carriers had nail dystrophy, leukoplakia, or skin hyperpigmentation, the physical features characteristic of dyskeratosis congenita. Although the index patient and some of his relatives showed strikingly premature graying of hair, this characteristic surprisingly did not track with the mutation. There was no history of alcohol consumption or smoking in the family. Family B The proband (Subject B-II-3; Fig. 1B), a 57-year-old man heterozygous for a novel TERC nucleotide 341–360 deletion not found in 188 controls, presented with a six-year history of Barrett's esophagus and recently worsening dysphagia. Imaging revealed a tumor at the gastro-esophageal junction, which was identified as esophageal cancer by biopsy. During the initial evaluation, the patient was pancytopenic, and his bone marrow revealed 5% cellularity with trilineage hypoplasia but without malignant infiltration. His liver enzymes and function tests also were abnormal (alkaline phosphatase, 482 IU/L; albumin, 2.2 g/dL; total bilirubin, 0.9 mg/dL). The tumor was surgically resected, and concurrent liver biopsy showed cirrhosis with foci of lobular inflammation dominated by plasma cells, extensive sinusoidal fibrosis, and Mallory bodies (Fig. 2H). Serological tests for hepatitis-associated viruses were negative. His father (Subject B-I-1) had suffered from gastroesophageal reflux disease and died at age eighty of a poorly differentiated adenocarcinoma at the gastroesophageal junction with hepatic metastasis. The proband's brother (Subject B-II-2) also had gastroesophageal reflux disease and Barrett's esophagus. No pathological specimens were available from his father or his brother for genetic screening. His brother's son (Subject B-III-4) was reported to abuse alcohol and to have cirrhosis, but he tested negative for the mutation.
The proband's thirty-two-year-old son (Subject B-III-7) is heterozygous for the TERC deletion; mild macrocytic anemia was detected during a routine medical visit. Upon further investigation, he was found to have an enlarged fatty liver on ultrasound, though his liver enzymes were normal; his bone marrow biopsy was hypocellular (20%), and his liver biopsy showed macrovesicular steatosis with foci of lobular inflammation, portal chronic inflammatory infiltrate, and mild hepatocellular iron accumulation (Fig. 2I). Tests for hepatitis B and hepatitis C viruses were all negative. Pulmonary function test results were within normal limits. He has an eleven-year history of social alcohol consumption. Family C The proband (Subject C-III-1; Fig. 1C), a thirty-year-old Caucasian male previously found to be heterozygous for a TERC nucleotide 28–34 deletion, had a thirteen-year history of moderate aplastic anemia. His father (Subject C-II-2) had a long history of thrombocytopenia and died at age thirty-two of fungal sepsis; autopsy revealed mixed micro- and macronodular liver cirrhosis, chronic congestive splenomegaly, esophageal varices, diffuse interstitial pulmonary fibrosis with mild chronic inflammation, and a mildly hypoplastic bone marrow (50%). The index patient's paternal uncle (Subject C-II-1) had died of myelodysplasia. We have now screened his brother (Subject C-III-3), who has a long history of mild pancytopenia and elevated liver enzymes; he also tested positive for the TERC deletion. Liver biopsy demonstrated hepatocytes with mild variation in nuclear size, mild hepatocellular iron accumulation in a pericanalicular pattern (Fig. 2J), and several zones displaying abnormally widened hepatocyte plates, consistent with regeneration (Fig. 2K). Family D The proband, a thirty-eight-year-old female, was found to be heterozygous for a novel TERC nucleotide 109–123 deletion not observed in 188 controls.
She had a six-year history of transfusion-independent pancytopenia first detected during pregnancy. At that time, her hemoglobin was 6 g/dL, and her anemia was unresponsive to erythropoietin treatment. Her bone marrow biopsy was 5% cellular with trilineage hypoplasia and a transient clonal chromosome 1 abnormality [der(1)t(1;1)(p36.2;q12)]. Her wild-type mother (Subject D-II-2) had a history of anemia that resolved and is otherwise healthy. Her father (Subject D-II-1), a probable carrier as her mother tested negative, had a fifteen-year history of hepatitis and liver cirrhosis and died at the age of forty-five of massive gastrointestinal bleeding. At necropsy, the liver was cirrhotic and microscopically showed moderate fatty change and hyaline Mallory bodies; spider angiomata, jaundice, ascites, and esophageal varices also were present. He had a history of moderate alcohol consumption. Unfortunately, no pathologic specimen was available for further analysis or genetic testing. The index patient's paternal grandfather (Subject D-I-1) also died of cirrhosis at a young age (pathologic specimens were not available). Family E The proband (Subject E-III-3), a fifteen-year-old male, presented in 1992 with a history of hemorrhage. Laboratory tests revealed pancytopenia and elevated alkaline phosphatase; liver function tests were within normal limits. Bone marrow biopsy showed 25% cellularity with normal cytogenetics. He was treated with androgens, without much benefit to his blood counts. As his hematological status deteriorated, he underwent an unrelated-donor hematopoietic stem cell transplant but died of a transplant-related complication. His father (Subject E-II-1) was forty-four years old when first seen, and at the time he had a ten-year history of thrombocytopenia and leukopenia along with a mildly hypocellular bone marrow.
This unusual association between aplastic anemia and cirrhotic liver disease observed in this pedigree and in an additional family led us to describe a “new familial syndrome” in 1997, which appeared to follow autosomal dominant inheritance; genetic analysis was not available at that time (family E in the present series corresponds to family A in our previous report). Six years post-presentation, the father developed a nonproductive cough and dyspnea on exertion. Spirometry revealed a reduced diffusion capacity, and computed tomography of the chest was consistent with pulmonary fibrosis. During evaluation, some liver enzymes were elevated, and liver biopsy findings were consistent with hepatoportal sclerosis complicated by nodular regenerative hyperplasia. He had no history of ethanol consumption or smoking. Microscopic examination revealed portal areas with chronic inflammatory infiltrate but no interface hepatitis (Fig. 2L). The portal veins were either missing or slit-like in most of the portal areas. Hepatic architecture was subtly distorted by nodularity, with zones of small compressed hepatocytes alternating with zones of large hepatocytes with widened plates (Fig. 2M). CD34 staining was abnormally positive in sinusoidal endothelial cells, mainly around the portal areas and central veins (Fig. 2N). There was sinusoidal dilatation and congestion near central veins. Iron was accumulated within hepatocytes in zones 1 and 2. Ultimately, the respiratory symptoms progressed, and the patient died of respiratory insufficiency. Serological tests for hepatitis B and hepatitis C viruses were negative. Sixteen years post-presentation, DNA extracted from his paraffin-embedded liver specimen revealed a novel heterozygous TERT S368F mutation not present in 528 healthy controls. The proband's paternal grandfather (Subject E-I-1) died at age sixty-four, and autopsy revealed pulmonary fibrosis, nodular hyperplasia of the liver, and splenomegaly.
A paternal aunt (Subject E-II-3) died at age thirty-seven with extensive pulmonary fibrosis and liver cirrhosis. She had macrocytosis in peripheral blood but a normocellular bone marrow. A paternal uncle (Subject E-II-4) died at age twenty-three with massive gastrointestinal bleeding and thrombocytopenia. Autopsy revealed splenomegaly with expanded red pulp, consistent with portal hypertension, and the liver appearance was consistent with portal fibrosis but not cirrhosis. The other paternal uncle (Subject E-II-5) died at age thirty-five, and autopsy revealed aplastic anemia, splenomegaly, macronodular cirrhosis, and portal fibrosis. No samples were available for genetic testing of the other affected individuals, including the proband. The proband's two siblings (Subjects E-III-1 and E-III-2), mother (Subject E-II-2), and cousin (Subject E-III-4) appear to be healthy, and each tested negative for the mutation. Further details of the clinical cases described here are available to other researchers on request. Leukocyte Telomere Length and Genotype In the three generations of Family A analyzed by flow-FISH, mutation carriers (Subjects A-III-16, A-IV-23, A-IV-25, A-IV-26, A-V-19, A-V-20, and A-V-23) had total peripheral blood white cell telomere lengths below the lowest percentile of healthy age-matched controls (Fig. 3A). In contrast, the wild-type individuals analyzed (Subjects A-IV-28, A-V-21, and A-V-22) were found to have telomere lengths between those of their heterozygous family members and the 50th percentile. In the additional four families studied, all tested mutation carriers had lymphocyte and neutrophil telomere lengths below the lowest percentile (Fig. 3A). Interestingly, four non-carriers in families A and E, each of whom had a heterozygous parent, also had short telomeres for their age (Fig. 3A).
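The age-adjusted comparison described above, in which carriers' telomere lengths fall below the lowest percentile of healthy controls, can be sketched in a few lines of Python. The percentile curves below are hypothetical placeholders (a linear decline with age and a fixed spread); a real analysis would derive them from empirical flow-FISH control data.

```python
# Sketch of the percentile-based interpretation of flow-FISH results.
# The control curves are invented for illustration only; real studies
# derive them from measured healthy-control cohorts.

def median_telomere_kb(age_years: float) -> float:
    """Hypothetical 50th-percentile curve for healthy controls."""
    return 8.5 - 0.03 * age_years

def lowest_percentile_kb(age_years: float) -> float:
    """Hypothetical lowest-percentile curve (median minus a fixed spread)."""
    return median_telomere_kb(age_years) - 2.0

def classify(age_years: float, length_kb: float) -> str:
    """Place a measured leukocyte telomere length relative to the curves."""
    if length_kb < lowest_percentile_kb(age_years):
        return "below lowest percentile"  # pattern seen in mutation carriers
    if length_kb < median_telomere_kb(age_years):
        return "between lowest and 50th percentile"
    return "at or above 50th percentile"

# A 40-year-old with a 4.0-kb measurement falls below the hypothetical
# lowest-percentile curve (8.5 - 0.03*40 - 2.0 = 5.3 kb).
print(classify(40, 4.0))
```

In this toy classification, wild-type relatives with intermediate telomere lengths, as described above, would land in the middle band.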
(A) Telomere length in peripheral-blood total white blood cells (ordinate) from patients and their relatives with or without telomerase gene mutations as a function of age (abscissa) compared to healthy controls. Telomere lengths were measured by flow fluorescence in situ hybridization (flow-FISH). Small gray circles represent the telomere lengths for 400 healthy volunteers, and the curve marks the 50th percentile for healthy controls as a function of age. (B) Telomerase activity, measured by the telomeric-repeat amplification assay, of lysates of telomerase-negative WI38-VA13 cells cotransfected with mutated TERC and wild-type TERT expression vectors (2 µg per vector per transfection reaction). Enzymatic activity was normalized to TERC expression as measured by real-time RT-PCR and to the telomerase activity of wild-type TERC, which was set at 100%. Quadruplicate measurements were performed using one microgram of cell lysate protein per reaction. “Empty vector” refers to protein from VA13 cells transfected with an empty pcDNA3-Flag vector in lieu of TERC. Mutation Functional Analysis We previously showed that TERT K570N results in a complete loss of the ability of telomerase to add hexameric repeats to telomeres and that the TERC nucleotide 28–34 deletion reduces telomerase enzymatic activity, each acting by haploinsufficiency. Here we found by co-transfection experiments that the TERC nucleotide 109–123 deletion completely abolished telomerase enzymatic activity, the TERC nucleotide 341–360 deletion reduced telomerase activity to approximately one-third of that observed for wild-type TERC, and TERT S368P reduced telomerase activity to approximately 10% of that observed for wild-type TERT (Fig. 3B). Discussion Hepatic disease is mentioned in reviews of dyskeratosis congenita, with an estimated prevalence of about seven percent of patients, but it is not well characterized and is often attributed to hemochromatosis from frequent blood transfusions.
A few case reports describe cirrhosis and hepatic cell necrosis in affected individuals in autosomal dominant pedigrees. Liver complications are described as more frequent and severe in occasional case reports of bone marrow transplantation in dyskeratosis congenita. In a recent series of 150 patients with idiopathic interstitial pneumonias, four patients (3%) also had cryptogenic liver cirrhosis diagnosed in the sixth or seventh decades of life; none of these four patients, however, carried a telomerase mutation. Our families did not present in childhood or display the characteristic physical anomalies typical of dyskeratosis congenita, except for the premature graying of hair in families A and C, which did not track with the mutations. Similar to the spectrum of hematological findings associated with telomerase mutations, ranging from isolated macrocytosis to acute myeloid leukemia, liver disease was heterogeneous in severity and pathology among telomerase-mutation carriers. However, in our comprehensive histopathological analysis, some findings were recurrent: most patients had both inflammatory and fibrotic components; several patients developed cirrhosis; individuals from three different families (A, C, and E) had histological findings consistent with hepatic nodular regeneration (Table 1). In others, iron accumulation was observed in the absence of a history of blood transfusion or HFE gene mutation. In two instances from different families, CD34 stained positive in sinusoidal endothelial cells, consistent with portal hypertension. Alcohol consumption was observed in affected individuals in families B and D, suggesting a role for environmental factors in triggering organ injury; however, serologies for viral hepatitis were negative for all individuals tested. Of interest, more than a decade ago, we identified two families with a “new familial syndrome” characterized by a combination of bone marrow failure and chronic liver disease.
A recent study reports a family with pulmonary fibrosis, hepatic nodular regenerative hyperplasia, and aplastic anemia, and another describes a case of nodular regenerative hyperplasia and aplastic anemia. Unfortunately, telomerase complex genes were not sequenced in these two families. The wide range in clinical phenotypes associated with telomerase mutations is compatible with the variable genetic penetrance of these mutations and of their effects on telomere shortening. The variety in histopathological findings in liver specimens also suggests that other genetic, epigenetic, and environmental factors are essential for disease development and progression. Our current findings parallel the recently reported association of loss-of-function TERT and TERC mutations with familial idiopathic pulmonary fibrosis. As was hypothesized for pulmonary fibrosis, shortened telomeres may result from dysfunctional telomere repair, increased cell turnover, or a combination of factors, and contribute to liver fibrosis. In murine models, chronic chemical liver injury is associated with increased regeneration defects and liver cirrhosis in telomerase-deficient mice; restoration of telomerase activity by gene transduction abrogates liver cirrhosis and improves liver function. Short and dysfunctional telomeres in Tert-deficient (and p53-mutated) mice also increase susceptibility to toxin-induced hepatocellular carcinoma. In murine livers lacking telomeric repeat binding factor 2, hepatocytes remained viable and regenerated, despite telomeric deprotection and fusion, through endoreduplication. Telomeres in liver cells might be maintained by recombination only as long as lengthy telomere repeat tracts are available on some chromosome ends, which is unlikely when telomerase is deficient. Alternatively, the presence of inflammatory cells in liver sections even at very early stages of liver disease (Subject B-III-7, Fig.
2I) suggests that these cells may be the key mediators of pathogenic fibrosis in the setting of telomere shortening. At damaged sites of chronically injured tissues or organs, such as the liver and lung, the release of inflammatory mediators recruits leukocytes to the extracellular matrix. T cells chronically secrete profibrotic cytokines that activate macrophages and fibroblasts, which subsequently stimulate myofibroblasts, which may be of bone marrow derivation. Telomere erosion in neutrophils and lymphocytes, cells critical to the inflammatory response, may elicit an abnormal, sustained profibrotic response. The pathophysiology of hepatic nodular regenerative hyperplasia is unknown. It is associated with vasculitis and exposure to drugs, such as azathioprine; one patient in our series (Subject A-III-11) developed fatal liver disease after azathioprine administration. More than five percent of autopsied individuals over eighty years old have nodular regenerative hyperplasia, and portal hypertension is a major complication. Taken together, the link of telomerase deficiency to pulmonary fibrosis and the present findings open a new perspective on the investigation of inflammation, regeneration, and fibrosis and lend support to a crucial role of telomerase in these processes. Up-regulation of telomerase expression and activity may be an attractive therapeutic target for the treatment of fibrotic and regenerative diseases. In family A, both the patient and his father showed clonal evolution of a malignant cell population, a finding also observed in families C and D. Leukemic cells appear to require telomerase activity for proliferation. However, telomerase deficiency increases the presence of critically short telomeres, which are prone to chromosomal instability.
Myelodysplasia and acute leukemia are observed in classic dyskeratosis congenita in 1% to 3% of cases, and the risk of developing acute myeloid leukemia in dyskeratosis congenita patients is increased almost 200-fold in comparison to the expected incidence in the population. We recently found an increased rate of constitutional TERT hypomorphic mutations in patients with acute myeloid leukemia. Telomerase mutations were associated with cytogenetic abnormalities, especially trisomy 8 and inv(16). Short telomeres may limit normal stem cell division by inducing proliferation arrest and select for stem cells with dysfunctional telomeres and defective DNA damage responses that are prone to chromosomal instability. Additionally, genome-wide association studies have implicated TERT as a strong susceptibility locus (chr 5p15.33) for a large variety of cancers. Notable characteristics of telomerase mutations in these families include incomplete penetrance and variable expressivity. Some family members have the mutation and short telomeres but appear healthy. Of clinical relevance, these findings indicate that the presence of a telomerase gene mutation and very short telomeres does not necessarily translate into disease. In addition, the phenotypes associated with the mutation are quite disparate in nature. Mutations in telomerase and short telomeres must work in concert with other genetic and environmental factors to result in the diverse phenotypes with which these mutations now have been associated. Methods Ethics Statement Patients and their relatives or guardians of minors provided written informed consent for genetic testing, according to protocols approved by the institutional review board of the National Heart, Lung, and Blood Institute, protocol 04-H-0012 (www.ClinicalTrials.gov identifier: NCT00071045). Clinical investigation was conducted according to the principles expressed in the Declaration of Helsinki.
The continuing review application of the protocol was last renewed by the NHLBI IRB on February 26, 2009; the protocol was last amended on September 9, 2009. Patients who are described in detail have read the manuscript and provided written informed consent for publication. Patients and Controls The probands of each family were referred to the National Institutes of Health Hematology Branch clinic for evaluation of bone marrow failure, and they were diagnosed with aplastic anemia based on conventional bone marrow and blood-count criteria. In Family A, relatives were invited for clinical and genetic evaluation and responded to a questionnaire regarding their health status, specifically addressing hematologic, immune, pulmonary, and hepatic diseases. When a clinical history was positive, subjects were requested to provide clinical tests and medical records for further analysis. Medical records also were obtained from deceased relatives who had had liver or pulmonary disease, upon family approval. In the other families, relatives were invited for clinical evaluation at the NIH Clinical Center, and medical records from outside institutions were obtained after consent; medical records also were obtained from deceased relatives, upon family consent. Complete blood counts were performed at the NIH Clinical Center for clinically healthy subjects found to carry a telomerase mutation, and the diagnosis of macrocytosis or anemia was established based on standard laboratory parameters. Patients and their relatives or guardians of minors provided written informed consent for genetic testing, according to protocols approved by the institutional review board of the National Heart, Lung, and Blood Institute. DNA was extracted from peripheral-blood or buccal mucosal cells as previously described; for Subject A-III-11 and Subject E-II-1, DNA was obtained from paraffin-embedded liver tissue (PicoPure DNA Extraction Kit, Arcturus, Mountain View, CA).
DNA samples from 188 healthy persons served as controls for TERC and TERT gene mutations: 117 were white (94 from Human Variation Panel HD100CAU, Coriell Cell Repositories [http://locus.umdnj.edu/nigms/cells/humdiv.html], and 23 from SNP500Cancer [http://snp500cancer.nci.nih.gov]), 24 black (from SNP500Cancer), 23 Hispanic (from SNP500Cancer), and 24 Asian (from SNP500Cancer). An additional 340 healthy controls were screened for TERT gene mutations: 94 blacks from Human Variation Panel HD100AA and 246 anonymous healthy subjects of Hispanic origin (52 percent Peruvians, 28 percent Latin Americans, and 20 percent Pima and Maya Amerindians). Mutational Analysis TERT and TERC genotyping was performed as previously described. Telomere Length by Flow-FISH Telomere length of peripheral blood leukocytes was measured, after red cell lysis with ammonium chloride solution, by flow cytometry-fluorescence in situ hybridization (flow-FISH) as reported previously. Telomerase Enzymatic Activity The wild-type vector was mutagenized by Mutagenex, the sequence was confirmed by direct sequencing of the whole insert, and plasmids were purified using the HiSpeed Plasmid Maxi Kit (Qiagen). Vectors containing disease-associated telomerase mutations were transfected into VA13 cells, and telomerase activity was measured as previously described, with slight modifications. In the present study, we used a fluorescent telomeric-repeat amplification protocol (TRAP; TRAPeze XL assay, Chemicon), and fluorescence was measured in a Victor 3 multilabel plate reader (PerkinElmer). Telomerase activity was measured in total product generated (TPG) units based on the standard curve results, calculated strictly according to the manufacturer's manual, and expressed as activity relative to wild-type TERT and TERC, which was set at 100%. Acknowledgments The authors are greatly indebted to the patients and their family members for their cooperation.
We thank Irma Vulto for her excellent technical assistance and Olga Nuñez and Barbara Weinstein for patient care. Author Contributions Conceived and designed the experiments: RTC SJC PML NSY. Performed the experiments: RTC JAR DEK NRP. Analyzed the data: RTC JAR DEK DSS NRP SJC PML NSY. Contributed reagents/materials/analysis tools: RTC DSS VP SJC PML NSY. Wrote the paper: RTC JAR DEK SJC PML NSY. References - 1. Blackburn EH (2001) Switching and signaling at the telomere. Cell 106: 661–673. - 2. Olovnikov AM (1971) [Principle of marginotomy in template synthesis of polynucleotides]. Dokl Akad Nauk SSSR 201: 1496–1499. - 3. Aubert G, Lansdorp PM (2008) Telomeres and aging. Physiol Rev 88: 557–579. - 4. Blasco MA (2007) Telomere length, stem cells and aging. Nat Chem Biol 3: 640–646. - 5. Blackburn EH, Greider CW, Henderson E, Lee MS, Shampay J, et al. (1989) Recognition and elongation of telomeres by telomerase. Genome 31: 553–560. - 6. Calado RT, Young NS (2008) Telomere maintenance and human bone marrow failure. Blood 111: 4446–4455. - 7. Heiss NS, Knight SW, Vulliamy TJ, Klauck SM, Wiemann S, et al. (1998) X-linked dyskeratosis congenita is caused by mutations in a highly conserved gene with putative nucleolar functions. Nat Genet 19: 32–38. - 8. Vulliamy T, Marrone A, Goldman F, Dearlove A, Bessler M, et al. (2001) The RNA component of telomerase is mutated in autosomal dominant dyskeratosis congenita. Nature 413: 432–435. - 9. Fogarty PF, Yamaguchi H, Wiestner A, Baerlocher GM, Sloand EM, et al. (2003) Late presentation of dyskeratosis congenita as apparently acquired aplastic anaemia due to mutations in telomerase RNA. Lancet 362: 1628–1630. - 10. Yamaguchi H, Calado RT, Ly H, Baerlocher GM, Kajigaya S, et al. (2005) Mutations in TERT, the gene for telomerase reverse transcriptase, in aplastic anemia. N Engl J Med 352: 1413–1424. - 11. Armanios M, Chen J-L, Chang Y-PC, Brodsky RA, Hawkins A, et al.
(2005) Haploinsufficiency of telomerase reverse transcriptase leads to anticipation in autosomal dominant dyskeratosis congenita. Proc Natl Acad Sci USA 102: 15960–15964. - 12. Armanios MY, Chen JJ, Cogan JD, Alder JK, Ingersoll SG, et al. (2007) Telomerase mutations in families with idiopathic pulmonary fibrosis. N Engl J Med 356: 1317–1326. - 13. Tsakiri KD, Cronkhite JT, Kuan PJ, Xing C, Raghu G, et al. (2007) Adult-onset pulmonary fibrosis caused by mutations in telomerase. Proc Natl Acad Sci USA 104: 7552–7557. - 14. Dokal I (2000) Dyskeratosis congenita in all its forms. Br J Haematol 110: 768–779. - 15. Xin ZT, Beauchamp AD, Calado RT, Bradford JW, Regal JA, et al. (2007) Functional characterization of natural telomerase mutations found in patients with hematological disorders. Blood 109: 524–532. - 16. Qazilbash M, Liu J, Vlachos A, Fruchtman S, Messner H, et al. (1997) A new syndrome of familial aplastic anemia and chronic liver disease. Acta Haematol 97: 164–167. - 17. Alder JK, Chen JJ, Lancaster L, Danoff S, Su SC, et al. (2008) Short telomeres are a risk factor for idiopathic pulmonary fibrosis. Proc Natl Acad Sci USA 105: 13051–13056. - 18. Rocha V, Devergie A, Socie G, Ribaud P, Esperou H, et al. (1998) Unusual complications after bone marrow transplantation for dyskeratosis congenita. Br J Haematol 103: 243–248. - 19. Talbot-Smith A, Syn WK, MacQuillan G, Neil D, Elias E, et al. (2009) Familial idiopathic pulmonary fibrosis in association with bone marrow hypoplasia and hepatic nodular regenerative hyperplasia: a new “trimorphic” syndrome. Thorax 64: 440–443. - 20. Gonzalez-Huezo MS, Villela LM, Zepeda-Florencio MC, Carrillo-Ponce CS, Mondragon-Sanchez RJ (2006) Nodular regenerative hyperplasia associated to aplastic anemia: a case report and literature review. Ann Hepatol 5: 166–169. - 21. Rudolph KL, Chang S, Millard M, Schreiber-Agus N, DePinho RA (2000) Inhibition of experimental liver cirrhosis in mice by telomerase gene delivery. Science 287: 1253–1258. - 22.
Farazi PA, Glickman J, Horner J, DePinho RA (2006) Cooperative interactions of p53 mutation, telomere dysfunction, and chronic liver damage in hepatocellular carcinoma progression. Cancer Res 66: 4766–4773. - 23. Lazzerini Denchi E, Celli G, de Lange T (2006) Hepatocytes with extensive telomere deprotection and fusion remain viable and regenerate liver mass through endoreduplication. Genes Dev 20: 2648–2653. - 24. Wynn TA (2007) Common and unique mechanisms regulate fibrosis in various fibroproliferative diseases. J Clin Invest 117: 524–529. - 25. Wanless IR (1990) Micronodular transformation (nodular regenerative hyperplasia) of the liver: a report of 64 cases among 2,500 autopsies and a new classification of benign hepatocellular nodules. Hepatology 11: 787–797. - 26. Alter BP (1994) Dyskeratosis congenita. In: Young NS, Alter BP, editors. Aplastic Anemia, Acquired and Inherited. Philadelphia: W.B. Saunders. pp. 325–339. - 27. Alter BP, Giri N, Savage SA, Rosenberg PS (2009) Cancer in dyskeratosis congenita. Blood 113: 6549–6557. - 28. Calado RT, Regal JA, Hills M, Yewdell WT, Dalmazzo LF, et al. (2009) Constitutional hypomorphic telomerase mutations in patients with acute myeloid leukemia. Proc Natl Acad Sci USA 106: 1187–1192. - 29. McKay JD, Hung RJ, Gaborieau V, Boffetta P, Chabrier A, et al. (2008) Lung cancer susceptibility locus at 5p15.33. Nat Genet 40: 1404–1406. - 30. Wang Y, Broderick P, Webb E, Wu X, Vijayakrishnan J, et al. (2008) Common 5p15.33 and 6p21.33 variants influence lung cancer risk. Nat Genet 40: 1407–1409. - 31. Rafnar T, Sulem P, Stacey SN, Geller F, Gudmundsson J, et al. (2009) Sequence variants at the TERT-CLPTM1L locus associate with many cancer types. Nat Genet 41: 221–227. - 32. Hosgood HD 3rd, Cawthon RM, He X, Chanock SJ, Lan Q (2009) Genetic variation in telomere maintenance genes, telomere length, and lung cancer susceptibility. Lung Cancer. Epub ahead of print. - 33.
Kaufman DW, Kelly JP, Levy M, Shapiro S (1991) The Drug Etiology of Agranulocytosis and Aplastic Anemia. New York: Oxford University Press. - 34. Yamaguchi H, Baerlocher GM, Lansdorp PM, Chanock SJ, Nunez O, et al. (2003) Mutations of the human telomerase RNA gene (TERC) in aplastic anemia and myelodysplastic syndrome. Blood 102: 916–918. - 35. Packer BR, Yeager M, Staats B, Welch R, Crenshaw A, et al. (2004) SNP500Cancer: a public resource for sequence validation and assay development for genetic variation in candidate genes. Nucleic Acids Res 32: D528–D532. - 36. Baerlocher GM, Vulto I, de Jong G, Lansdorp PM (2006) Flow cytometry and FISH to measure the average length of telomeres (flow FISH). Nat Protoc 1: 2365–2376.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0007926
Can telomeres predict lifespan? Is the telomere theory of aging valid? We will explore, but first some necessary background. Our body is made up of roughly 30 trillion cells, which form a diverse range of tissues and organs. Within almost every cell is a structure known as the nucleus, which contains 23 pairs of chromosomes. Each of these chromosomes is made up of millions of “bases” which together describe our individual genome. The four DNA bases A, T, C and G combine in specific long chains known as genes, which can be read by the cell to produce proteins required for its survival and function. As you can imagine, the specific DNA code which makes up a gene is very important. When errors are introduced, e.g. a DNA base is changed or missed, the protein coded for by that gene may not be produced at all, may not be as functional or may act in a completely different way altogether. When a specific disease arises from one of these changes, they are termed “mutations.” For example, mutations in the TP53 gene are strongly associated with the development of cancers. Therefore, protecting our DNA sequence from damage or degradation is of great importance if we want to stay healthy. Whilst each cell in our body contains numerous mechanisms to prevent DNA damage, today I’m going to talk about the role of a structural feature of our chromosomes, called telomeres, and importantly how they are associated with aging and longevity. What are telomeres? Telomeres are structural ‘caps’ at each end of a chromosome. You can think of them as “helmets” that protect our essential DNA. Each telomere is comprised of a short repetitive sequence of DNA bases (…TTAGGG…) which is repeated thousands of times.
This repetitive sequence doesn’t encode any genes but has important structural functions, including preventing the ends of the chromosomes from fusing with one another, helping to organise the chromosomes in the nucleus of the cell and, importantly, protecting against loss of important DNA sequences required for normal cellular processes (1). Every time a cell divides, on average, a few hundred bases are lost from the ends of the chromosomes as they replicate. Telomeres form a buffer so we don’t lose “essential DNA.” Because there are thousands of bases of the repetitive telomere sequence, this sequence is lost instead of the important gene-encoding DNA. Without telomeres, every time a cell divided, whole genes could be lost, along with critical pieces of the genetic code. When the telomere becomes short enough that a chromosome reaches a ‘critical length’ (where further replication of the cell could result in loss of gene sequence), cell division stops and the cell becomes ‘senescent’ (no longer divides) or undergoes cell death. So it could be said that telomeres provide ‘genomic stability’ (1), but also that their length can be indicative of the relative age of a cell. Telomere maintenance As telomere length is so important for cell survival, a specific enzyme known as ‘telomerase’ exists which functions to maintain telomere length after cell division. Telomerase adds the lost TTAGGG repeat sequence back onto the ends of chromosomes, maintaining their length. However, telomerase is not found in every cell within the body. Rather, it is expressed specifically in germ and stem cells. Germ cells are those that divide to produce gametes (sperm and egg cells), and stem cells are those embedded within tissues which divide to replenish and maintain the cell population within the body.
The cells which are produced from stem cell division are termed somatic cells, and they make up the vast majority of cells found within the body. Importantly, telomerase is not expressed in these cells, meaning that telomere shortening will occur, eventually leading to the cells becoming senescent or undergoing cell death. Right now I bet you’re asking: if telomerase is so useful, why is it not expressed in every cell? Well, senescence and cell death are key steps in maintaining healthy tissue. Somatic cells are typically highly active and often susceptible to damage, and so their ‘death’ and replacement is key in maintaining normal tissue function. Additionally, regulating telomerase expression is a good way of preventing uncontrolled cell division. A major hallmark of many cancers is expression of telomerase, whereby cells that shouldn’t divide are able to do so uncontrollably, forming tumors (1). Indeed, several novel anti-cancer therapies are focusing on targeting telomerase activity in order to ‘turn off’ the cancerous cells. As you can see, active telomerase leading to ‘immortal’ cells is not always desirable. But is it possible to draw any conclusions from telomere length and telomerase activity in regard to an individual’s health and potential lifespan?

Telomere length and general aging

As shortening telomeres beyond a certain ‘critical length’ leads to cell death, the next logical step for researchers was to investigate whether telomeres shorten with increased age of the whole human body. Most studies measure telomere length in blood cells known as leukocytes, or white blood cells, as this is the easiest tissue to access via a simple blood sample. The ‘leukocyte telomere length’ (LTL) has also been shown to correlate well with the telomere length in other tissues in the body, meaning the LTL is seen as a good overall indicator of an individual’s telomere length as a whole.
There are numerous studies on both sides of the fence debating whether telomere length is significantly associated with mortality and age-related diseases. The final conclusions from a few papers are shown below, with my emphasis added in bold:

Leukocyte telomere length had a statistically discernible, but weak, association with mortality, but it did not predict survival as well as age or many other self-reported variables. Although telomere length may eventually help scientists understand aging, more powerful and more easily obtained tools are available for predicting survival. (2)

Although telomere length is implicated in cellular aging, the evidence suggesting telomere length is a biomarker of aging in humans is equivocal. More studies examining the relationships between telomere length and mortality and with measures that decline with “normal” aging in community samples are required. These studies would benefit from longitudinal measures of both telomere length and aging-related parameters. (3)

The evidence supporting the hypothesis that telomere length is a biomarker of aging is equivocal, and more data are required from studies that assess telomere length, aging-related functional measures, and collect mortality data. An area for future work is the clarification of which telomere length measure is the most informative and useful marker (e.g., mean, shortest telomere, longitudinal change). Nevertheless, in the near future, longitudinal designs will provide important information about within-individual telomere length dynamics over the life span. Such studies will also elucidate whether the relationships between telomere length and aging-related measures vary across the life span. (4)

Other emerging biomarkers of aging, such as the ‘epigenetic clock’, which uses DNA methylation (4,5), may prove more insightful.
SNPs, telomere length and TERT genes

There are many diseases, ranging in severity, associated with telomere dysfunction, termed ‘telomere syndromes’. In these syndromes, genes that are involved in maintaining telomere length are mutated in such a way as to cause the resulting protein to function incorrectly or not at all (reviewed here (6) and also shown in the figure below).

Image from: The Short and Long Telomere Syndromes: Paired Paradigms for Molecular Medicine. Stanley et al., 2015. Curr Opin Genet Dev.

The lines mark out typical telomere length by age, dividing the population into percentiles based on average telomere length. As you can see, most short telomere syndromes are associated with those in the bottom percentile of telomere length. However, we’re interested in seeing if there are any more ‘common’ SNPs, such as the raw SNP data you’re likely to find in direct-to-consumer offerings, which may have an impact on telomerase activity or telomere length, and whether these can have any impact on aging and health. As it stands, results are unclear, a major reason being the high variability in telomere length between individuals. This large variability means that only very large studies are going to identify any important trends. One large genome-wide association study (GWAS) meta-analysis of LTL in over 37,000 European individuals identified seven SNPs associated with mean LTL. These SNPs and the ‘effect allele’ are listed in the table below (8).
| refSNP ID | Major allele, Minor allele (Risk) | Relative SNP position | Genes in region with known function in telomere biology |
|---|---|---|---|
| rs11125529 | C/A | Within intron of ACYP2 | – |
| rs10936599 | C/T | Synonymous change in MYNN | TERC |
| rs7675998 | G/A | Downstream of NAF1 | NAF1 |
| rs2736100 | G/T | Within intron of TERT | TERT |
| rs9420907 | A/C | Within intron of OBFC1 | OBFC1 |
| rs8105767 | A/G | Upstream of both ZNF257 and ZNF208 | – |
| rs755017 | A/G | Synonymous change in ZBTB46 | RTEL1 |

Table adapted from (8). Synonymous change = no effect on protein. Intron = non-coding section of gene. Upstream = in the DNA sequence before the start of a gene. Downstream = in the DNA sequence after the end of a gene.

As with all GWAS, it is important to remember that any findings are associative only: they correlate with the trait the researchers are investigating, but may not be causative of that change. Whilst it may not be possible to link the SNPs above directly to telomere length and aging, one SNP does have a strong link with several telomere-related diseases. rs10936599 is a synonymous change in the MYNN gene, at the locus of the telomerase RNA component (TERC) gene; TERC forms a key part of the enzyme telomerase, providing the RNA template it uses. The C>T change in this SNP is associated with shorter telomere length and numerous diseases including several cancers (9,10,11) and heart disease (12). Although direct evidence is lacking, it is thought that shortened telomeres lead to increased DNA damage in these patients, which in turn leads to the development of cancer or heart disease.

Take-home message

Taking this together with the weak association between telomere length and mortality/age-related diseases discussed above, it may still be a few years before a concrete link is made between SNP data and how long you are likely to live. But this is cutting-edge, exciting science, so expect to see lots of new data soon!
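As an illustration of how raw direct-to-consumer SNP data could be checked against these seven loci, here is a small Python sketch. The rsID-to-allele pairs are taken from the table above; the helper function and the example genotype string are hypothetical, and this is a toy annotation exercise, not a clinical or predictive tool.

```python
# Toy annotation helper for the seven LTL-associated SNPs (alleles as in
# the table above). Illustrative only; not a clinical or predictive tool.
LTL_SNPS = {           # rsID: (major allele, minor allele)
    "rs11125529": ("C", "A"),
    "rs10936599": ("C", "T"),
    "rs7675998":  ("G", "A"),
    "rs2736100":  ("G", "T"),
    "rs9420907":  ("A", "C"),
    "rs8105767":  ("A", "G"),
    "rs755017":   ("A", "G"),
}

def minor_allele_count(rsid, genotype):
    """Count copies of the minor allele in a genotype string such as 'CT'."""
    _major, minor = LTL_SNPS[rsid]
    return sum(1 for base in genotype.upper() if base == minor)

# A raw-data file reporting 'CT' at rs10936599 carries one minor allele.
# Remember: these associations are correlational, not causal.
print(minor_allele_count("rs10936599", "CT"))  # 1
```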
Also, researchers are already developing compounds that are capable of extending telomere length in humans (13), which may eventually be used to target telomere associated diseases and disorders, or potentially even extend the life of our cells and tissues.
https://www.mygenefood.com/blog/can-telomeres-predict-lifespan/
Telomerase confers limitless proliferative potential to most human cells through its ability to elongate telomeres, the natural ends of chromosomes, which otherwise would undergo progressive attrition and eventually compromise cell viability. However, the role of telomerase in organismal aging has remained unaddressed, in part because of the cancer-promoting activity of telomerase. To circumvent this problem, we have constitutively expressed telomerase reverse transcriptase (TERT), one of the components of telomerase, in mice engineered to be cancer resistant by means of enhanced expression of the tumor suppressors p53, p16, and p19ARF. In this context, TERT overexpression improves the fitness of epithelial barriers, particularly the skin and the intestine, and produces a systemic delay in aging accompanied by extension of the median life span. These results demonstrate that constitutive expression of Tert provides antiaging activity in the context of a mammalian organism.
https://pubmed.ncbi.nlm.nih.gov/19013273/
Istituto di Genetica Molecolare, Consiglio Nazionale delle Ricerche, Pavia, Italy

Telomerase canonical activity at telomeres prevents telomere shortening, allowing chromosome stability and cellular proliferation. To perform this task, the catalytic subunit (telomerase reverse transcriptase, TERT) of the enzyme works as a reverse transcriptase together with the telomerase RNA component (TERC), adding telomeric repeats to DNA molecule ends. Growing evidence indicates that, besides the telomeric-DNA synthesis activity, TERT has additional functions in tumor development and is involved in many different biological processes, among which cellular proliferation, gene expression regulation, and mitochondrial functionality. TERT has been shown to act independently of TERC in the Wnt-β-catenin signaling pathway, regulating the expression of Wnt target genes, which play a role in development and tumorigenesis. Moreover, TERT RNA-dependent RNA polymerase activity has been found, leading to the genesis of double-stranded RNAs that act as precursor of silencing RNAs. In mitochondria, a TERT TERC-independent reverse transcriptase activity has been described that could play a role in the protection of mitochondrial integrity. In this review, we will discuss some of the extra-telomeric functions of telomerase.

Keywords: telomerase, TERT, telomere, transformation, cancer, apoptosis, mitochondria, RNA interference

Citation: Chiodi I and Mondello C (2012) Telomere-independent functions of telomerase in nuclei, cytoplasm, and mitochondria. Front. Oncol. 2:133. doi: 10.3389/fonc.2012.00133

Received: 31 July 2012; Accepted: 18 September 2012; Published online: 28 September 2012.
https://www.telomerescience.com/telomereindependent-functions-of-telomerase-in-nuclei-cytoplasm-and-mitochondria-124.html
The telomerase holoenzyme, which has a highly conserved role in maintaining telomere length, has long been regarded as a high-profile target in cancer therapy due to the high dependency of the majority of cancer cells on constitutive and elevated telomerase activity for sustained proliferation and immortality. In this review, we present the salient findings in the telomerase field with special focus on the association of telomerase with inflammation and cancer. The elucidation of extra-telomeric roles of telomerase in inflammation, reactive oxygen species (ROS) generation, and cancer development further complicated the design of anti-telomerase therapy. Of note, the discovery of the unique mechanism that underlies reactivation of the dormant telomerase reverse transcriptase TERT promoter in somatic cells not only enhanced our understanding of the critical role of TERT in carcinogenesis but also opens up new intervention ideas that enable the differential targeting of cancer cells only. Despite significant effort invested in developing telomerase-targeted therapeutics, devising efficacious cancer-specific telomerase/TERT inhibitors remains an uphill task. The latest discoveries of the telomere-independent functionalities of telomerase in inflammation and cancer can help illuminate the path of developing specific anti-telomerase/TERT therapeutics against cancer cells. Keywords: 2,6-Diaminoanthraquinone(PubChem CID: 8557); 5-Fluorouracil(PubChem CID: 3385); BIBR1532(PubChem CID: 9927531); Cancer; Cisplatin(PubChem CID: 5702198); Doxorubicin(PubChem CID: 31703); Imetelstat(PubChem CID: 71587831); Inflammation; MST-312(PubChem CID: 10385095); N-acetyl-l-cysteine(PubChem CID: 12035); NF-κB; ROS; Telomerase; Therapeutics; epigallocatechin-3-gallate(PubChem CID: 65064). Copyright © 2020 Elsevier Ltd. All rights reserved.
https://pubmed.ncbi.nlm.nih.gov/32109579/
Telomerase is comprised of a protein and an RNA subunit. The protein is a reverse transcriptase enzyme called TERT, which uses the RNA subunit TERC as a template for the addition of nucleotides to the overhanging telomere ends of the chromosome on the lagging strand of DNA, allowing replication to finish and the cell to continue with division. Telomerase adds the repeating sequence TTAGGG to the end of the lagging strand. The size of the RNA subunit varies between species depending on the telomere sequence needed. For example, Tetrahymena thermophila has 159 nucleotide bases in its RNA subunit, whereas budding yeast has an RNA subunit of 1,167 nucleotides. In human somatic cells the telomerase enzyme is switched off. This means that with every generation of cell division the chromosome becomes progressively shorter as the telomeres are reduced on the lagging strand. Once the shortening starts eating into coding DNA the cell can no longer survive: genes are progressively damaged and stop expressing their proteins, causing the loss of cell functions and eventually cell death. This allows the body to control the length of cell life. In embryonic stem cells telomerase is activated, allowing them to avoid the end-replication problem associated with many rounds of division; however, it is inactivated during the process of differentiation. In around 90% of cancers telomerase is reactivated, meaning that cells can divide indefinitely as their telomeres are not eroded. Telomerase plays an important part in aging, and the prevention of aging, as the telomerase enzyme is also switched on in germ-line and stem cells, which allows them to divide continuously without any loss of DNA, so the cell’s life is longer.
https://teaching.ncl.ac.uk/bms/wiki/index.php?title=Telomerase&oldid=14179
Dyskeratosis congenita (DC) is an inherited multi-system disorder characterised by muco-cutaneous abnormalities, bone marrow failure and a predisposition to malignancy. Bone marrow failure is the principal cause of mortality and is thought to be the result of premature cell death in the haematopoietic compartment because DC cells age prematurely and tend to have short telomeres. DC is genetically heterogeneous and patients have mutations in genes that encode components of the telomerase complex (DKC1, TERC, TERT, NOP10 and NHP2), and telomere shelterin complex (TINF2), both important in telomere maintenance. Here, we transduced primary T lymphocytes and B lymphocyte lines established from patients with TERC and DKC1 mutations with wild type TERC-bearing lentiviral vectors. We found that transduction with exogenous TERC alone was capable of increasing telomerase activity in mutant T lymphocytes and B lymphocyte lines and improved the survival and thus overall growth of B-lymphocyte lines over a prolonged period, regardless of their disease mutation. Telomeres in TERC-treated lines were longer than in the untreated cultures. This is the first study of its kind in DC lymphocytes and the first to demonstrate that transduction with TERC alone can improve cell survival and telomere length without the need for exogenous TERT.

Item Type: Article
Research Areas: A. > School of Science and Technology > Natural Sciences; A. > School of Science and Technology > Natural Sciences > Molecular Biology group
ISI Impact: 8
Item ID: 3279
Depositing User: Dr Colin Casimir
Date Deposited: 02 Dec 2009 15:50
Last Modified: 13 Oct 2016 14:16
URI: https://eprints.mdx.ac.uk/id/eprint/3279
https://eprints.mdx.ac.uk/3279/
Healthcare is becoming both increasingly data-driven and automated. Drawing on a large-scale review of artificial intelligence developments in the field of mental health and wellbeing, Elizabeth Morrow, Teodor Zidaru-Bărbulescu and Rich Stockley find that opportunities for patients to influence and inform these future technologies are often lacking, which in turn may heighten disillusionment and lack of trust in them. As such, they propose four priorities for new data-driven technologies to ensure they are ethical, effective and equitable for diverse patient groups. As the pandemic has sadly made clear, health policymakers and practitioners often make rapid, complex, life and death decisions. It has also demonstrated the pivotal role of technology in aiding such decisions in infection control and vaccine development. As the world recovers, data-driven artificial intelligence technologies (AI technologies) are poised to transform health services and systems by using large amounts of patient data and machine learning to automate health screening, enable disease identification, employ real-time remote monitoring, deliver precision medicine, and personalise treatment. In April, the European Commission unveiled the world’s first legal framework for AI, which included a comprehensive proposal to regulate “high-risk” AI use cases. This leaves the UK with various options as to whether and how it introduces its own AI regulation, as part of a broader AI strategy in 2021. A key point for practitioners and patients is that although AI technologies may appear to be prodigiously accurate, they make generalisations based on likelihood not ‘truth’. An AI ‘guess’, no matter how well informed it is by immense datasets, algorithms, analytics, or models, cannot always be correct for everyone.
This highlights the value of patients being front and centre at every step of design and application of new technologies, so that they can alert professionals to times and circumstances when they feel decisions about the technology, or its use, are wrong or off the mark. Take for example the recent GP data-sharing debacle in the UK, where weak public engagement has led to a significant public backlash. While specific applications of AI stand to be of great benefit to patients, up to now very few AI technologies for health have emerged from an accountable, accessible, or collaborative process that involves patients or the public in a meaningful way. In a year-long review of the technology landscape, we focused on mental health as it is a public health priority and an area where AI technology is moving fast. New mental health diagnosis apps, mental health chatbots, wearable technologies that track health in real-time, virtual reality therapy for dementia patients, and self-monitoring systems to prevent episodes of severe mental health crisis, have been developed internationally and largely without regulation. We found that behind the closed doors of high-tech design companies, there are few opportunities for patients and the public to say, ‘I don’t think that is quite right’ or ‘could we try it another way’, going beyond end-user testing and customer feedback once tech is on the market. This deficit is likely to be felt acutely, as new technologies become available to the NHS and clinicians need to integrate intelligence from AI with their professional judgement and the experiences and preferences of the presenting patient.
Building trust with the public will ultimately be harder if care shifts to being more remote from already isolated patient groups. More invidiously, as the pandemic has revealed, basing new technologies on data that is skewed towards majority patient groups and assumptions about minority groups could make inequalities worse, as happened pre-pandemic in the well-documented case of Samaritan’s Radar. Opening up AI design processes to ensure ethical, inclusive, morally just health care requires a collaboration between technologists, practitioners, and patients, so that each understands the perspective of the other and appreciates the combined power of perspective sharing in delivering the best possible care for each individual patient. Of the 144 mental health articles we included in the review, we found only a small number of design projects for health technologies which advocated co-design methods, user involvement, or patient perspectives. Yet the growth of digital technologies in the field of mental health is vast. This void of an evidence base for practice is concerning and indicates that the growth of AI has considerably outpaced the development of inclusive approaches to enable its safe development and use. However, the proliferation of AI technologies and their high-profile application as part of social distancing measures has brought new opportunities for the public to contribute to what has essentially been innovation led by high-tech interests. This is particularly the case as these debates shift to focus on technologies developed outside of the regulatory boundaries of publicly funded health systems and their requirements for Patient and Public Involvement (PPI).
Based on our review we suggest the following four priorities to build in and promote design justice:

Public voice

The design of AI technologies for mental health should be reoriented from a focus on addressing a crisis of service demand, towards equipping diverse patients and healthy people with intelligence and healthcare services to empower them to improve their health and wellbeing. By making use of inclusion frameworks based on values of equality, diversity and inclusion (EDI), innovative AI technologies for mental health can be enhanced by democratic discourse and the voices of those directly affected by them. Strengthening the public voice requires agency, education, financial support, awareness, trust and assurance of people’s fundamental rights.

Individual’s diversity

Members of the public are likely to experience AI technologies through commercial applications, designed at a distance, such as mental health apps. Concerns about data protection and ownership often overshadow seeing the design process as an opportunity for companies to demonstrate community outreach and a commitment to social justice. Patient and public involvement has also been hindered by assumptions about the public’s willingness or ability to engage in technical debates. Reaffirming public engagement, no matter where tech is being developed, can only be achieved through the different channels for regulatory, governance, and public accountability, but it is vital to reframing the relationship between the diverse individual users and designers of AI technologies.

Participatory co-design

The design of AI technologies is an ethical and political issue, as much as a technical one. Equity issues cannot be resolved by extending user experience testing to the beginning of the pre-design process. To gain acceptance, the design process of AI technologies and the decisions made in relation to how they serve different groups in society should be open to scrutiny.
This can be supported by guidelines for design and best practice in AI-assisted care together with evidence from participatory co-design in healthcare and research. There are implications for professional training and education in AI, including interdisciplinary learning, talent pipelines and human capital development strategies that value public engagement.

Open knowledge development and exchange

Public open access to information and research evidence is important in relation to building shared knowledge about new technologies and developing public awareness about the potential benefits of their engagement, alert systems, and risk. Discussion about collective data ownership arrangements and the role of patient and public work in the production and interpretation of data need to be more accessible and inclusive, if they are to reflect the diversity of public interests and respond to changes in public opinions over time. These priorities affirm direct and ongoing public involvement as a central strategy for creating ethical AI assisted health care. It was the headline message of the 2019 State of the Nation Survey on accelerating AI in health and care – “Ground AI in problems as expressed by the users of the health system”. The NHS AI Lab is also encouraging design teams (£140M for AI Health and Care Awards) to consider inequalities in health outcomes, and we hope the new NHS AI Ethics Initiative will give special and focused attention to mental health as a priority area. Given that lack of transparency about patient data collection and use remains a major concern for UK mental health activists, and now the UK public, patient engagement in the production of AI technologies could help to build trust and understanding about what all this data is for. For patient advocates, now is the time to find ways to influence the ethical commissioning, design, regulation, selecting/purchasing, implementation, and evaluation of these powerful future technologies.
This post draws on the authors’ co-authored open access paper, Ensuring patient and public involvement in the transition to AI-assisted mental health care: A systematic scoping review and agenda for design justice, published in Health Expectations. The authors thank the London School of Economics for providing funding for open access publication. We are also grateful to NHS England and NHS Improvement for funding that made this research possible. Note: This review gives the views of the authors, and not the position of the LSE Impact Blog, or of the London School of Economics. Image Credit: National Cancer Institute via Unsplash.
https://blogs.lse.ac.uk/impactofsocialsciences/2021/06/18/4-priorities-to-reaffirm-patient-voice-in-the-coming-era-of-ai-healthcare/
AI Governance: Artificial Intelligence Model Governance

What is AI governance?

At the federal level (e.g., the United States government), artificial intelligence governance is the idea of having a framework in place to ensure machine learning technologies are researched and developed with the goal of making AI system adoption fair for the people. While we’ll be focusing on the corporate implementation of AI governance in this article, it is essential to understand AI governance from a regulatory perspective, as laws at the governmental level influence and shape corporate AI governance protocols. AI governance deals with issues such as the right to be informed and the violations that may occur when AI technology is misused. The need for AI governance is a direct result of the rise of artificial intelligence use across all industries. The healthcare, banking, transportation, business, and public safety sectors already rely heavily on artificial intelligence. The primary focus areas of AI governance are how it relates to justice, data quality, and autonomy. Navigating these areas requires a close look at which sectors are appropriate and inappropriate for artificial intelligence and what legal structures should be involved. AI governance addresses the control of and access to personal data and the role morals and ethics play when using artificial intelligence. Ultimately, AI governance determines how much of our daily lives can be shaped and influenced by AI algorithms and who is in control of monitoring it. In 2016, the Obama Administration announced the White House Future of Artificial Intelligence Initiative. It was a series of public engagement activities, policy analysis, and expert convenings led by the Office of Science and Technology Policy to examine the potential impact of artificial intelligence. The next five years or so represent a vital juncture in technical and policy advancement concerning the future of AI governance.
The decisions that government and the technical community make will steer the development and deployment of machine intelligence and have a distinct impact on how AI technology is created. In this article, we’re going to focus on AI governance from the corporate perspective and see where we are today with the latest AI governance frameworks.

Why Do We Need AI Governance?

To fully understand why we need AI governance, you must understand the AI lifecycle. The AI lifecycle includes roles performed by people with different specialized skills that, when combined, produce an AI service. Each role contributes uniquely, using different tools. From origination to deployment, there will generally be four different roles involved.

Business Owner

The process starts with a business owner who defines a business goal and requirements to meet the goal. Their request will include the purpose of the AI model or service, how to measure its success, and other constraints such as bias thresholds, appropriate datasets, and levels of explainability and robustness required.

The Data Scientist

Working closely with data engineers, the data scientist takes the business owner’s requirements and uses data to train AI models to meet the requirements. The data scientist, an expert in computer science, will construct a model using a machine learning process. The process includes selecting and transforming the dataset, discovering the best machine learning algorithm, tuning the algorithm parameters, etc.
The data scientist’s goal is to produce a model that best satisfies the business owner’s requirements. Model Validator The model validator is an independent third party. This role falls within the scope of model risk management and is similar to a testing role in traditional software development. A person or company in this role will apply a different dataset to the model and independently measure the metrics defined by the business owner. If the validator approves the model, it can be deployed. AI Operations Engineer The AI operations engineer is responsible for deploying and monitoring the model in production to ensure it operates as designed. This may include monitoring the performance metrics defined by the owner. If some metrics are not meeting expectations, the operations engineer is responsible for informing the appropriate roles. With so many roles involved in the AI lifecycle, we need AI governance to protect the companies using AI solutions in emerging technologies and to protect the consumers using AI technologies across the entire global community. Who is Responsible for Ensuring AI is Used Ethically? With the number of roles involved in the AI lifecycle, a question arises: who should be responsible for AI governance? First, CEOs and senior leadership in corporate institutions are ultimately responsible for ethical AI governance. Second in line comes the board of the organization, which is responsible for audits. The general counsel should have responsibility for legal and risk aspects. The CFO should be aware of the cost and financial risk elements. The chief data officer (CDO) should take responsibility for maintaining and coordinating the ongoing evolution of the organization’s AI governance. With data critical to all business functions, customer engagement, products, and supply chains, every leader needs to be knowledgeable about artificial intelligence governance.
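The validator’s role described above, independently checking a model’s measured metrics against the thresholds the business owner defined, can be sketched in a few lines. This is an illustrative sketch, not any vendor’s implementation; the metric names and threshold values are hypothetical.

```python
# Sketch of the model-validator role: compare measured metrics against the
# business owner's requirements and produce an approval decision.

def validate_model(metrics: dict, requirements: dict) -> dict:
    """`requirements` maps a metric name to (minimum, maximum) bounds;
    a bound of None means "no constraint on that side"."""
    report = {}
    for name, (lo, hi) in requirements.items():
        value = metrics.get(name)
        if value is None:
            report[name] = "missing"  # metric was never measured
        elif lo is not None and value < lo:
            report[name] = f"below minimum ({value} < {lo})"
        elif hi is not None and value > hi:
            report[name] = f"above maximum ({value} > {hi})"
        else:
            report[name] = "ok"
    # Approve only if every required metric passed its bounds.
    report["approved"] = all(v == "ok" for k, v in report.items() if k != "approved")
    return report

# Example: the owner requires at least 90% accuracy and a bias gap under 5%.
result = validate_model(
    metrics={"accuracy": 0.93, "bias_gap": 0.08},
    requirements={"accuracy": (0.90, None), "bias_gap": (None, 0.05)},
)
```

Here the model meets its accuracy target but fails the bias bound, so the validator would reject it and send it back to the data scientist rather than approve deployment.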
Without clear responsibilities, no one is accountable. What are the 4 Key Principles of Responsible AI? Because AI governance is about protecting both organizations and the customers they serve, defining the key ethical principles of responsible AI is a helpful way to guide policy. In addition, because as a society we’re still in the early stages of AI development and new AI projects emerge daily, we must learn from past mistakes and adjust accordingly. Sometimes machine learning algorithms produce unintended results, as seen in these high-profile cases. Microsoft Tay After its launch, Microsoft’s Tay Twitterbot quickly gained 50,000 followers and created more than 100,000 tweets. But after only 24 hours of machine learning, Tay turned into a PR nightmare with some of its offensive tweets and had to be taken offline. COMPAS Recidivism Algorithm COMPAS is software commonly used in the US to guide criminal sentencing. It was exposed by ProPublica to be racially biased: black defendants were nearly twice as likely as white defendants to be misclassified as high risk. Apple Card Bias Tech magnates David Heinemeier Hansson and Steve Wozniak called out Apple for discrimination when their spouses were offered substantially lower credit limits despite having a shared banking and tax history. Facebook Campaign Ads Facebook sparked a public outcry that charged the company with putting profits before people and democracy after the company refused to police political ads on its AI-driven network. These are just a few examples of many. None of these companies set out with bad intentions, but these cases show the importance of AI governance and active monitoring. The Four Key Principles of Responsible AI Have Empathy In the Microsoft example, it was Tay’s lack of empathy that caused the issue. The bot was not engineered to understand the societal implications of how it was responding.
There were no guardrails in place to define the boundaries of what was acceptable and what might be hurtful to the audience interacting with the bot. The natural language processing error led to a big headache for the company. Control Bias AI algorithms make all decisions based on the data at their disposal. In the case of COMPAS, although the developers had no intention of creating a racist AI, the bias it uncovered was a reflection of the bias that exists in the real-world justice and sentencing system. Companies need to regulate machine learning training data and evaluate its impact to catch bias that might have been unintentionally introduced. Provide Transparency With negative publicity, it can be a challenge to convince consumers that AI is being applied responsibly. The Apple Card issue really wasn’t that Apple’s decision-making was biased; it was that Apple customer service was unsure how to answer the customers’ concerns. Companies must be proactive about certifying their algorithms, clearly communicating their policies on bias, and providing a clear and transparent explanation of the problem when it occurs. Establish Accountability Facebook took a lot of heat for its refusal to hold itself accountable for the quality and accuracy of the information being shown in its ads. Regulation around technology issues is always a few years behind the problem, so regulatory compliance isn’t enough. Companies must proactively establish and hold themselves to high standards to balance the great power AI brings. How Should AI Governance be Measured? You can’t manage what you don’t measure, and failing to measure AI models properly puts organizations at risk. So which measures are important? To answer that, an organization must be clear on its definition of AI governance, who in the organization is accountable, and what their responsibilities are.
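One concrete bias measure, relevant to the COMPAS case discussed above, is comparing false-positive rates across demographic groups. The sketch below is a minimal, self-contained illustration; the group labels and toy records are invented, not real data, and a production check would use far larger samples and additional fairness metrics.

```python
# Sketch of a group-level bias check: compare false-positive rates
# (predicted high risk, but did not re-offend) across groups.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, actually_reoffended) pairs."""
    negatives = [pred for pred, actual in records if not actual]
    return sum(negatives) / len(negatives) if negatives else 0.0

def fpr_disparity(by_group):
    """Return per-group rates and the ratio of highest to lowest rate."""
    rates = {g: false_positive_rate(r) for g, r in by_group.items()}
    return rates, max(rates.values()) / max(min(rates.values()), 1e-9)

# Toy data: group_a is wrongly flagged high risk twice as often as group_b.
groups = {
    "group_a": [(True, False), (True, False), (False, False), (False, True)],
    "group_b": [(True, False), (False, False), (False, False), (False, True)],
}
rates, ratio = fpr_disparity(groups)
```

A disparity ratio well above 1.0 is exactly the kind of signal a governance KPI should surface for review before a model reaches, or remains in, production.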
Many measures or metrics for AI governance will be standardized for all organizations through government regulations and market forces. Organizations also need to consider other measures that will support their strategic direction and how the company operates on a daily basis. Some essential facts and data-driven KPIs organizations should consider include: Data Measures of the lineage, provenance, and quality of the data. Security Data feeds around model security and usage. Understanding tampering or improper usage of AI environments is critical. Cost/Value Define and measure KPIs for the cost of data and the value created by the data and algorithm. Bias KPIs that can show selection bias or measurement bias are a must. Organizations need to monitor bias continuously through direct or derived data. It will also be possible to create KPIs that measure information on ethics. Accountability Get clarity on individual responsibilities: who used the system, when, and for what decisions. Audit The continuous collection of data could form the basis for audit trails and allow third parties, or the software itself, to perform continuous audits. Time Measurements of time should be part of all KPIs, allowing for a better understanding of the model over specific time periods. These are just some of the KPIs for organizations to consider. The sooner measurements are in place, the better they can evolve for a particular organization’s environment and goals and be incorporated into software. AI governance should be, and will likely become, a mandatory part of all AI strategy and machine learning environments. What are the Different Levels of AI Governance? Level Zero: No AI Governance At level zero, each AI development team uses its own tools, and there are no documented centralized policies for AI development or deployment. This approach can provide a lot of flexibility and is common for organizations just getting started with AI.
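The accountability and audit KPIs above reduce to recording who did what to which model, and when. A minimal sketch of such an audit trail follows; the field names and example actions are illustrative, not a specific product’s schema.

```python
# Sketch of an accountability/audit trail: every action on a model is
# recorded with actor, action, model, and a UTC timestamp, so audits can
# reconstruct decisions over any time period.
from dataclasses import dataclass, field
import datetime

@dataclass
class AuditEvent:
    actor: str       # who used the system (accountability KPI)
    action: str      # e.g. "trained", "validated", "deployed"
    model_id: str
    details: dict = field(default_factory=dict)
    timestamp: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc)
    )

class AuditTrail:
    def __init__(self):
        self._events = []

    def record(self, event: AuditEvent):
        self._events.append(event)

    def by_actor(self, actor: str):
        """Audit question: which actions did this person take, and when?"""
        return [e for e in self._events if e.actor == actor]

trail = AuditTrail()
trail.record(AuditEvent("alice", "validated", "credit-model-v2"))
trail.record(AuditEvent("bob", "deployed", "credit-model-v2"))
```

Continuously collected records like these are what make the third-party or automated audits mentioned above possible in the first place.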
However, it comes with potential risks once the models are deployed into production. Because there is no framework, it’s impossible to evaluate risk. Companies working at level zero have a difficult time scaling AI practices: hiring more data scientists does not lead to a ten-fold increase in AI productivity because of too many inconsistencies. Level One: AI Governance Policies Available Many organizations have already established some level of AI governance but have not developed a fully mature AI governance framework. Most companies are around level two or three, and with a little help, could have a fully mature AI governance system that’s fully automated, saving their organization a substantial amount of resources. Level Two: Create a Common Set of Metrics for AI Governance This level builds upon level one by defining a standard set of acceptable metrics and monitoring tools to evaluate models. This brings consistency among all AI teams and enables metrics to be compared across different development lifecycles. A common monitoring framework is introduced that allows everyone in the organization to interpret the metrics the same way. This reduces risk and improves transparency, making it easier to set policy or troubleshoot reliability if issues arise. Companies operating at level two usually have a central model validation team upholding the policies laid out by the enterprise during the validation process. Level two is where organizations start to see productivity gains. Level Three: Enterprise Data & AI Catalog Level three leverages the metadata from level two to ensure all assets in a model’s lifecycle are available in an enterprise catalog with data quality insights and provenance. With a single data and AI catalog, the enterprise can trace the full lineage of data, models, lifecycle metrics, code pipelines, and more. Level three also lays the foundation for making connections between the numerous versions of models to enable a full audit.
It also provides a single view to a CDO/CRO for a comprehensive AI risk assessment. Organizations at this level are able to clearly articulate risks related to AI and have a comprehensive view of the success of their AI strategy. Level Four: Automated Validation and Monitoring Level four introduces automation into the process to automatically capture information from the AI lifecycle. This significantly reduces the burden on data scientists and other role players, freeing them from manually documenting their actions, measurements, and decisions. It also enables model validation teams to make decisions on an AI model and to leverage AI-based suggestions. At this level, an enterprise can significantly reduce the operational effort of documenting data and model lifecycles, and it removes the risk of metrics, metadata, or versions of data and models being omitted by mistake along the lifecycle. Companies at level four start to see an exponential increase in productivity as they’re able to consistently and quickly put AI models into production. Level Five: Fully Automated AI Governance Level five builds on the automation from level four to automatically enforce enterprise-wide policies on AI models. This framework now ensures that enterprise policies will be enforced consistently throughout every model’s entire lifecycle. At this level, an organization’s AI documentation is produced automatically with the right level of transparency through the organization for regulators and customers. This level enables the team to prioritize the riskiest areas for more manual intervention. Companies here can be highly efficient in their AI strategy while maintaining confidence in their risk exposure. Why You Should Care About AI Model Governance? As demonstrated by the case examples in this article, many times AI models are simply not making the right decisions.
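One concrete way automated monitoring can catch a model that is no longer making the right decisions is a distribution drift check. The sketch below uses the Population Stability Index (PSI), a common rule-of-thumb metric chosen here for illustration, not one this article prescribes; the bucket fractions and the 0.2 alert threshold are likewise illustrative.

```python
# Sketch of drift monitoring: the Population Stability Index compares the
# model's score distribution at training time with what it sees in production.
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """PSI = sum over buckets of (actual - expected) * ln(actual / expected)."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

# Fraction of model scores falling in each bucket: training baseline vs. today.
training   = [0.25, 0.25, 0.25, 0.25]
production = [0.10, 0.20, 0.30, 0.40]

drift = psi(training, production)
needs_review = drift > 0.2  # common rule of thumb: >0.2 means significant drift
```

When the alert fires, the operations engineer informs the appropriate roles, exactly the escalation path the lifecycle description earlier in this article lays out.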
Even if a model is trained correctly, over time it will experience drift – it will change. Because of this inevitable drift, you need a way to monitor and capture that change so you can see what changed and adjust the model. Although most model governance today focuses on model risk management around compliance, there is an emerging trend toward social responsibility. Even when the issues do not violate any government regulation, the financial fallout from poorly behaving models generates bad press and significantly impacts organizational bottom lines. If these companies had been using an advanced production risk management governance tool like Datatron, they could have avoided costly AI errors, public embarrassment, and financial loss. What To Look For In An AI Governance Solution? The best AI model governance solutions focus on simplifying the entire process of monitoring and compliance assurance while not causing significant disruptions to existing workflows. They allow key stakeholders in the organization to have visibility into what the models are doing at all times. The first line of defense starts with the data scientist, whose job is to ensure model validation using some kind of explainability tool; this is necessary from a development perspective but not sufficient for actual production validity. The second line of defense helps with the production aspect by monitoring elements while the model is working. The third and most sophisticated line of defense has the capability to provide a detailed audit report that shows what the models are doing using only the output data. Model Governance Checklist When choosing an AI governance solution, choose a platform that goes beyond just compliance control. All enterprise-level businesses need to be confident about their model inventory, model development, and model management practices. Make sure your AI monitoring and governance solution offers the following:
https://datatron.com/ai-governance/
Are we ready for judicial AI? Introduction The number and variety of tools and services leveraging Artificial Intelligence (AI) capabilities in everyday life has accelerated continuously in recent years. AI is present in many industries; in this article, however, I focus my evaluation on the advantages and disadvantages of applying AI to juridical decision-making from an ethical perspective. The decisions made by a court are supported by legal reasoning and previous trial outcomes, acknowledging the transformation of social ethics. While humans consider emotional aspects when making decisions, AI relies exclusively on data and a predefined algorithm; hence the opinion of a ‘Judicial AI’ (Sourdin, 2018) is less biased. While these systems might perform better than humans, it is fundamental to implement explainable models, as they have a tremendous impact on people’s lives. AI in the court How could AI support judges? The information shared and discussed at the court is recorded and documented, and the judge analyzes the collected materials together with learnings from previously solved cases before sentencing. In a criminal case, the amount of punishment is affected by a risk assessment of how likely the individual is to re-offend (McKay, 2020). Although it seems the decision is based on a thorough analysis of the determined facts, humans are prejudiced against several demographics. Judges observe the motives and the cultural background of offenders, which carry weight in the sentence. AI, however, depends on the data provided during training and the algorithm that is implemented. It is capable of mimicking human behavior and senses; nonetheless, these solutions are not meant to replace humans with machines but rather to support judges with recommendations.
For example, in Mexico, an AI system called Expertius provides advice to judges to support their decision on whether an individual can be granted a pension (Carneiro et al., 2014). For a human, analysis of written documents is time-consuming and requires a tremendous number of manual activities. When looking for a particular piece of information across several files spread over numerous folders, a good overview of their content is required. Recent innovations in Natural Language Processing (NLP) enable effortless evaluation of documents; NLP solutions can identify key phrases, detect language, extract entities, or return the sentiment of a text. Moreover, NLP solutions can be trained to read forms and generate tabular data from them (Aletras et al., 2016). While understanding the advantages of AI, organizations from around the globe are facing ethical issues, some with the potential to cause severe harm, whilst using and implementing AI solutions, “regarding algorithms as proprietary products, potentially with in-built statistical bias as well as the diminution of judicial human evaluation in favor of the machine (Angwin et al., 2016)”. Organizations interviewed for a Capgemini report observed “reasons including the pressure to urgently implement AI, the failure to consider ethics when constructing AI systems, and a lack of resources dedicated to ethical AI systems (Moore, 2019)”. Ethical considerations To gain accurate knowledge about ethical AI, I collected several resources on the topic; consequently, I do not rely exclusively on my background with Microsoft’s Responsible AI Principles as a Microsoft AI Most Valuable Professional (MVP). Scientific journals discussing the potential biases and moral issues of AI systems have existed since the late twentieth century.
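As a toy illustration of the key-phrase idea mentioned above, the sketch below counts the most frequent terms in a short legal snippet using only the standard library. Production systems of the kind the cited work describes use trained NLP models; the ad hoc stop-word list and the example sentence here are invented for illustration.

```python
# Sketch of naive key-term extraction from a legal text: tokenize, drop
# common stop words, and return the most frequent remaining terms.
import re
from collections import Counter

STOP = {"the", "of", "and", "to", "a", "in", "is", "that", "be", "by",
        "was", "on"}

def key_terms(text, n=3):
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOP).most_common(n)

doc = ("The defendant appealed the sentence. The sentence was based on a "
       "risk assessment, and the risk assessment score was disputed.")
terms = key_terms(doc)
```

Even this crude frequency count surfaces the topical terms of the snippet, which hints at why statistical NLP scales document review far beyond what manual reading allows.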
In Australia, a report was written (Dawson et al., 2019) to discuss the needs and values of ethical principles together with supporting best practices and use cases, “to ensure that existing laws and ethical principles can be applied in the context of new AI technologies”. Eight principles are identified in the paper: ‘Generates net-benefits’, ‘Do no harm’, ‘Regulatory and legal compliance’, ‘Privacy protection’, ‘Fairness’, ‘Transparency & Explainability’, ‘Contestability’, and ‘Accountability’. According to the response from the Society on Social Implications of Technology (SSIT) Australia to the discussion paper, there was a serious need to apply ethics, especially to AI, since ethical frameworks were underdeveloped. SSIT also raised concerns about the principles, such as “human values are only obliquely referenced in the Core Principles (Adamson et al., 2019)”. Microsoft’s Responsible AI Principles are similarly identified in the discussion paper; they are defined as cornerstones that put people first, meaning that engineers are working to ensure that AI develops in a way that benefits society while warranting people’s trust (Demarco, n.d.). Privacy and security The data includes personal information, such as details about judges’ previous sentences, various arguments, and legal documents meant to support a particular case. This data must be protected by complying with privacy laws that require transparency about its ingestion, usage, and storage, and about how consumers would use the data. Fairness Before using the data for model training, it is fundamental to examine the diversity of, and biases in, the data. Eckhouse et al. (2019) discuss the importance of embedded bias in statistical algorithms for risk assessment, and whether a system predicts similarly racially biased judgments when the input data is derived from a prejudicial criminal justice system.
Algorithms such as STATIC-99R “do not differentiate between the severity of offenses that might be committed (McKay, 2020)”, meaning the algorithm’s predictions are not affected by the severity of the offense. Reliability and safety (Contestability) A ‘Judicial AI’ should make the same decisions on unknown information as on scenarios experienced during training; nevertheless, humans must have the final authority to sentence. Transparency (and Explainability) Although the goal is to achieve high accuracy, another essential principle is to ensure that users can trust and accept AI solutions. Domain experts provide insight into how legal reasoning functions and how judicial decisions are made. Additionally, they can identify potential performance issues, biases, exclusionary practices, or unintended outcomes. Another concern when discussing transparency is that “the actual algorithm, its inputs or processes may be protected trade secrets so that individuals impacted by the algorithmic assessment cannot critique or understand the determination (Carlson, 2017)”. Conclusion While AI solutions can make judicial decisions quickly, judges will not fully trust such sentences until they understand how the results are calculated (Zerilli, 2020). The ethical principles discussed above could allow developers and end-users to understand the algorithms, protect the processed data, and control the behavior of a ‘Judicial AI’ (McKay, 2020). References Adamson, G., Broman, M. M., Jacquet, A., Rigby, M., & Wigan, M. (2019). Society on Social Implications of Technology (SSIT) Australia response to the Discussion Paper on Artificial Intelligence: Australia’s Ethics Framework. https://ethicsinaction.ieee.org/ Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D., & Lampos, V. (2016). Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective. PeerJ Computer Science, 2016(10).
https://doi.org/10.7717/peerj-cs.93 Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing Carlson, A. M. (2017). The need for transparency in the age of predictive sentencing algorithms. http://www.wsj.com/articles/wisconsin Carneiro, D., Novais, P., Andrade, F., Zeleznikow, J., & Neves, J. (2014). Online dispute resolution: An artificial intelligence perspective. Artificial Intelligence Review, 41(2), 211–240. https://doi.org/10.1007/s10462-011-9305-z Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., Scowcroft, J., & Hajkowicz, S. (2019). Artificial Intelligence: Australia’s Ethics Framework. https://consult.industry.gov.au/ Demarco, J. (n.d.). We need rules of the road for responsible AI and data science. Eckhouse, L., Lum, K., Conti-Cook, C., & Ciccolini, J. (2019). Layers of bias: A unified approach for understanding problems with risk assessment. Criminal Justice and Behavior, 46(2), 185–209. https://doi.org/10.1177/0093854818811379 McKay, C. (2020). Predicting risk in criminal procedure: Actuarial tools, algorithms, AI and judicial decision-making. Current Issues in Criminal Justice, 32(1), 22–39. https://doi.org/10.1080/10345329.2019.1658694 Moore, M. (2019). Press contact. www.capgemini.com. Sourdin, T. (2018). Judge v robot? Artificial intelligence and judicial decision-making. UNSW Law Journal. Zerilli, J. (2020). Algorithmic sentencing: Drawing lessons from human factors research.
https://codewitheve.azurewebsites.net/are-we-ready-for-judicial-ai/
Belgrade, Serbia – An international workshop held here recently brought together various stakeholders, including public sector organizations, civil society organizations, the media industry, and academia. Under the auspices of the Ministry of Culture and Media of Serbia, the workshop/training on “Artificial Intelligence (AI) and its implications for Media and Information Literacy (MIL) and Freedom of Expression (FoE)” was held on January 22-24 earlier this year. Having decided to focus on the IT sector as one of the most significant pillars of economic growth and smart living, the Government of the Republic of Serbia organized the workshop as an opportunity for media actors to delve more deeply into the challenging digital framework and study its impact on our ability to participate responsibly in the media environment. Freedom of expression, a basic precondition of any democracy, has increasingly been put to the test by emerging technologies, including Artificial Intelligence (AI). Governments worldwide are catalyzing efforts to respond to AI challenges in the media ecosystem, such as the determination of content and access to information through content personalization and customization, monitoring, and targeting. While content creation and moderation tools, facial and speech recognition, smart search systems, and editorial curating assistants can make media professionals’ daily routines easier, they can nevertheless raise various human rights issues, including freedom of expression. Against this backdrop, on the regulatory level, dialogue through international organizations such as the UN, UNESCO, the OSCE, and the Council of Europe (CoE) is stepping up in order to find a common denominator for the dramatic changes that AI has brought to the media environment.
The government’s commitment to regulating AI has been demonstrated recently by the adoption of an AI strategy at the national level. In addition, the Ministry of Culture and Media, in cooperation with international organizations, continuously monitors the directions of information and media development, and at the same time recognizes the importance of strengthening the capacities of all actors in the media and information field in order to keep up with international trends and take timely action. Even a simple internet search is shaped by algorithms, and personalizing content using artificial intelligence greatly affects a person’s right to be informed, often weakening the pluralism of ideas that one can be exposed to. It is necessary for us to better understand this area, and in particular its impact on freedom of expression. Prior to arranging this workshop, a needs assessment exercise was performed and a targeted program created by a panel of seven reputed experts, led by Professor I. Kushchu, the CEO of TheNextMinds. The workshop offered practical and theoretical insights into the fundamentals of AI and machine learning, and participants also acquired skills in understanding the interactions of AI and human rights in general, and freedom of expression in particular. They also developed skills to deal with issues related to the regulation of emerging technologies. The workshop helped participants identify how AI tools and techniques may influence the media and information landscape now and in the future, and how they can prepare for it for better MIL education. More specifically, the following topics were covered: • Nontechnical coverage of AI foundations and machine learning, together with data science, IoT, the cloud, and automation.
• Common AI practices influencing education, human rights, and ethics
* Human-centric AI
* Content generation and content dissemination
* Content monitoring, evaluation, and manipulation
* Recommender systems
* Personalization and customization
• How AI and emerging technologies are influencing MIL, HR, FoE, and ethics
* Evidence from MIL: how an AI boost to MIL will practically empower citizens, and how AI can support MIL
* Evidence from the ethics of developing and using responsible AI systems
* Evidence from HR and FoE: AI disruption, human rights, and freedom of expression
* Approaches to AI regulation: a comparison of the USA, China, EU countries, and others; FoE protection in an artificial intelligence environment: legislative, regulatory, and policy responses
Impact: This capacity-building exercise could set an inspiring example for other public sector organizations in Serbia and internationally, triggering more AI-related activities in other parts of the government. It can also instill confidence in the public sector in working with technology and AI experts, in explaining needs and requirements, and in evaluating suggested solutions. This could lead to better combining of activities and implementations relevant to AI and to more skillful and informed decisions. The Next Steps The experience of this significant workshop may lead to various important initiatives to promote and monitor the development of AI in Serbia in such a way that AI is human-centric and helps empower members of society through sensitivity to MIL, ethics, and FoE, and so that AI technologies follow and fit EU recommendations, especially in terms of societal needs and regulations. In order to pursue such initiatives, the Ministry hinted that it would take a lead role in partnering with various relevant organizations and build upon the knowledge and experience of some of the participants who took part in this workshop.
https://www.america-times.com/serbia-showcases-ai-media-information-field/
In this era of the Fourth Industrial Revolution, there is much talk about artificial intelligence (AI). Equally widespread is concern about AI, its role in society, and the implications of society using AI. In an interview with Korea Biomedical Review, Jarom Britton, regional attorney in Microsoft’s Health, Education and Public Sector in Asia, says that Microsoft doesn’t have all the answers on these subjects but can make some suggestions from a broader industry perspective. “We are seeing governments taking an interest in developing policies that regulate AI, but at the same time it is such a new technology that we’re not always sure on how to approach it,” Britton said. “I think before we start regulating and introducing laws it is essential that we take a step back and understand the values that are implicated through AI that we need to protect and the principles that we as a society can agree upon.” Such a discussion needs to include industry officials, governments, IT officials, academia, philosophers, economists, and professions that the IT industry has not traditionally drawn on, he added. As of now, Microsoft has come up with six ethical principles for the development of artificial intelligence (AI) useful to humans -- fairness, reliability/safety, privacy, inclusiveness, transparency, and accountability. Britton stressed that Microsoft believes these six principles can help AI and humans collaborate, as the success of healthcare AI platforms will hinge on the outcome of those collaborations. |Jarom Britton, regional attorney in the Health, Education and Public Sector in Asia at Microsoft, explains Microsoft’s six principles in developing AI and other questions related to AI development, during an interview with Korea Biomedical Review at Severance Hospital, Sinchon-dong, Seoul, on Thursday.| Question: Will you explain who you are and what you do at Microsoft? Answer: My role is a relatively new position at Microsoft.
Just over a year ago, Microsoft realigned its sales team to focus by industry, and the legal department in Asia decided that it needed some specialization in the industry market as well. I take care of the health, education and government sectors. My role is not a typical lawyer’s role. I spend a lot of time meeting with customers, healthcare organizations and government agencies to help them understand how they could move to Microsoft’s cloud service or other technologies and use them in line with regulatory requirements. I also take feedback from customers, send it back to the company, and recommend changes at Microsoft. Q: Could you tell us about the “six principles” Microsoft has in developing technology for humans? A: Microsoft has six principles that it has discussed both internally and externally with experts. The six principles are fairness, reliability/safety, privacy, inclusiveness, transparency, and accountability. These six principles are our initial thoughts on what we believe is relevant to the development of AI. Underlying all these principles is the concept of putting humans at the center of AI development. Q: Please explain each of them in detail. A: Regarding fairness, which resonates well within the healthcare space, the principle is about eliminating bias such as false positives and false negatives, and the error rate in clinical studies and diagnosis. AI is only as good as the data it is trained on, and if the data is biased or incomplete, it can result in errors in the outcome. The second point, reliability/safety, revolves around how the AI can interpret a situation correctly. That means making sure the AI is adequately trained, but also making sure that developers are monitoring its development on an ongoing basis so they can see how it’s performing and measure that performance in the real world. At the same time, it is about recognizing that AI is not always going to perform flawlessly.
In such cases, developers need to put a human back in charge as soon as possible and provide them with information on what is going wrong so that they can get the AI back on track.

For privacy and security, the two need to be a significant concern because in AI, the more data we use, the more useful the platforms are. Therefore, developers need to have a firm grasp on what they are doing with the data and how they are controlling it. Some people feel that they need to give up privacy to be able to get services from AI technology. I don’t think that necessarily has to be the case, but I do believe that there need to be control towers that can make sure the fundamental principles of privacy are protected. At Microsoft, we established an AETHER (AI and ethics in engineering and research) committee that consists of teams from various departments and monitors AI projects on an ongoing basis to make sure that each project is upholding the ethical values we have identified as being important. The committee also has the power to stop any plans that violate such values or to recommend changes that bring them back in line.

For inclusiveness, any new technology can be inclusive or exclusive. There are people in society who have historically been marginalized or have not been able to participate as fully in the economy, society or community as they would like or as others would want. Therefore, whenever developers build an AI solution, it is essential that they keep accessibility in mind so that their work empowers people rather than disempowering or excluding them.

Transparency is the significant value that ties together the four previously mentioned values. One of the criticisms of AI is that we don’t know how it works. It arrives at an outcome, but we don’t see how it got there, as it is all code. We need to get better as an industry, including Microsoft, at explaining how such processes work.
Regarding accountability, when something goes wrong with an AI system, it is not acceptable for the developer to deny any responsibility. From a legal standpoint, we need to discuss who is liable for mistakes that the AI has made. I’m not going to suggest a right or wrong answer for that, except that a human needs to be accountable, as we cannot throw the AI in jail. Therefore, we need to develop a legal system that accommodates such aspects.

Q: Some fear that AI will take over jobs. Is there a solution to such concerns?

A: It is certain that AI is going to have implications for society. People will be displaced in the economy, as happens with any new technology. With the development of new technology, jobs such as the potter’s have become obsolete, and that is not a bad thing, as the technology has advanced and society has benefited. However, it is important to think about the people left behind. This is where we require some thought around our social safety and security systems. Society needs to think about which skills we need to retrain the people left behind in, and how we ensure these people do not fall out of the economy and stay out. The government should think of ways to solve such problems, but I don’t think the government should come up with all the answers on its own. The question needs to be addressed as a joint effort between the government, businesses and the people affected by the change. I think there are things that we can do as a society to make sure that such problems do not happen and AI ends up empowering us rather than having power over us.

Q: You mentioned that AI is not a miracle cure. What are the limitations of AI?

A: A good example would be that AI works by probability. It will recognize a pattern, but as of now, I don’t think it is possible for the AI to take in every single variable.
We can say through gene analysis that a patient has a higher likelihood of developing cancer, but this does not mean that they will eventually develop cancer. This is because other variables that aren’t included in one AI test can influence the results. So no matter how much data we put into the AI, it is never going to be able to predict with 100 percent certainty. Such limitations are why we need humans to step in and be able to say that, although there is a high likelihood, the patient does not have the disease because of other, counter-indicative variables, or to request additional tests to confirm the results. Right now, some biases have led some groups to believe that technology has all the answers, but as AI works on probability, we need a human to step in and confirm the results.

Q: Are there any other comments that you would like to make to Korean AI developers or doctors who are interested in AI?

A: Microsoft is keen on working with AI developers and doctors. The company does not plan to develop an AI solution that is going to take over the entire healthcare industry. Our model is to provide the tools to such developers so that they can develop a solution that the patient or industry needs. Our question to Korean AI developers and doctors is: what would you like to do in Korea or export from Korea, and what can Microsoft bring to the table to help the process?
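Britton’s point about unseen variables can be illustrated with a toy calculation. All of the numbers below are invented purely for illustration: a model that conditions only on one genetic marker reports a much higher risk than the true rate for patients who also carry a protective variant the model never sees.

```python
# Toy illustration of why probabilistic AI needs a human in the loop.
# All numbers are invented for illustration only.
# Each patient record is (has_marker, has_protective_variant, developed_disease).
cohort = (
    [(1, 0, 1)] * 80 + [(1, 0, 0)] * 20 +   # marker-positive, no protective variant
    [(1, 1, 1)] * 10 + [(1, 1, 0)] * 90     # marker-positive WITH protective variant
)

def disease_rate(patients):
    """Fraction of patients who actually developed the disease."""
    return sum(d for _, _, d in patients) / len(patients)

# A model that only sees the marker estimates a 45% risk for everyone...
marker_only_risk = disease_rate(cohort)

# ...but for patients who also carry the unmodeled protective variant,
# the true rate is only 10% -- a clinician aware of that variable would
# rightly override the estimate or order further tests.
protective = [p for p in cohort if p[1] == 1]
protective_risk = disease_rate(protective)

print(marker_only_risk, protective_risk)
```

However much data is added, any variable left out of the model can move the true probability, which is exactly why the human confirmation step Britton describes remains necessary.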
http://www.koreabiomed.com/news/articleView.html?idxno=4848
April 28th 2021 is World Day for Safety and Health at Work, which began in 2003 and has previously been associated with promoting occupational safety for employees working in environments where they face physical endangerment and risk exposure to harmful substances. This year the International Labour Organization (ILO), which organizes the day, has released a report assessing the impact of the pandemic on the workforce and examining how countries can implement resilient occupational safety and health systems that minimize risks for workers in the event of future crises. In addition to assessing how to protect workers from viral transmission, the report covers the rising mental health risks facing workers in digital environments as a result of the pandemic and states, “while teleworking has been essential in limiting the spread of the virus, maintaining jobs and business continuity and giving workers increased flexibility, it has also blurred the lines between work and private life. Sixty-five per cent of enterprises surveyed by the ILO and the G20 OSH Network reported that worker morale has been difficult to sustain while teleworking.”

How does occupational safety and health relate to online content moderation? Without an army of international content moderators protecting the digital platforms that we have depended on throughout the pandemic for work, groceries, human connection, entertainment and learning, many of these platforms would quickly become too toxic to use. The ILO Flagship Report, ‘World Employment and Social Outlook: the role of digital labour platforms in transforming the world of work’, published in February 2021, drew on research from 12,000 workers around the world and examined the working conditions of digital platform workers in the taxi, food delivery, microtask, and content moderation sectors.
The ILO found that there is a growing demand for data labelling and content moderation to enable organizations to meet their corporate social responsibility requirements. Page 121 of the report states, “Some of the companies offering IT-enabled services, such as Accenture, Genpact and Cognizant, have diversified and entered into the content moderation business, hiring university graduates to perform these tasks (Mendonca and Christopher 2018).”

“A number of “big tech” companies, such as Facebook, Google and Microsoft, have also started outsourcing content review and moderation, data annotation, image tagging, object labelling and other tasks to BPO companies. Some new BPO companies, such as FS and CO, India, stated in the ILO interviews that content moderation not only provides a business opportunity but also allows them to perform a very important task for society as they “act as a firewall or gatekeeper or a watchdog for the internet.”

However, these gatekeepers are becoming increasingly overwhelmed by the sheer volume of content uploaded daily. According to Statista, in a single minute 147,000 images are posted to Facebook, 500 hours of video are uploaded to YouTube and 347,222 stories are posted to Instagram. A small percentage of these images are at best offensive and at worst abhorrent, and they risk creating a highly toxic environment that drives law-abiding users off the platforms. As the ILO report found, the largest digital platforms outsource their content moderation to contractors who employ many thousands of content moderators to flag the worst content for removal. Facebook employs 40,000 content moderators worldwide. However, even these dedicated teams struggle to keep up with the sheer volume of uploads. Moderators are suffering post-traumatic stress disorder after being exposed to an overwhelming number of harmful images and videos daily.
At Facebook’s scale, if even 1% of uploaded images were illegal, that would demand the removal of 1,470 images a minute, 88,200 images an hour, or 705,600 images in an average 8-hour shift. This is simply impossible for human moderators to cope with. In the age of automation, it’s particularly disheartening to learn that human moderators are expected to work like robots and achieve 98% accuracy by watching a never-ending stream of toxic images.

Online safety legislators are also putting pressure on platform providers to swiftly remove unlawful content before it can do harm. In the US, Section 230 of the Communications Decency Act 1996 is currently under review owing to the legal protections that it affords online service providers for the content posted by third parties. Europe and the UK are proposing new laws to make digital platform operators liable for the rapid removal of user-generated content that could be deemed harmful to other site users or the wider public, with penalties for non-compliance ranging from 6% to 10% of global turnover. Germany’s Network Enforcement Act (NetzDG) requires social network providers that have more than 2 million registered users in Germany to remove ‘manifestly unlawful’ content within 24 hours of receiving a complaint.

The problem is undeniable: vastly outnumbered by uploaded content, human moderators face huge backlogs of toxic material that can cause intense stress and burnout.

How AI protects moderators’ mental health

Some social media, community and other digital platform providers are turning to artificial intelligence to automate and scale up content moderation to protect users and support their human moderator teams. AI-powered content moderation can remove explicit, abusive, or misleading visual content from digital platforms such as social media and online games. It automatically scans and removes content that has a high risk score for illegal, offensive, or harmful material, so that it never reaches the moderation queue.
Where the content score is ambiguous, AI can flag it for review by a human moderator. Image Analyzer is a member of the Online Safety Tech Industry Association (OSTIA). We hold US and European patents for our AI-powered content moderation technology, Image Analyzer Visual Intelligence Service (IAVIS), which identifies visual risks in milliseconds, with near zero false positives. IAVIS automatically screens out more than 90% of illegal and harmful videos, images and live-streamed footage, leaving only the more nuanced images for human review. IAVIS helps organizations to combat workplace and online harms by automatically categorising and filtering out high-risk-scoring images, videos and live-streamed footage. By automatically removing harmful visual material, it enables organizations to uphold brand values and aids compliance with user safety regulations that require its timely removal. Its ability to instantly detect visual threats within newly created images and live-streamed video prevents them from being uploaded to digital platforms where they could cause further harm to other site users, particularly children. By applying AI-powered visual content moderation trained to identify specific visual threats, IAVIS gives each piece of content a risk probability score and speeds the review of users’ posts. The technology is designed to constantly improve the accuracy of core visual threat categories, with simple displays that allow moderators to easily interpret threat category labels and probability scores. IAVIS can scale to moderate increasing volumes of visual content without impacting performance or user experience. Organizations use IAVIS to automatically moderate previously unseen images, video and live-streamed footage uploaded by people using or misusing their access to digital platforms, so that manifestly illegal content never reaches your site or your moderation queue.
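The scoring-and-routing flow described above can be sketched in a few lines. This is a generic illustration of threshold-based triage, not Image Analyzer’s actual logic, and the threshold values are hypothetical assumptions:

```python
# Generic sketch of risk-score triage; thresholds are hypothetical.
AUTO_REMOVE = 0.90    # high-risk content is removed before it reaches the queue
HUMAN_REVIEW = 0.40   # ambiguous scores are flagged for a human moderator

def triage(risk_score: float) -> str:
    """Route one piece of content by its risk probability score."""
    if risk_score >= AUTO_REMOVE:
        return "auto-remove"
    if risk_score >= HUMAN_REVIEW:
        return "human-review"
    return "publish"

# Only the ambiguous middle band ever lands in a moderator's queue.
queue = [s for s in (0.97, 0.62, 0.55, 0.31, 0.05) if triage(s) == "human-review"]
```

The design point is that the worst material is filtered out automatically at the top band, so human moderators see only the smaller, ambiguous middle band rather than the full stream.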
If your organization is striving to maintain worker safety, protect your online community from harmful visual material and comply with impending laws, we are here to help you. To discuss your content moderation needs, please email us, or request a demo. References:
https://www.image-analyzer.com/news-and-blog/ia-blog-how-ai-protects-human-content-moderators-mental-health/
Managing Jive Places 2

Hello and welcome to the Managing Jive Places 2 session. In this session, we will go through the tiles that were not covered in the last session, how to create and manage place templates and banners, content moderation and managing groups. If you are new to the Jive platform and haven’t reviewed the previous sessions, please do so before this one. Also, you will need at least Manage Community access to be able to practice all the activities covered in this session.

Tiles

First let’s dig into more detail about the tiles that are available to you. Keep in mind that some tiles are only available in the wide format and some are only available in the narrow format. Also, some tiles are only available on the global home page - and some of the tiles are available only on place landing pages. There is a complete matrix of the tiles and where they appear in our nugget tutorials. As a reminder, to edit the landing page, select the page name from the Manage cog. And when you click on Add a tile while in page edit mode, you will launch a tile picker. We will go through the tiles in the order that they appear in the picker, but we will not review the tiles that were covered in the previous session.

The first category is Collaboration.
- Action items: displays a list of content from this place that has actions assigned. Once an action has been marked resolved, it will disappear from this tile. Useful in collaborative places to keep track of what needs to be worked on
- Featured content tile: in places where you are an admin, you will see the option to “Add to Featured Content” in the Actions menu of any piece of content. When you select this, the content title will dynamically appear in this tile on the place landing page. To remove the link from the tile, just click on “remove from featured content” in the Actions menu of the piece of content.
This curated tile is good when you want links to be displayed for a long period of time, regardless of other activity going on in the place
- Finalized content tile: this tile will dynamically show links to any content that has been marked Final in the place. It is best used in document collaboration groups when you want to note which content is ready to be published elsewhere
- Key dates tile: you can add entries about upcoming events manually into this tile. Once the date you enter is past, the entry will disappear. Note: do not confuse this tile with the Upcoming events tile, which is outlined later.
- Popular content: this is a dynamic tile that will display the content that is getting the most activity (views, comments, likes, etc.) in this place.
- Recent decisions: this tile displays links to content which has had a comment or reply marked as a decision. It is best used for collaboration groups to keep track of decisions that have been made.
- Upcoming events: this tile dynamically displays a list of links to and the dates of any events that have been created in the place. Once the date has passed, the link to the event will disappear from the tile. Do not confuse this with the Key Dates tile, where the items must be manually entered.

In the Graphic Elements category:
- Carousel tile: showcase images with links. It can be set to show thumbnails and allow for auto-play.
- Gauge tile: this tile allows you as the admin to manually adjust the status of a gauge to show how a project or team is doing.
- Image gallery tile: showcase images in this tile. Links cannot be added to this tile - it is only for displaying images in a gallery format
- Video (external) tile: embed a video from a source like YouTube or Vimeo without creating a piece of content in Jive. Users can play the video from the tile itself.
- Video (featured) tile: From the actions menu in a video that has been uploaded into Jive, select “add to featured content”.
The video will dynamically be displayed in this tile - and can be played from within the tile.

Lists - custom:
- Content sets tile: this tile allows you to create long lists of content links on a landing page, divided into sections. Users can click into each section to display the links. Note: all content must live in the place where this tile is used. When a user clicks into one of the links, they will see links to the previous and next content so they can easily move between the pieces of content.
- Expandable sections tile: this tile is almost the same as the content sets tile except that the content can live anywhere in Jive - it is not limited to the place where the tile is put.
- Featured people tile: this tile is great for showcasing place owners or subject matter experts on a place landing page. You can add up to 10 people into the tile. The tile displays the avatar, name and title (if the profile field is used).

Lists - dynamic
- Answered questions tile: this tile shows a list of links only to questions in this place that have a reply marked correct
- Featured quest tile: with this tile you can display the events of any Rewards quest. One of the nice features of this tile is that it shows a little green check on any event that has been completed by the user so they can keep track of where they are in earning the badge. This tile is only available when Rewards is enabled.
- Latest blog posts tile: this tile dynamically displays links to the latest published blog posts in the place. As a new blog post is published, it will occupy the top spot.
- Leaderboard tile: this tile displays the number of points the user has earned for activities they’ve done in the place. Under that, a list of the top earners is displayed. This tile is only available when Rewards is enabled.
- Similar places tile: this tile allows you to enter a tag or tags - and a link to any place which matches ALL the tags added will be displayed in the tile.
Note that if you add more than one tag, the place must have all tags listed - not just one of them.
- Tagged content tile: like the similar places tile, but for content only. Adding more than one tag means that the content must have all tags, not just one of them.
- Unanswered questions tile: this tile will display links to all questions posted in this place that do not have a reply marked correct

Support
- The tiles in this section have all been previously described.

External add-ons:
- If you have created or purchased any custom tiles, they will be displayed here

Custom tiles:
- Create a content or place tile: this allows you to create a tile that can be used in multiple places - based on the key content and places tile. You add links to content and/or places in this tile, and then specify whether you want to lock the tile so that only you control what is in it, and whether you want others to be able to use it for their places too. Here is an example of how to use this tile: say you have a series of projects that all should have the same list of links to project management guidelines that live in a common PMO space. You can create this tile with the links and have everyone who is going to be creating one of the project places add this tile to their landing page so that they are all consistent.
- Create a people tile: this tile operates on the same principle as the “create a content tile” except that you add people instead of content or places

These are tiles that are only available on the global home page:
- Hero tile: this tile is only available in the top tile area that expands across the width of the page. You can add an image, text and button content to link to a piece of content or a place (or even an external link) from the button. It is an easy way to create an attractive call to action without having to know html. Select an image that is at least 1200px wide.
- News: displays links to blog posts that are part of a push news stream.
Note: the contents of this tile may be redundant with what is in the news streams
- Trending content: displays links to content that has a lot of activity from all over the community. This will be similar to what appears in the Top & Trending cards in the dynamic area of the home page
- Trending people: displays the avatars and names of people who are active - and whose activity attracts a lot of engagement (comments, likes and shares on the content they create).

And these are tiles that are not available on the global home page:
- Video (featured)
- Featured content
- Content sets
- Unanswered questions
- Answered questions
- Popular content

A word about custom tiles: it is possible to bring content from other systems into custom tiles using APIs, or to create new types of functionality in Jive using custom tiles. For now, consider that custom tiles are only available on the desktop and mobile browser versions of Jive - not in the Jive Daily app.

[snippet break]

Now let’s dive a bit more into the place banner templates and place templates described in previous sessions.

Place banner templates

First we’ll change the place banners for the existing place templates and the banner presets for the entire community. Using the dropdown next to your avatar, select admin console. From there, you’ll need to access the advanced admin console by clicking Advanced Settings on the bottom left. Then click on the Add-ons navigation item. In the left navigation, click on Place Template Management and at the bottom of that page, click on Manage Template Banners. Once on the page, you’ll see that the top item is the default team collaboration banner. Down below are listed the rest of the out-of-the-box place template banner thumbnails. On this page, you can change the banner image for each of these - if you are going to use them. Once you create your own place templates, they will also appear in this list.
Simply click the Edit link on any of the thumbnails to launch the configuration popup. In the popup, you can change the title color and the background color if you want to use a solid color. Use the hex code for the color you wish to use. If you do not want to use a background image, first click the “remove” link next to the reference to the current image file. If you want to use a different background image, you can upload it by clicking the “change” link, which will launch a picker. Reminder: the optimal image size if you want to use one image is 1200px x 150px. Otherwise, you can set it so that it repeats using the background repeat section. If you want to update any existing places that are using this template, tick the Update existing usages box. Ticking this box will not update the banners for anyone who has already changed the banner in their place. Finally, click the Save button. To change the presets that users can select from, click the tab at the top that says Place Banner Presets. The configuration on this page works a bit differently. Instead of editing the existing banners, you can delete the presets and add your own new ones using the same technique just shown. To change the order of how they appear on the front-end, just drag and drop the thumbnails. At this time, there is no way to include a place icon in these templates - each place must have its own icon uploaded individually.

[snippet break]

Anyone can create personal activity page templates from a place they own - and as someone with at least Manage Community access you can make community-wide templates that are available for everyone. To clarify: place templates can only be applied to the activity page - not custom tile pages. Prior planning is useful so that you can decide if the existing place template categories are relevant to the templates you will create - or if you would like to create your own.
The categories can be managed from the same page we were on before: advanced admin console > add-ons > place template management. The existing categories can be renamed or completely hidden. Hiding the categories will also hide the templates themselves from the users. However, some of the templates appear in more than one category. Now that you have organised the categories that you want to be available to users, let’s move to the front-end to see how to manage the templates themselves. If you do not already have a browser tab open with the front-end of Jive open, click the View Site link in the top right of the admin console page. Using the method of your choice, arrive at a place and select Settings from the manage cog. Click on Browse Templates to see the template picker. Then click on one of the templates in the middle and observe how the Details pane changes to show the tiles that are included in that template. Then click Apply Template to see how the page itself changes to include the new tiles (note: if you are doing this in a place where you like the current layout, be sure to click Cancel instead of Save!). It is not possible to change the existing templates. But you can deactivate them and create your own. You can see a link that says “deactivate this template” on each template. Only admins can see this link. If there are individual templates that you want to deactivate, you can do it this way. Important: it is not possible to deactivate or replace the default Team Collaboration template. To create a template, add the tiles to the activity page of a place the way you want them to be. You do not need to add any content into the tiles, as this won’t copy anyway. The exception is if you created any custom tiles with links to people, places or content which you want to be included in this template. The links in these tiles will be the same in any place where the template is applied. The other item that will be included in the template is the place banner.
You can use one of the presets or put your own image or color in the banner of this place and it will become part of the template. Note: any place icon that you add will not become part of the template. Once you have the page the way you like it, save it. Then from the manage cog, you can select either “Save as new template” for a personal template, or “Save as community template” to create one for everyone to use. You must be on the activity page to see these items in the manage cog. Either choice will launch a popup where you can name your template and add a description. When creating a community template, you must select a category for it to live in. Adding a tag into the template will ensure that all places that are created using this template will have the correct tag. Once you save the community template, anyone can then use it for their place.

Not everything in the configuration of the template will be part of the template:
- Images in any tiles like the carousel or banner tile
- Content configurations in tiles like “Key content and places” or the document viewer tile
- Categories that have been created in the place
- Place navigation items that have been hidden
- Place icon
- Content that lives in the place

An important point to keep in mind: when you create a template, you are the only person who can then deactivate or delete the community-wide template. It is a good idea to use a “god” account (covered in the advanced community management session) to create community-wide templates. The only way to access the template management area is to either create a new place or go to the Settings page of an existing one. Personal templates can be found in the Your Templates category. And once a template is created, it cannot be edited. You must either delete it, or deactivate it and create a new one. Only custom templates can be deleted. The defaults can only be deactivated.
[snippet break]

Let’s finish this section with a couple of other advanced place management topics:
- Static resources: Jive has what is called a “statics” folder, which is where images, css and js files can be stored for use all over the community. Each place has its own listing, but in reality, an image that is uploaded in one place can be used in another place. Storing images and resources in the statics folder can help keep page load times down. Resources that live in this folder are available to anyone who knows the URL, so don’t upload anything sensitive. To do this, you can either select Static Resources from the manage cog in any place, or Manage Files in an html tile. Once in the popup, you can upload your file and then copy the URL. If there are already resources there, you will see them listed. Once you copy the URL you can insert it into any place where html is used. A caveat: you cannot use this technique to add images to tiles where an uploaded file is necessary - such as the banner tile, which requires an image to be uploaded. However, here is an expert tip: upload any icons you want to use for the helpful links tile into the statics folder so that you can use them in the tile. You can also use this to upload a logo to use in the emails that are sent from Jive (we will cover this in more detail in the advanced community management sessions)
- Blog management: as a place owner, you can manage the publishing of any blogs in the place, including ones that are scheduled to be published, in draft form (including ones that have been given a publishing date in the future), or awaiting moderation, by using the Blog function from the manage cog. This can be useful when other authors have created the blogs - making them difficult to find using other methods.
- The first page is the Overview page - this lists the most recent posts.
- Clicking View All or Posts to the right will display the complete list, along with the publishing status, tags and the date that the blog was published. You can tick the boxes on the right to select blogs that you want to publish or delete from this page
- Options: this is a rarely used function which is deprecated, so we won’t review it
- Import: it is possible to import content from another blog platform using this page.
- Under the View section, Blog will take you to the actual blog, which shows the posts in a list format
- Both the RSS feed links are deprecated, so we won’t review them.

[snippet break]

Content Moderation

In a previous session, we reviewed a way to add moderators to individual documents. Content can also be placed into moderation before publishing at the space level. You can set up moderation in individual spaces and for individual content types, including comments and replies. You can also set up different moderators for each space. Before you can set up moderation in a space, you’ll need to assign the Moderation permission to a user permission group or via a user override. As a reminder, this is most easily done from the admin console by clicking the Structure Your Community button and then using the cog to edit the permissions of the space(s) where you want to turn moderation on. Once you’ve done this step, we’ll need to use the advanced admin console to set up the actual moderation. The advanced admin console is reachable from the Advanced Settings link at the bottom left of any page in the basic admin console. Once in the advanced console:
- Click the Spaces link in the top navigation.
- From here, click on the Settings tab.
- And then click the Moderation Settings link in the left navigation.

You will notice that the page loads with the root, or default, space name in the title with a link that says “change space”. Underneath is a list of all the different content types that can be moderated.
(Note: if you were to set up moderation at this level, personal content and content in social groups would be moderated. We don’t recommend moderating at this level, but it may be necessary for certain use cases, so we mention it.) Click the “change space” link to launch the space chooser and then select the space where you want to enable moderation. If you have not configured space moderators, you will see a red reminder message and you will be asked to specify a moderator before you can save your moderation settings. Select the types of content you want to moderate and save. You will need to do this separately for every space where moderation should be enabled - there is no inheritance function for moderation like there is for other space permissions. Now that we have moderation set up, let’s look at how the process works on the front end:
- A user with Create access (including admins) creates the content. A yellow note warning them that this content will be placed into moderation is displayed.
- Once the content is published, a note warning the author that the content has been placed in moderation appears.
- The author can continue to edit the content while it is in moderation.
- The moderator(s) for the space receive a notification that there is content waiting in the moderation queue.
- A new navigation item on the Jive Inbox page called Moderation is now available to the moderator(s). When they click into this, they see a list of items waiting in their moderation queue.
- From this queue, there are several ways to review the content:
  - Clicking the title of the content will display its contents directly in the queue.
  - Clicking the “view in context” link will display the piece of content.
  - Clicking “edit in context” will display the piece of content in edit mode so that the moderator can make changes.
  - Adding a note will allow the moderator to create a note about the piece of content for later reference.
However, once the content has been approved or rejected, there is no way to access these notes. They are most useful when there is more than one moderator for the content and they want to share notes with each other.
- The moderator can then approve or reject the content item by selecting one or the other in the dropdown to the right of the content. The author will receive an email notification in either case.
- When the content is approved, it will be published.
- When content is rejected, it goes back into a draft state accessible by the author.
Some advanced options exist for moderators - these are useful when there are a lot of items to be moderated:
- Items in the queue can be filtered by publishing place, content type, moderation type (the second option - reported abuse - will be covered next), and user name.
- To bulk approve or reject, use the dropdown on the right above the individual selector to choose Approve All or Reject All.
And a final note: when there is more than one moderator for a space, only one of them needs to approve the content before it is published.
[snippet break] Report abuse is a post-publishing moderation process that lets users report content, including replies and comments, that they feel is inappropriate. When it is enabled, there is a “report abuse” item in the actions menu of every piece of content and every comment or reply community-wide. When a user clicks the link, they are required to select a category of abuse from a dropdown, and they have the option of adding a reason why they feel it is abusive. Depending on the setting in the admin console, the content may be pulled into moderation right away - or it might stay published until more users have reported it. Abuse reporting can only be enabled at the root level. To enable abuse reporting, stay in the advanced admin console. If you are still in the moderation settings section, you will see the link just below it on the left.
Otherwise, you will find it under Spaces > Settings. When you click into the page, there is the option to enable it. Under that is a field where you can set how many times a piece of content has to be reported as abusive before it goes into the moderation queue. The default is 5, but we recommend setting it to 1 or 2. When the content goes into moderation in a space where moderators are specified, it will appear in their moderation queue. If there is no moderator at the space level, or the content is someone’s personal content, the item goes into the queue of users with Manage Community or higher system access. Now let’s shift to social group management. As a reminder, there are four main types of groups: open (or public), restricted, private and unlisted. The default setting is to allow all users to create all types of social groups, but if you decide that you need more control over group creation in your community, you have several options. Start by navigating to the Permissions page in the admin console and then Social Group Permissions. You’ll see that All Registered Users has a full set of permissions and the Everyone group can View. (As a reminder, you can simply remove the Everyone group since it is not needed.) To change the permissions for all users (except system admins), click the Edit Permissions link next to the name. In the popup, you’ll see the options that can be configured:
- View social group: users can view everything in open and restricted groups and can see the names of private groups (but not the content, unless they are a member). This also means that users can participate fully in open groups and create discussions and questions in restricted groups.
- Create group (public): users can create open/public and restricted groups, plus everything in View access even if View is unticked. But they cannot create private/unlisted groups.
- Create group (private): users can create private and unlisted groups, plus everything in View access even if View is unticked. But they cannot create public/restricted groups.
- Manage social group: this permission is meant to allow certain users to manage any group even if they are not the group admin, but in reality the description is misleading. This setting gives a user permission to see anything in any group - even unlisted groups. We do not recommend using this permission, as Full Access admins already have this ability.
- Create externally accessible groups: users can create groups for sharing and collaborating with people outside of your organisation. External groups must be either private or unlisted. More about this shortly.
Additional user permission groups and user overrides can be added using the same techniques as for space permissions.
[snippet break] External contributors and externally accessible groups
Jive has the ability to give limited access to users outside of your corporate network. This user status is called “external contributor”. Users with this designation can only see content that lives in the externally accessible private or unlisted groups they are members of - and the user profiles of the other members of those groups. Groups that are designated as externally accessible have an orange marker wherever they are visible. All content that lives in these groups also has an orange marker and a note that it is accessible to external contributors. We will cover these users in more detail in the advanced community management session, but for now it is important to understand that only groups can be used for this special access, not spaces.
Non-member content editing
This feature allows certain content that lives in a private group to be shared with individuals who are not members.
An example of how this can be used: you and your team members are working on a series of documents together in a private group. You are ready to have someone from Legal review one of them, but not the others - so you don’t want to invite this person to join the group. You can share the individual document with the person from Legal using the Share function at the top of the document. After you have selected the user(s) in the share field, you will be asked whether you want to allow them access to the content or have a PDF sent to them. Once the content has been shared, it will have an orange marker and a note that it has been shared with someone outside of the group. Non-member content editing can be enabled individually in the Settings menu of private groups. But for it to be an option, it must also be enabled at the community-wide level in the advanced admin console > System > Settings > Non-member content editing.
Place categories
The final item we will cover in this session is place categories. Similar to content categories, place categories offer a way for browsing users to filter places. Examples might be location, department, communities of practice, etc. When place categories exist, they become one of the filters on the main Places page, and the page URL can be used for other navigational elements. Place categories are not searchable. To configure them, go to the advanced admin console > System > Settings > Place categories. Once they have been created, they will appear as checkboxes in the Settings popup of any place. A place can be included in more than one place category. Thanks for attending this session! In the next session, we will do an in-depth tour of the admin console and advanced settings for your community.
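As a recap, the approval and abuse-reporting flow covered in the moderation sections above can be summarised as a toy state machine. This is a purely illustrative sketch of the behaviour described in this session, not Jive’s actual API; the class, method names, and the default threshold of 2 are assumptions.

```python
# Toy model of the moderation flow: draft -> moderation -> published/rejected,
# plus post-publishing "report abuse" with a configurable threshold.
# Hypothetical sketch only -- Jive's real implementation is not exposed like this.

class ModeratedContent:
    def __init__(self, author, abuse_threshold=2):
        self.author = author
        self.state = "draft"
        self.abuse_reports = []
        self.abuse_threshold = abuse_threshold  # we recommend 1 or 2 (Jive defaults to 5)

    def publish(self, space_moderated=True):
        # In a moderated space, publishing places the item in the queue instead.
        self.state = "moderation" if space_moderated else "published"

    def review(self, approve):
        # Only one moderator needs to act, even if the space has several.
        assert self.state == "moderation"
        self.state = "published" if approve else "draft"  # rejected items return to draft

    def report_abuse(self, category, reason=""):
        # Published content returns to the moderation queue once the
        # report count reaches the configured threshold.
        self.abuse_reports.append((category, reason))
        if self.state == "published" and len(self.abuse_reports) >= self.abuse_threshold:
            self.state = "moderation"

doc = ModeratedContent("alice")
doc.publish()                 # moderated space: goes into the queue, not live
doc.review(approve=True)      # a moderator approves -> published
doc.report_abuse("spam")
doc.report_abuse("spam")      # second report hits the threshold
print(doc.state)              # -> moderation
```

Note how a threshold of 1 or 2, as recommended above, pulls reported content back into the queue quickly, whereas the default of 5 leaves it published until several users complain.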
https://community.aurea.com/videos/14430
WhatsApp suspended over 3 million Indian accounts from its platform between June and July 2021, a new compliance report from the social media platform reveals. Facebook, Instagram, and WhatsApp all released their monthly compliance reports on August 31, under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. In total, the reports show an increase in the amount of content the platforms flagged and took action against (via their automated systems) and in the number of user complaints received, compared to the previous report covering May and June 2021. These monthly reports show that major social media platforms like Facebook and WhatsApp are trying to comply with the requirements of the IT Rules, as non-compliance could lead to platforms losing immunity under the IT Act, 2000.
Content moderation through WhatsApp
The 3 million accounts WhatsApp blocked were identified through automatic detection and in-app reports from its users. That number, according to WhatsApp, does not include user complaints received via email and mail by WhatsApp’s Complaints Officer in India, Paresh B. Lal. WhatsApp received 594 complaints from users about account support, objections to suspension, product support, security and other issues. Of the 594 complaints, action was taken on 74, which the report said could mean blocking an account or restoring a previously blocked account. The remaining 490 were not actioned, for one or more of the following reasons:
- The user needed help to access their account
- The user needed assistance to use one of WhatsApp’s features
- The user wrote to give feedback
- The user requested the recovery of a blocked account and the request was denied
- The reported account did not violate Indian law or WhatsApp’s Terms of Service
Content moderation through Facebook and Instagram
Facebook and Instagram jointly disclosed their user complaint and content action numbers in another report.
In almost all cases, the two had identified and acted against a larger volume of problematic content compared to their previous report, and had received a higher number of user complaints.
Content actioned by Facebook
Here is a breakdown of all problematic content that Facebook took action against, along with the percentage of such content flagged by its automated systems:
Content actioned by Instagram
Unlike Facebook, Instagram does not yet have a metric for spam content. Here is a breakdown of all of the other content actioned:
Source: Facebook’s monthly compliance report
User complaints received by Facebook
In total, Facebook said it received 1,504 complaints from users via the contact form on its website and its Complaints Officer in India, Spoorthi Priya. All of these were responded to; in 1,326 cases, tools were provided for users to resolve the issues themselves, including ways to report content for specific violations, self-remediation flows where they can download their data, ways to fix problems with hacked accounts, etc. The remaining 178 complaints underwent specialised review by Facebook, which resulted in action being taken on 44 of them. These actions include removing a post, covering it with a warning, or disabling the account. Here is a breakdown of these complaints by subject:
User complaints received by Instagram
Instagram received 265 complaints from users – a sharp increase from the 36 complaints noted in its previous report. Of these, 181 users were provided with tools to resolve the issue, while the remaining 84 underwent specialised review; action was taken against 18.
What the IT Rules, 2021 require
The IT Rules require social media intermediaries to:
- Proactively identify and remove content: This includes content moderation (through automated mechanisms) of posts that are defamatory, obscene, pornographic, paedophilic, invasive of privacy, or insulting or harassing on the basis of gender, among other categories.
- Publish regular compliance reports: These reports should be published every month and include details of complaints received, actions taken and “other relevant information”.
- Appoint key personnel: Significant social media intermediaries (those with more than 50 lakh registered users) must appoint a Chief Compliance Officer, a Nodal Contact Person, and a Resident Grievance Officer, all of whom must be based in India and be employees of the platform.
- Disable content within 36 hours of an official order: The rules also require intermediaries to provide identity verification information, or assist a government agency with crime prevention and investigation, no later than 72 hours after receiving a legitimate order. They must also keep records of disabled content for 180 days.
https://elitmuspreparation.com/whatsapp-facebook-and-instagram-publish-compliance-reports-in-accordance-with-it-rules/
The international organization Forum on Information and Democracy (FID) laid out policy recommendations for states and tech companies on how to stop information chaos, protect democracies, and uphold human rights worldwide. In a 128-page report, it identified four structural challenges and proposed concrete solutions for each of the following:
- platform transparency
- content moderation
- promotion of reliable news and information
- private messaging services
The report, which detailed 12 main recommendations and a total of 250 proposals, was produced by a team of rapporteurs and the FID working group on infodemics, co-chaired by Rappler CEO Maria Ressa and former member of the EU Parliament Marietje Schaake. Christophe Deloire, the forum’s chairperson, said “a structural solution is possible to end the informational chaos that poses a vital threat to democracies.” “The exercise of human rights presupposes that democratic systems impose rules on the entities that create the standards and the architectures of choice in the digital space,” Deloire said. “Social media, once an enabler, is now the destroyer, building division – ‘us against them’ thinking – into the design of their platforms…. It’s time to end the whack-a-mole approach of the technology platforms to fix what they have broken,” Ressa said. “The past years have offered a wake-up call for those who needed it…. Without explicit and enforceable safeguards, the technologies promised to advance democracy will prove to be the ones that undermine it. It is now vital that democracy is made more resilient,” said Schaake. Since 2019, at least 37 countries, mostly from Europe, have signed the FID- and Reporters Without Borders-led International Partnership on Information and Democracy, which calls on platforms to uphold their responsibilities. The signatory nations also vowed to ensure their legislation and policies promote a healthy digital space that “fosters access to reliable information” and upholds freedom of expression.
In Asia, only India and South Korea have so far signed the declaration. In summary, here are my four key takeaways from the report:
The need for a human rights-centered approach to tech
The UN Guiding Principles on Business and Human Rights (UNGPs) impose on business enterprises the responsibility to respect human rights, including but not limited to the right to freedom of expression and information, in the places where they operate. This is especially applicable to platforms and their business models. Coined in 2014 by American author and scholar Shoshana Zuboff, the term surveillance capitalism describes a business model predicated on harvesting user experience through online platforms, smartphones, apps, and other devices, and manipulating behavior for monetization. (READ: What you need to know about surveillance capitalism) The issue of human rights is also relevant to platforms’ content moderation. At present, platforms can arbitrarily impose policies that are not in sync with international human rights law, and no regulatory body exists to check them. Disinformation and misinformation propagate lies and incite hate and conflict against individuals and groups of people. International human rights law, the report argued, could provide a universal framework for defining problematic content and addressing it. (READ: With anti-terror law, police-sponsored hate and disinformation even more dangerous)
Transparency
Platforms should be transparent to users, vetted researchers, civil society, and regulators about their algorithms, content moderation, policies, terms and conditions, content targeting, and social influence building – functions that affect how the public views the world and processes information. Transparency is also needed to determine whether platforms are abiding by their own policies and responsibilities.
The information provided by platforms must also be open to audit by regulators and vetted researchers to ensure companies are operating as intended. The participation of civil society would also be critical here. Platforms should also be upfront about their conflicts of interest, in order to stop commercial and political interests from influencing the information space. To prevent a repeat of the Cambridge Analytica misuse of Facebook data, however, the report suggested “differential privacy” as an option, wherein data can be made widely available for accurate analysis without exposing confidential information about any individual. “Differential privacy addresses the paradox of learning nothing about an individual while learning useful information about a population,” the report said.
Regulation
To ensure platforms are abiding by their own policies and transparency requirements, the group proposed public regulation. As a starting point, the report suggested the transparency regulation models for Europe and some of the current proposals in the United States. But when it comes to content moderation, the group cautioned against public regulation for fear it might lead to censorship. Seemingly recognizing at-risk nations like the Philippines, the report said “government demands can be as problematic as company policies in some jurisdictions.” Regulators, too, are not fool-proof. The report suggested that there should be democratic safeguards against potential abuses or malpractice by governments and the regulators themselves. These include making public the number and nature of personal data requests made to companies and of the content they sought to have taken down, among others. The organization said it can play a leading role in inventing new models of public regulation and co-regulation for the much-needed global governance framework.
Accountability
The report proposed a legally binding transparency regime to address online content moderation and disinformation issues.
While the authors admit it won’t end all the problems, “it is a necessary condition to develop a more balanced equilibrium of power” between platforms and democratic societies. After all, it is commensurate with the power that platforms hold over information ecosystems. Proposed sanctions for non-compliance range from large fines and mandatory publicity to administrative penalties. The report also proposed the creation of a Digital Standards Enforcement Agency to enforce safety and quality standards in the digital sphere. Its proposed powers include the authority to prosecute non-compliant offenders; to enforce professional standards in software engineering, as these engineers are the platforms’ builders; and to issue non-compliance orders, among others. This could be a contentious issue, but the organization said it could launch a feasibility study on the implementation of the agency. Here are the 12 main recommendations of the working group to states and tech companies:
Public regulation to impose transparency requirements on platforms
- Transparency requirements should relate to all platforms’ core functions in the public information ecosystem: content moderation, content ranking, content targeting, and social influence building.
- Regulators in charge of enforcing transparency requirements should have strong democratic oversight and audit processes.
- Sanctions for non-compliance could include large fines, mandatory publicity in the form of banners, liability of the CEO, and administrative sanctions such as closing access to a country’s market.
A new set of baseline principles on content moderation
- Platforms should follow a set of Human Rights Principles for Content Moderation based on international human rights law.
- Platforms should assume the same kinds of pluralism obligations that broadcasters have, like the voluntary fairness doctrine.
- Platforms should expand the number of moderators and spend a minimum percentage of their income on improving the quality of content review, particularly in at-risk countries.
New approaches to the design of platforms
- A Digital Standards Enforcement Agency should enforce safety and quality standards of digital architecture and software engineering. FID could launch a feasibility study on how such an agency would operate.
- Conflicts of interest of platforms should be prohibited, to avoid the information and communication space being governed or influenced by commercial, political or any other interests.
- A co-regulatory framework for the promotion of public-interest journalistic content should be defined, based on self-regulatory standards such as the Journalism Trust Initiative; the use of friction to slow down the spread of potentially harmful viral content should be added. (READ: Increasing sharing friction, trust, and safety spending may be key Facebook fixes)
Safeguards should be established in closed messaging services when they enter into a public space logic
- Limit some functions to curb the virality of misleading content; impose opt-in features for receiving group messages, and measures to combat bulk messaging and automated behavior.
- Platforms should inform users of the origin of the messages they receive, especially those that have been forwarded.
- Platforms should reinforce mechanisms for users to notify them of illegal content, as well as appeal mechanisms for users who were banned. – Rappler.com
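The “differential privacy” option raised under Transparency above can be illustrated with a minimal Laplace-mechanism sketch: an aggregate stays useful while any single user’s contribution is masked by calibrated noise. The dataset, the epsilon value, and all function names below are invented for illustration; this is not a production-grade mechanism.

```python
import math
import random

# Minimal sketch of the Laplace mechanism behind "differential privacy":
# learn something useful about a population (an aggregate count) while
# learning essentially nothing about any one individual.

def laplace_noise(scale):
    # Sample a Laplace(0, scale) variate via inverse-CDF sampling.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    # A counting query has sensitivity 1 (adding or removing one user
    # changes the true count by at most 1), so noise of scale 1/epsilon
    # is enough to mask any individual's presence.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative dataset: which users saw a piece of flagged content.
users = [{"saw_flagged_post": i % 3 == 0} for i in range(3000)]
noisy = private_count(users, lambda u: u["saw_flagged_post"])
print(round(noisy))  # close to the true count of 1000, but never exact
```

The design trade-off is visible in the `epsilon` parameter: a smaller epsilon adds more noise (stronger privacy, less accurate aggregates), which is the “paradox” the report quotes.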
https://www.rappler.com/technology/features/experts-cite-structural-solutions-online-information-chaos/
The article has been authored by Ms. Shweta Mohandas and Ms. Torsha Sarkar, Policy Officers at the Centre for Internet & Society.
Introduction
With the amount of text, images and videos being uploaded to the internet increasing each day, social media companies, internet giants and even law enforcement agencies are looking to technologies such as Artificial Intelligence (AI) to filter through this content. However, there continues to be a significant lack of definitional clarity regarding what is meant by AI, especially in the context of filtering, moderating and blocking content on the internet. Simply put, AI and its related tools encompass a broad variety of technical and algorithmic technologies that internet companies have been increasingly relying on to keep their platforms free from a swathe of objectionable content, which has included revenge porn, extremism, child sexual abuse material (CSAM) and copyright-infringing works. There are several manifestations of this sort of technology, each with their own set of advantages and disadvantages. For instance, hash-matching is a popular method by which certain types of objectionable content can be flagged. In this technique, a piece of objectionable content is denoted by a hash, which is essentially a numerical representation of the content, and comparatively smaller in size. Once a piece of content is flagged as objectionable, it is tagged with its hash and entered into a database of other known objectionable content. Any future uploads of the same content are automatically matched against this database and flagged. This has its advantages, since the smaller size of the hash makes the database easier to maintain than a database of the original files. On the other hand, the nature of the technology makes it ineffective against new content, since no corresponding hashes would exist in the database.
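The hash-matching workflow just described can be sketched in a few lines. SHA-256 and the function names here are illustrative choices; production systems typically use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, which a plain cryptographic hash does not.

```python
import hashlib

# Minimal sketch of hash-matching: known objectionable files are stored
# only as small fixed-size digests, and every upload is hashed and
# compared against that database.

def digest(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

known_hashes = set()

def flag_as_objectionable(content: bytes):
    # Only the hash is retained, not the original file -- this is why the
    # database is cheaper to maintain than storing the files themselves.
    known_hashes.add(digest(content))

def check_upload(content: bytes) -> bool:
    # True if the upload matches a previously flagged item.
    return digest(content) in known_hashes

flag_as_objectionable(b"known bad image bytes")
print(check_upload(b"known bad image bytes"))   # -> True: re-upload of a flagged file
print(check_upload(b"never seen before"))       # -> False: no hash exists for new content
```

The second lookup makes the limitation in the paragraph above concrete: genuinely new content, or even a slightly edited copy under a cryptographic hash, produces a digest that is absent from the database and therefore passes unflagged.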
Another instance is the use of Digital Rights Management (DRM) systems for enforcing copyright, which use a number of ways to flag and remove infringing content. For example, YouTube’s digital fingerprinting locates user-generated videos that potentially infringe known copyrighted works. The digital fingerprinting system that YouTube uses identifies a match between a reference file and another video, and automatically monetizes (transfers the ad revenue that the infringing user would get to the copyright owner), blocks, or tracks the allegedly infringing video on behalf of the individual who provided the reference file. However, several problems with the use of AI technologies prevail, including biases and the failure to understand nuance or context. For example, AI software being designed by London’s Metropolitan Police to detect images of child abuse kept flagging images of sand and deserts as inappropriate. Another app kept flagging photos of dogs or doughnuts as nude images. Even when it comes to copyright filtering and protection, a number of these AI systems flag content by the same creator, licensed works, or even works under fair use as copyright violations. Currently, no filtering and moderation systems are entirely dependent on the deployment of AI technologies. Rather, as has been documented by several researchers (and by the admission of the companies themselves), platforms use a combination of human moderators and these tools to carry out moderation decisions. There is a considerable lack of transparency from these platforms, however, regarding the extent to which these tools are used to supplement human moderation decisions, as the subsequent sections demonstrate.
A balancing act
Advantages of using AI
Improving the efficiency of content moderation
There is a particular reason for the widespread nudge towards the adoption of AI for content moderation.
The scale of content, including a vast amount of objectionable content, being uploaded to the internet is arguably more than any team of human moderators can flag and take down. AI and its related technologies therefore promise a sort of scalability of adoption – that is, the potential to be adopted at a large scale to match the volume of content being uploaded online. This, in turn, makes them an efficient alternative, or a supplementary option, to human moderators. For instance, natural language processing (NLP) is another manifestation of how AI technologies can be used in content moderation. In this process, the system is trained to parse text with the aim of discerning whether the text is negative or positive. In the context of content moderation, therefore, an NLP system can be trained to determine whether a particular piece of ‘speech’ belongs to a given class of ‘illegal’ content or not. One of the primary advantages of NLP systems is their scalability, which makes them useful tools to deploy for filtration on social media platforms.
Reducing the trauma of human moderators
Investigations in the past have revealed that the task of human moderation is often outsourced by online companies to third-party firms, and the moderators themselves are forced to work in inhospitable conditions. Additionally, in the course of reviewing and flagging ‘improper’ content in order to keep it out of view of users, these moderators are exposed to swathes of violent and abusive content, leading to long-term emotional, psychological and mental trauma and, in some cases, the development of PTSD. In light of this, it has been argued that the utilization of AI technologies could reduce levels of exposure to violent or traumatic content online.
One way of doing so, as the British regulator Ofcom suggests, is by way of object detection and scene understanding techniques, which would hide the most damaging areas of a flagged piece of content from the primary view of the moderators. According to this technique, “If further information is required, the harmful areas be gradually revealed until sufficient evidence is visible to determine if the content should be removed or not.”
Disadvantages of using AI
AI in copyright management
Though the use of DRM software, such as that used by YouTube for copyright enforcement, helps copyright owners easily take down pirated or copyright-infringing material, it also fails in a number of instances. These include removing content that has been legally licensed, was posted by the copyright owner, or falls under fair use. One such example is the video stream of Neil Gaiman’s acceptance speech at the Hugo Awards being interrupted because the software flagged images from the show Doctor Who as copyright-infringing. These clips triggered the DRM software used by Ustream.com, the website responsible for carrying the Hugo Awards stream. However, the organisers had obtained a licence to use the images. This prevented a number of people from viewing the acceptance speech online. Another example is a video of professor Lawrence Lessig’s lecture being taken down by YouTube because it included five extracts from a song whose copyright was owned by Liberation Music. Lessig later filed his own copyright complaint, seeking declaratory judgment of fair use and damages for misrepresentation. The lawsuit was eventually settled in February 2014, with Liberation agreeing to pay damages. These are just a few of the many instances where DRM systems have unfairly removed content.
The issue with the use of AI or other DRM software to remove content is the speed at which this happens, and the difficulty of finding out the exact reason for the takedown. Lessig’s example also shows how tedious the counter-notice process is, even for a person of prominence and expertise.
AI, accuracy and bias
The other major area of concern that the use of automated tools gives rise to is the question of inherent biases being embedded in the technological system, leading to instances of inaccurate moderation and undue censorship. Supervised learning systems are one of the methods by which content moderation is carried out; they are trained on labelled datasets. This means that the system is taught that if in ten instances an input X yields the output Y, then the eleventh time the system encounters X, it should automatically give the output Y. Translated into the content moderation domain: if a supervised learning system is taught that nudity is bad and should be taken down, then the next time it encounters nudity, it will automatically flag it. This is how the earlier-mentioned images of sand and deserts came to be flagged as nudes. However, the process is not as simple as it sounds. For a large swathe of content online, numerous contextual cues may act as mitigating factors, none of which an automated system can be expected to understand at this juncture. As a result, AI tools have in the past flagged posts by journalists and human rights activists who were attempting to archive instances of humanitarian violence. Additionally, the training datasets fed into the development of the system may also reflect the bias of the developer, or embed inherent, unintentional values within the dataset itself. For instance, Google’s photo recognition algorithms, which tagged photos to describe their content, accidentally tagged a Black man as a gorilla.
This reflects a two-fold bias in the development of the algorithm itself: one, the training dataset on which the system was fed did not have the requisite diversity; and two, the development team did not have enough diverse representation to flag the lack of a diverse dataset and approach.

Recommendations

With a steady flow of content, legal and illegal, being viewed, sent and received by internet users, governments and companies alike want to filter and moderate it. However, using technologies such as AI for content moderation creates, along with increased surveillance, particular issues around unfair content flagging and removal, without useful means of counter notice. This is exacerbated by the power difference between an individual and the corporation or government that is using the software. One way to make the use of these systems fairer to users would be to introduce human intervention when a counter notice of appeal is sent: a human reviewer could examine the defence raised in the counter notice and decide whether the original decision was fair. Another way of addressing the above-mentioned flaws with automated tools, while preserving their beneficial use cases, is to demand better transparency standards from companies, governments or any other entities using such technology. Most internet companies publish regular transparency reports documenting various facets of their content moderation practices. This can include, for instance, informing users how many pieces of content were taken down on grounds of being ‘hate speech’, ‘extremism’ or ‘bullying’. Such reports can accordingly be extended to include more information about how automated filtering, flagging and blocking works. YouTube (through Google), for instance, has begun to include data on automated flagging of misleading Covid-19-related information.
Additionally, it is also recommended that these reports include more qualitative information about the kinds of technology adopted. As discussed in the previous sections, AI in content moderation encompasses a broad variety of technological tools, each with its own advantages and disadvantages, and there continues to be plenty of opacity regarding how each of these tools is administered. More disclosure by internet companies and governments alike regarding the nature of the technological tools they intend to use, coupled with quantitative information around their enforcement, informs both users and researchers about the efficacy of these tools, allowing for better decision-making processes. This also addresses the aforementioned power difference between the individual and the corporation by lessening the information asymmetry.
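The supervised-learning flagging described in this article can be illustrated with a toy sketch. This is a hypothetical, deliberately simplified classifier (all labels and training data are invented, and real systems use far richer models); it is only meant to show why a system trained on labelled examples flags by surface similarity and cannot see context such as journalism or the archiving of evidence.

```python
# Toy supervised flagger: it "learns" that certain tokens co-occur with
# the "remove" label, then applies that association to unseen posts with
# no awareness of context. All data here is invented for illustration.
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs, label in {"keep", "remove"}."""
    counts = {"keep": Counter(), "remove": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Score each label by how often its training texts contained the tokens.
    scores = {label: sum(c[tok] for tok in text.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

training = [
    ("graphic violence footage", "remove"),
    ("violence threats attack", "remove"),
    ("holiday photos beach", "keep"),
    ("recipe blog post", "keep"),
]
model = train(training)

# A human rights activist archiving evidence is flagged exactly like the
# original harmful content, because the model only sees token overlap:
print(classify(model, "documenting violence against civilians"))  # remove
```

The point of the sketch is the last line: the word-level association that makes the system fast is also what makes it blind to mitigating context.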
https://mcalaw.in/?p=114
The Current website is an emerging and open knowledge base created and curated by its community of members. We gather resources, collections, reflections, inquiries, and stories about what it means to teach writing in our digital and interconnected world. The Current has a Creative Commons Attribution-ShareAlike 4.0 International license in order to support creative sharing and distribution of content. We are influenced by the idea of creating community through the establishment of a shared commons of work as well as through conversations with colleagues at Creative Commons and P2PU.

1. Disclaimer

Content on this Site, including profiles, resources, collections, discussions and feedback, expresses the research and opinions of the Service participants, and does not necessarily reflect the views of NWP. The editors of this Service do not typically edit participants’ posts or other content and cannot confirm the accuracy or reliability of their submissions. Each participant is solely responsible for the information, analysis and/or recommendations contained in their submitted content.

2. Moderation Policy

NWP is hosting these Services to help build the field of digital writing, teaching and learning. We intend for comments and content posted in this forum to be individually authored, collegial, thoughtful, and intellectually challenging or provocative—and also to meet basic standards of respect and accuracy in tone and content. Posted content and comments on this Site are subject to review and moderation by NWP to maintain the respect required for this forum to remain productive and respectful. Participants may also flag comments or content for administrative NWP review. Suggested guidelines are provided for both posting and responding to content within this site. We encourage the use of these guidelines in helping to establish shared practices and collegial dialogue within The Current.
Voicing differences of opinion and providing countervailing information or analysis within the context of the site’s intent are welcomed—however, hate speech and attacks of a personal nature will not be tolerated. NWP maintains the right to edit or remove posted content or comments, at its discretion, if said content or comments are unrelated to the general area of interest of the website, or a comment is unrelated to the post to which it is responding; if content or comments advertise products, services, or websites rather than making points; or if content or comments include racial, gender-based, or sexualized slurs or attacks, aggressive harassment or abuse of content authors or fellow site members, or threats of violence.

4. Conduct

Posting

You may not use the NWP name to endorse or promote any product, opinion, cause, or political candidate. Representation of your personal opinions as institutionally endorsed by NWP or any of its writing project sites is strictly prohibited. By posting any content, you warrant and represent that you either own or otherwise control all of the rights to that content, including, without limitation, all the rights necessary for you to provide, post, upload, input, or submit the content, or that your use of the content is a protected fair use. You agree that you will not knowingly and with intent to defraud provide material and misleading false information. You represent and warrant also that the content you supply does not violate these Terms and that you will indemnify and hold NWP harmless for any and all claims resulting from content you supply. When posting any content that is created by or features students under the age of 18, you also warrant and represent that you have all the necessary permissions required by your district or institution’s acceptable use policy and that you are in compliance with protections afforded minors under U.S. law.
Furthermore, you warrant and represent that you have protected the privacy of minors in your content, including removing student names and other identifiers as per U.S. law (i.e. CIPA/COPA) and your institution’s acceptable use policy. You acknowledge that NWP does not pre-screen or regularly review posted content but that it shall have the right to remove in its sole discretion any content that it considers in violation of these Terms.

Accessing

You understand that all content posted to the Site is the sole responsibility of the individual who originally posted the content. You understand, also, that all opinions expressed by users of this site are expressed strictly in their individual capacities and not as representatives of NWP or their writing project sites. You agree that NWP or affiliated writing project sites will not be liable, under any circumstances and in any way, for any errors or omissions, loss or damage of any kind incurred as a result of use of any content posted on this site. You agree that you must evaluate and bear all risks associated with the use of any content, including any reliance on the accuracy, completeness, or usefulness of such content.

Children

Collecting personal information from children under the age of thirteen is prohibited. No Content should be directed toward such children without the express written permission of the NWP.

5. Disclaimer of Warranties and Limitation of Liability

This site is provided on an “as is” and “as available” basis. NWP makes no representations or warranties of any kind, expressed or implied, as to the site’s operation or the information, content or materials included on this site. To the full extent permissible by applicable law, NWP hereby disclaims all warranties, express or implied, including but not limited to implied warranties of merchantability and fitness for any particular purpose. NWP will not be liable for any damages of any kind arising from the use of or inability to use this site.
You expressly agree that you use this site solely at your own risk.

8. Copyright Complaints

NWP respects the intellectual property of others, and requires that our users do the same. If you believe that your work has been copied and is accessible on this site in a way that constitutes copyright infringement, or that your intellectual property rights have been otherwise violated, please contact NWP to report copyright infringements.

Opting In/Out of Email

The Current members can individually control how they receive information from The Current website by editing notifications. Default settings have been established so that if you are the author of a resource, feedback request or comment, you will receive email about any responses. This can be changed via your profile and account notification settings.

Online Postings to Discussions/Forum

NWP discussions and forums provide our members with a valuable resource for sharing their experiences and knowledge. Any communications made through the NWP site, including postings to discussion groups and forums made on the Web or via email, are understood to be publicly available and will not be afforded any privacy or limitation of distribution.

External Links

The Current website contains links to other independently run websites outside the “nwp.org” domain. NWP is not responsible for the security and privacy practices or content of these external websites.

Logs

To help improve the user experience on our site, our system keeps two logs. The first log collects the time, IP address, page requested, and user agent string (which tells us what browser is being used). This information is used for system analysis to determine which browsers are popular with our members, which pages are most popular with members, how much time members spend on average on our site, and other general information that can be used for research purposes and system optimization.
The second log records the time when a user logs into The Current, the IP address, and the user’s The Current account number. Designated NWP staff use the login time for official purposes, including helping users with support requests and identifying abandoned accounts. NWP does not use logs for commerce-related purposes or sell information collected by logs. Some NWP applications may log additional user activity, which is solely intended for internal NWP use.

Contact Information

If you have any questions, please contact us at [email protected].
https://thecurrent.educatorinnovator.org/terms-of-use
Authentise has worked with a number of additive manufacturing equipment providers, such as EOS and, through a recently announced partnership, SLM, to connect their devices to the Authentise 3Diax platform and Manufacturing Execution System (MES). Data can now be received from ARCAM, 3D Systems, EOS, SLM, Stratasys, and HP additive manufacturing devices, among others, with more to come. The information they provide is used to automate actions through Authentise MES. Examples of these actions include automatic order updates, in-depth traceability report creation, and the training of machine learning models that can, for example, improve the accuracy of cost, time and maintenance estimates. This reduces cost, improves reliability and increases output. The machine data is also available independently through the Machine Analytics Module of the Authentise 3Diax platform. The Module allows users of additive manufacturing technology to create and use their own additive manufacturing automation workflows or to tie the data back into existing IT systems such as Enterprise Resource Planning (ERP) tools. Machine and software suppliers may also use the Machine Analytics Module to create and distribute their own Industry 4.0 solutions.

"In many ways additive manufacturing is not taking advantage of its digital opportunity"
https://www.prlog.org/12691369-authentise-now-leader-in-additive-manufacturing-data-connectivity.html
The Content Moderation core module was marked stable in Drupal 8.5. Think of it like the contributed module Workbench Moderation in Drupal 7, but without all the Workbench editor Views that never seemed to completely make sense. The Drupal.org documentation gives a good overview. Content Moderation requires the Workflows core module, allowing you to set up custom editorial workflows. I've been doing some work with this for a new site for a large organization, and have some tips and tricks.

Less Is More

Resist increases in roles, workflows, and workflow states, and make sure they are justified by a business need. Stakeholders may ask for many roles and many workflow states without knowing the increased complexity and likelihood of editorial confusion that results. If you create an editorial workflow that is too strict and complex, editors will tend to find ways to work around the system. A good compromise is to ask that the team try something simple first and add complexity down the line if needed. Try to use the same workflow on all content types if you can. It makes a much simpler mental model for everyone.

Transitions are Key

Transitions between workflow states will be what you assign as permissions to roles. Typically, you'll want to lock down who can publish content, allowing content contributors to create new drafts only. You might want some paper to map out all the paths between workflow states that content might go through. The transitions should be named as verbs. If you can't think of a clear, descriptive verb that applies, you can go with "Set state to %your_state" or "Mark as %your_state." Don't sweat the names of transitions too much though; they don't seem to ever appear in an editor-facing way anyway. Don't forget to allow editors to undo transitions. If they can change the state from "Needs Work" to "Needs Review," make sure they can change it back to "Needs Work."

You Must Allow Non-Transitions

Make sure the transitions include non-transitions.
The transitions represent which options will be available for the state when you edit content. In the default core configuration, for example, it is not possible to edit archived content and maintain the same state of archived. You'd have to change the status to published and then back to archived. In fact, it would be very easy to accidentally publish what you had archived, because editing the content will set it back to published as the default setting. Therefore, make sure that draft content can stay as draft when edited, etc.

Transition Ordering is Crucial

Ordering of the transitions here is very important because the state options on the content editing form will appear as a select list of states ordered by the transition order, and it will default to the first available one. If an editor misses setting this option correctly, they will simply get the first transition, so make sure that first transition is a good default. To set the right order, you have to map each state to what should be its default value when editing. You may have to add additional transitions to make this all make sense. As for the ordering of workflow states themselves, this will only affect ordering when states are listed, for example in a Views exposed filter of workflow states or within the workflows administration.

Minimize Accidental Transitions

But why wouldn't my content's workflow state stay the same by default when editing the content (assuming the user has access to a transition that keeps it the same)? I have to set an order correctly to keep a default value from being lost? Well, that's a bug as of 8.5.3 that will be fixed in the next 8.5 bugfix release. You can add the patch to your composer.json file if you're tired of your workflow states getting accidentally changed.

Test Your Workflow

With all the states, transitions, transition ordering, roles, and permissions, there are plenty of opportunities for misconfiguration even for a total pro with great attention to detail like yourself.
Make sure you run through each scenario using each role. Then document the setup in your site's editor documentation while it's all fresh and clear in your mind.

What DOES Published EVEN MEAN ANYMORE?

With Content Moderation, the term "published" now has two meanings. Both content and content revisions can be published (but only content can be unpublished). For content, publishing status is a boolean, as it has always been. When you view published content, you will be viewing the latest revision that is in a published workflow state. For a content revision, "published" is a workflow state. Therefore, when you view the content administration page, which shows you content, not content revisions, status refers to the publishing status of the content, and does not give you any information on whether there are unpublished new revisions.

Where's my Moderation Dashboard?

From the content administration page, there is a tab for "moderated content." This is where you can send your editors to see if there is content with drafts they need to review. Unfortunately, it's not a very useful report since it has neither filtering nor sorting. Luckily, work has been done recently to make the Views integration for Content Moderation/Workflows decent, so I was able to replace this dashboard with a View and shared the config.

Reviewer Access

In a typical editorial workflow, content editors create draft edits and then need to solicit feedback and approval from stakeholders or even a legal team. To use Content Moderation, these stakeholders need to have Drupal accounts and log in to look at the "Latest Revision" tab on the content. This is an obstacle for many organizations because the stakeholders are either very busy, not very web-savvy, or both. You may get requests for a workflow in which content creation and review takes place on a non-live environment, and then require some sort of automated content deployment process.
Content deployment across environments is possible using the Deploy module, but there is a lot of inherent complexity involved that you'll want to avoid if you can. I created an Access Latest module that allows editors to share links with an access token that lets reviewers see the latest revision without logging in.

Log Messages BUG

As of 8.5.3, you may run into a bug in which users without "administer content" permission cannot add a revision log message when they edit content. There are a few issues related to this, and the fix should be out in the next bugfix release. I had success with this patch and then re-saving all my content types.
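The transition rules discussed above (including non-transitions and first-transition-as-default ordering) can be modeled as a simple state machine. This is an illustrative Python sketch, not Drupal API code; the workflow states and transition names are hypothetical examples in the spirit of the article.

```python
# Hypothetical editorial workflow as data. Each entry is
# (name, from_states, to_state); list order matters, because the first
# allowed transition is the pre-selected default on the edit form.
TRANSITIONS = [
    ("Keep as draft",   {"draft"},                 "draft"),         # non-transition
    ("Keep in review",  {"needs_review"},          "needs_review"),  # non-transition
    ("Send to review",  {"draft", "needs_work"},   "needs_review"),
    ("Mark needs work", {"needs_review"},          "needs_work"),
    ("Publish",         {"draft", "needs_review"}, "published"),
    ("Archive",         {"published"},             "archived"),
    ("Keep archived",   {"archived"},              "archived"),      # non-transition
]

def allowed(state):
    """Transitions offered on the edit form for content in `state`."""
    return [(name, to) for name, froms, to in TRANSITIONS if state in froms]

def default_target(state):
    """The pre-selected option: the first allowed transition in order."""
    options = allowed(state)
    return options[0][1] if options else state

# Because the non-transitions are ordered first, an editor who ignores
# the state field does not accidentally publish or un-archive content:
print(default_target("draft"))     # draft
print(default_target("archived"))  # archived
```

Remove the "Keep archived" entry and `default_target("archived")` falls through to whatever transition happens to be first, which is exactly the accidental re-publish scenario the article warns about.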
https://www.zivtech.com/blog/drupal-8-content-moderation-tips-tricks
IT Services Q4 Earnings Review - Strong Growth Momentum In Key Sectors: Prabhudas Lilladher

BQ Blue’s special research section collates quality and in-depth equity and economy research reports from across India’s top brokerages, asset managers and research agencies. These reports offer BloombergQuint’s subscribers an opportunity to expand their understanding of companies, sectors and the economy.

Prabhudas Lilladher Report

Tier-I IT Services’ revenue growth in Q4 FY21 was a tad lower, at 3.7% QoQ in U.S. dollar terms, compared to the previous two quarters. Three out of five tier-I IT companies reported a miss on revenue estimates. However, we continue to be confident of a strong demand environment; the miss is only due to moderation of revenue growth after two quarters of strong growth. Revenue growth was led by strong demand momentum in banking, financial services and insurance, and sustained recovery in the manufacturing and retail sectors. The communications vertical was weak for all companies.

Click on the attachment to read the full report:

DISCLAIMER

This report is authored by an external party. BloombergQuint does not vouch for the accuracy of its contents nor is responsible for them in any way. The contents of this section do not constitute investment advice. For that you must always consult an expert based on your individual needs. The views expressed in the report are those of the author entity and do not represent the views of BloombergQuint. Users have no license to copy, modify, or distribute the content without permission of the Original Owner.
https://www.bloombergquint.com/research-reports/it-services-q4-earnings-review-strong-growth-momentum-in-key-sectors-prabhudas-lilladher
The Telecommunications Policy Review Panel was established by the Minister of Industry on April 11, 2005, to conduct a review of Canada's telecommunications framework. The Panel was asked in particular to make recommendations on:

1. how to implement an efficient, fair, functional and forward-looking regulatory framework that serves Canadian consumers and businesses, and that can adapt to a changing technological landscape,
2. mechanisms to ensure that all Canadians continue to have an appropriate level of access to modern telecommunications services,
3. measures to promote the development, adoption and expanded use of advanced telecommunications services across the economy.

The Panel reviewed Canada's telecommunications policy and regulatory framework and made recommendations on how to make it a model of 21st century regulation. The Final Report of the Telecommunications Policy Review Panel 2006 is available here.

© Copyright 2017, ITU

The information presented within this blog comes from various organizations around the world. ITU encourages users to seek more detailed information from the original source through the links provided. Links to third-party websites are provided for the convenience of all users. The ITU is not responsible for the accuracy, currency or the reliability of the content on these third-party websites. ITU does not offer any guarantee in that regard nor does ITU endorse the third-party organizations, their sites or content.
http://www.itu.int/ITU-D/cyb/newslog/Canada+Final+Report+Of+Telecommunications+Policy+Review+Panel+2006.aspx
What are the main issues with content moderation today?

A recent report published by NYU shows that there were over 3 billion pieces of content on Facebook (in the first quarter of 2020) that content moderators were responsible for checking: removing content, or providing a warning ‘cover’ over disturbing content before viewing. Facebook founder and CEO Mark Zuckerberg reported in a 2018 whitepaper that Facebook’s review teams “make the wrong call in 1 out of 10 cases”, which can be a result of relying on AI to identify harmful content, or of the pressure on, and lack of training for, moderators. With this type of role comes a great deal of pressure and responsibility to ensure the safety of the community, 24/7 (2.6 billion active users daily). One of the main issues content moderators face today is the hundreds of items they are required to moderate within a six- to eight-hour shift. Therefore, expertise is essential, as it is up to content moderators to act with governance to uphold high standards. The platform is not responsible for the content itself, as users have the freedom of ‘free speech’; the onus is on the moderators to control the obscenity shown to them. The second issue is the pressure of fulfilling this number of items to moderate. Setting high targets and efficiency rates can prove unattainable and can diminish performance, mental health and wellbeing.

Recommendations from NYU

The NYU report discusses recommendations that major social media platforms can adopt to improve their content moderation. While the main theme of the report is constructed on the basis of “A call for outsourcing”, we can conversely demonstrate that outsourcing is instrumental to content moderation, and show how we align with the recommendations outlined in the report.
Human-first approach when outsourcing content moderation

At Webhelp, we know many mistakes have been made concerning content moderation services, so when we entered this ‘community service’ we decided to adopt a completely different approach – 74% of our operators recommend Webhelp as an employer (NPS).

Investing in people

A human-first approach to content moderation is Webhelp’s understanding that people’s mental health and wellbeing are not to be disregarded when managing afflictive content. Wellness is our differentiator, enabled through our Webhealth Wellness Programme:

- Mental Health Awareness training is provided for managers to recognise symptoms of stress, and the coping mechanisms to support colleagues.
- We provide a safe working environment to ensure colleagues have a sense of security, trust, and reliability.
- We give access to certified psychologists, counsellors, and trained coaches to support content moderators with mental, physical, financial, and nutritional health.

Wellbeing Analytics to take proactive action

As part of our approach to content moderators and their mental health, we monitor their performance using Wellbeing Analytics. Using this tool enables us to identify issues through a combination of observing colleagues and using data analytics and machine learning for proactive action. Team leaders and coaches have daily updates on colleagues’ MTI scores, which indicate how colleagues are performing; this allows supervisors to take appropriate actions to support them, for example reworking a shift or allowing longer breaks – 100% of our operators moderating sensitive content have shorter shifts, which achieves up to 4 points of attrition reduction.

Improving content moderation

Managing content moderation is not to be taken lightly. It requires expertise and knowledge about this area, and an understanding that there is a balance between the impact it has on individuals’ wellbeing and the value it adds to first and third parties.
Outsourcing content moderation is a way in which social media companies can employ experts within that field to deliver outcomes and improve performance. NYU has argued that content moderation should not be outsourced because outsourcing neglects moderators’ health and wellbeing; as we have demonstrated above, we have a strong focus on this. Not all outsourcing is conducted by ‘customer service centres’ that exploit their teams without support. On the contrary, taking a human-first approach with our Webhealth programme and Wellbeing Analytics tool enables colleagues to develop their understanding of mental health and is essential in providing a safe, healthy environment for moderators.
https://webhelp.com/news/outsourcing-content-moderation-adding-value-to-first-and-third-parties-with-a-human-first-approach-2/
On 15 December 2020, the European Commission announced and published two legislative proposals focused on the digital single market. These proposals form part of the European Commission’s Digital Single Market Strategy, which it commenced in 2015. Among the proposals was the Digital Services Act (DSA), the aim of which is to ensure “a safe, predictable and trusted online environment”. The proposal states that “the use of [internet intermediary services] has…become the source of new risks and challenges, both for society as a whole and individuals using such services”. The DSA is thus the Commission’s latest attempt at addressing the perceived risks arising from internet platforms by augmenting the existing regulatory framework, in particular the E-Commerce Directive (ECD). Before the introduction of the ECD, Member States regulated online intermediaries in their own way, which at the time was only a novel and relatively small sector of the economy. Eventually, the Commission, in an attempt to harmonise these rules, introduced the ECD, which included liability protections for internet platforms. The original rationale for the ECD echoed the legal developments taking place in the US in the 1990s. Both the Directive and section 230 of the Communications Decency Act of 1996 were based on the idea that internet platforms were mere conduits in the information age. So long as they remained in their passive roles, there was a lack of an equitable basis to make such platforms liable for the activity of their users. It was this kind of thinking that led to the creation of the ‘safe harbour’ limited liability regime contained in the ECD. Under Article 14, internet platforms are not liable for illegal content uploaded to their platforms by their users. This is as long as the platform does not have actual knowledge of such illegality and, when that illegality is brought to its attention, it acts expeditiously to remove or disable access to that content.
In addition, Article 15 states that even where an internet platform is required to remove or disable access to illegal content, it cannot be subject to a general obligation to monitor the content uploaded to its platform by users. In other words, there is no requirement to proactively detect infringing content and ensure its removal. Articles 14 and 15 of the ECD together make up the safe harbour: a regime of limited regulatory interference which has resulted in a broad freedom of commerce facilitating the growth of today’s tech giants. Yet, under this regime, such platforms have not only assumed great market domination with incredibly lucrative success, but also a tremendous amount of political power and influence, sometimes even surpassing that of a State. Never before has so much of our conversations and cognitive exchanges been “mediated and moderated by private technology firms”. But even in the midst of this grand transformation, two views could be had. On the one hand, internet platforms, in order to protect the individual right to freedom of expression online, “should be protected from measures requiring entries to be erased from indexes, choke-points to be inserted in pipes, and filters to be installed in hosts”. On the other hand, “they profit from the creative endeavours of others, ignore wider social responsibilities, wilfully turn a blind eye to unlawful content coursing through their systems and refuse to install or close gates that would staunch the flow”. It is this latter view that would seem to have captured the imaginations of those in Brussels and led to the creation of the Digital Single Market Strategy. Thus, the DSA marks a significant turning point for the regulation of internet platforms in the EU.
The proposal “seeks to ensure the best conditions for the provision of innovative digital services in the internal market, to contribute to online safety and the protection of fundamental rights, and to set a robust and durable governance structure for the effective supervision of providers of intermediary services”.

A Layered Approach

The application of the rules contained in the DSA takes a layered approach based on the type and size of the internet intermediary in question. Accordingly, different obligations are placed on different intermediaries. The starting point for this differentiation is Article 2, which contains the definitions under the Regulation. Under paragraph (f) of that Article, an “intermediary service” can refer to one of three types of entities. A “mere conduit” is a service that involves the transmission of information through a communication network or the provision of access to a communication network. A “caching” service means a service involving the transmission of information via a communication network entailing the automatic, intermediate and temporary storage of that information. Examples of entities falling within these categories include internet service providers or domain name registrars. The third type of entity is a “hosting” service. This is a service that, simply put, involves the storage of user-generated content (UGC). This would include, for instance, cloud or web hosting services like Amazon Web Services or WordPress.com (on which this website is hosted). Article 2(h) develops this concept further with the definition of an “online platform”: a provider of a hosting service that stores and disseminates UGC to the public. This is unless the service is ancillary to the provision of another service, such as the comments section of a news website. Social media platforms would thus certainly fall within the definition of an online platform for the purposes of the DSA.
Article 25 then adds a further layer with its definition of “very large online platforms” (VLOPs): online platforms that provide their services to a number of average monthly active users of the service in the Union equal to or higher than 45 million. An obvious example of this would be Facebook which, in the second quarter of 2020, had over 400 million monthly active users in the EU. The obligations under the Regulation, for the most part, are thus predicated on these definitions, with VLOPs being subjected to the most stringent rules.

Territorial Application

The application of the DSA is determined by Article 1(3). That provision states that the Regulation applies to intermediary services provided to users residing in the EU. This is regardless of where the service provider may be based. This echoes the extraterritorial reach of the GDPR and once again signals the EU’s desire to regulate the digital realm beyond its own borders. The question of whether an intermediary can be considered to be providing its service to EU users is determined by the “substantial connection” test. A substantial connection between the intermediary and EU users can be demonstrated by either the existence of a significant number of EU users or the targeting of activities towards the EU. Such targeting activities could include, for example, the availability of an application in the relevant national application store. However, the mere accessibility of a website from the EU cannot, on this ground alone, constitute a substantial connection with the EU. Thus, the DSA is capable of applying to the many US intermediaries based outside of the EU. In terms of the allocation of enforcement responsibilities among the Member States, this depends on whether a Member State has the necessary jurisdiction. Under Article 40, there are three ways to determine whether a Member State has jurisdiction over an intermediary.
Firstly, and most straightforwardly, a Member State has jurisdiction over intermediaries with a main establishment located in that Member State. A main establishment is the place where an intermediary has its head office or registered office and where the principal financial functions and operational control are exercised. Secondly, where an intermediary is not based in the EU but offers its services in the EU, the Member State where its legal representative resides or is established has jurisdiction. Intermediaries outside of the EU without a main establishment are responsible for designating a legal representative who deals with compliance issues directly with the Member State authorities and the other bodies that can enforce the Regulation. Thirdly, where an intermediary is not based in the EU and has not appointed a legal representative, all Member States in the EU have jurisdiction to enforce the Regulation against that intermediary. However, the public international law principle of ne bis in idem applies, which means that nobody should be judged twice for the same offence. Thus, Member States must cooperate with each other when enforcing the DSA in this context.

Get Your House in Order

One of the most notable aspects of the proposed DSA is its content moderation rules. Under the Regulation, “content moderation” means the detection, identification and addressing of illegal content or content that infringes an intermediary service’s terms and conditions (T&Cs). This includes adjusting the availability, visibility or accessibility of content. The DSA therefore contemplates content moderation to involve the demotion, disabling of access or removal of content, and even the termination of a user’s account. Under the DSA, such content moderation can, essentially, be carried out by intermediaries on two different bases.
The first basis is voluntary: under Article 6, intermediaries may engage in voluntary own-initiative investigations to detect, identify and remove or disable access to illegal content. However, when dealing with UGC that infringes the T&Cs of the service, intermediaries must act in a diligent, objective and proportionate manner in applying and enforcing such policies. This includes having due regard to the rights and legitimate interests of all parties involved, together with the fundamental rights of users under the EU Charter of Fundamental Rights. To support such efforts, intermediaries engaging in voluntary content moderation do not lose the limited conditional liability codified under the DSA, which closely resembles the safe harbour under the ECD. Thus, under Article 5(1), hosting services (which include online platforms and VLOPs) will not be liable for illegal UGC if they do not have actual knowledge of the illegal content and, upon obtaining such knowledge, act expeditiously to remove it. In addition, Article 7 provides that no intermediary service is required to monitor content on its platform or actively to seek facts or circumstances indicating illegal activity. The imposition of limited conditional liability with regard to voluntary content moderation has been dubbed the ‘Good Samaritan principle’. This is where “online intermediaries are not penalized for good faith measures against illegal or other forms of inappropriate content”. The second basis on which content moderation can take place is where such content moderation is mandatory under the Regulation. Under Article 8, intermediaries must comply with orders from judicial or national authorities to take down illegal content. In doing so, the intermediary must, without undue delay, inform the authority giving the order of how it has given effect to the order, specifying the action taken and when it was taken.
Under Article 14, hosting service providers must implement mechanisms allowing any of their users to notify the service provider of content on the platform that may be illegal. This notice-and-action mechanism (N&A) must be easy to access, user-friendly, and allow for the submission of notices exclusively by electronic means. The notice submitted by a user must include, among other things, an explanation as to why they believe the content to be illegal and a statement confirming their good faith belief that the information and allegations contained in the notice are accurate and complete. Such notices constitute actual knowledge for the purposes of Article 5(1), and the hosting service provider must inform the user making the submission of the decision taken without undue delay. In addition, such notices must be processed in a timely, diligent and objective manner. The N&A mechanism is a clear augmentation of the ECD provisions, prescribing in greater detail how intermediaries are to deal with illegal content that is flagged to them by users. However, it is interesting to note that the N&A mechanism stipulated under Article 14 does not apply to content that may infringe an intermediary service’s T&Cs; it applies only to illegal content as defined under the DSA. This suggests that intermediaries are free to enforce their T&Cs as they see fit under the guise of voluntary content moderation, and thus benefit from the limited conditional liability applied to such activity. This leads into one of the problems with the content moderation rules in the Regulation, in particular the Good Samaritan principle under Article 6. As the Center for Democracy and Technology (CDT) points out, the DSA “shields intermediaries from liability for their own efforts to remove illegal content but exposes them to liability based on mere assertions by anyone”.
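The minimum contents of an Article 14 notice described above could be modelled as a small data record. This is a hypothetical sketch only: the field names are my own invention, not prescribed by the DSA, and a real notice form would carry more detail (such as the notifier’s contact details):

```python
# Hypothetical sketch of the minimum contents of an Article 14 notice;
# all names below are illustrative assumptions, not DSA terminology.

from dataclasses import dataclass

@dataclass
class Notice:
    content_url: str            # exact location of the allegedly illegal content
    explanation: str            # why the notifier considers the content illegal
    good_faith_statement: bool  # confirmation the notice is accurate and complete

def is_actionable(notice: Notice) -> bool:
    """A sufficiently substantiated notice gives rise to 'actual knowledge'
    for the purposes of Article 5(1), obliging an expeditious decision."""
    return bool(notice.content_url
                and notice.explanation
                and notice.good_faith_statement)
```

On this sketch, an empty or unsubstantiated submission would simply not qualify as a notice capable of triggering the host’s obligations.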
Under the proposal, the light-touch approach to liability embodied by the ECD is applied when intermediaries carry out content moderation of their own accord. This gifts them the ability to exercise a broad discretion as to how they may deal with content infringing their own T&Cs, or even illegal content that they detect independently. However, a greater problem with the content moderation rules under the DSA is that, ultimately, private actors will be assuming State-like responsibilities in policing their platforms. In particular, intermediaries will be playing three distinct roles: they will be acting like a legislature when “defining what constitutes legitimate content on their platforms”; they will be acting like judges “who determine the legitimacy of content in particular instances”; and they will be acting like administrative agencies “who act on [their own] adjudications to block illegitimate content”. The problem with intermediaries taking on these roles, especially the judicial role, is that there exists a conflict of interest. On the one hand, internet intermediaries “are commercial players which compete in data capitalist markets for users, business partners, and data-driven innovation”. On the other hand, they are required to use their technical capabilities to moderate activity on their platforms, which for social media platforms inevitably involves the regulation of people’s speech. As such, the DSA potentially “blurs the distinction between private interests and public responsibilities”. Furthermore, intermediaries are becoming increasingly reliant on AI-powered content filtering systems to moderate their platforms at scale. However, such systems “effectively blend norm setting, law enforcement, and adjudication powers”. In particular, content filters are not always successful at detecting the nuances of UGC that may not necessarily render it illegal.
A common example of this is in relation to copyright, whereby considerations must be made as to whether certain UGC benefits from ‘fair use’ or another lawful exception under the relevant copyright law. Content filters may not always detect when these exceptions apply, and thus such “errors in algorithmic content moderation may result in censoring legitimate content, and sometimes also in disproportionately censoring some groups”. The controversies surrounding the delegation of public responsibilities to private actors are even more heightened in the context of online speech. In its proposal, the Commission states that ‘harmful’ content, while not necessarily constituting illegal content, will not be defined by the DSA nor be subject to removal obligations, since “this is a delicate area with severe implications for the protection of freedom of expression”. However, such regulation may nevertheless come through the backdoor due to the definition of “illegal content” provided in the DSA: it includes any information (either in itself or by reference to an activity, including the sale of goods or the provision of services) which is not in compliance with Union law or the law of a Member State. The potential problem here is that such a definition plugs the DSA into a body of caselaw from the European Court of Human Rights (ECtHR) that has, so far, lacked clarity on the question of so-called “hate speech”. More specifically, the Court has somewhat struggled with “the demarcation line between types of harmful expression that ordinarily are entitled to protection and the most harmful types of expression that attack the values of the [European Convention on Human Rights] and therefore do not enjoy protection”. All EU Member States are signatories to the European Convention on Human Rights (the Convention), which is thus binding on each of them.
Furthermore, the meaning and scope of the rights contained in the EU Charter must be the same as those laid down by the Convention in so far as the rights contained in either text correspond with each other. Article 10 of the Convention, as well as Article 11 of the Charter, states that everyone has the right to freedom of expression. This includes the freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. However, such a right is not without limitations, as it carries with it duties and responsibilities. Thus, free expression may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society on various legitimate grounds. For example, restrictions may be placed on free expression for the protection of health or morals or for the protection of the reputation or rights of others (eg defamation). The ECtHR “has by and large interpreted Article 10 expansively and in a way that is faithful to the broad principles of freedom of expression”. In other words, free expression is the default rule, whereas its limitations are exceptions which must be explored on a case-by-case basis. This has been applied even to offending, shocking or disturbing ideas, for such ideas must be allowed to circulate to ensure “pluralism, tolerance and broadmindedness without which there is no democratic society”. Hate speech, while not appearing anywhere in the text of the Convention, is a term referring to speech that is so vulgar and offensive that it cannot possibly warrant protection under Article 10. However, the regulation of hate speech by the ECtHR has not historically been carried out on the basis of Article 10. Rather, Article 17 of the Convention has been the source of the Court’s jurisprudence on such speech.
That provision states that nothing in the Convention shall be interpreted as allowing anyone to engage in any activity or perform any act aimed at the destruction of any of the rights and freedoms contained in the Convention, or at their limitation to a greater extent than is provided for in the Convention. Accordingly, Article 17 acts as a “safety valve that denies protection to acts that seek to undermine the Convention and go against its letter and spirit”. However, the criteria for using Article 17 of the Convention as a basis for suppressing hate speech have been somewhat ambiguous. In Delfi AS v Estonia, the ECtHR held that an online news portal can be liable for unlawful hate speech posted on its platform. However, while the Court was clear on the question of liability, it avoided the preliminary question of what constitutes hate speech. The content in that case directly advocated acts of violence and thus constituted hate speech which was deemed unlawful under Article 17. Even so, the lack of an analysis of the criteria for determining hate speech under that provision leaves the question open in relation to speech that does not directly advocate violence but may be considered ‘borderline’ or otherwise offensive. The balance that should be struck between Articles 10 and 17 of the Convention therefore remains to be clarified. Yet, the DSA proposes delegating that difficult question to internet intermediaries that are not necessarily focused on upholding the rule of law. A resulting concern is “the risk of over-censorship and the removal of content ‘to be on the safe side’ and to thereby avoid incurring liability for such content”. The actions of various tech companies in the aftermath of the Capitol Hill riots in January 2021 could be cited as an example of this.
One could question whether internet platforms removed former President Trump’s accounts after the riots on the basis of the illegality, or at least the immorality, of his actions, or rather on the basis that it was commercially expedient to do so given that other platforms were doing the same. There are, though, some provisions in the proposed DSA that could help to mitigate the problems arising from this. Firstly, Article 17 states that online platforms must provide users with access to an effective internal complaints handling system. It is through this system that aggrieved users should be able to, through electronic means and free of charge, lodge complaints in relation to content moderation decisions made by online platforms on the basis that the content in question was deemed to be illegal or infringed the platform’s T&Cs. This pertains to decisions for the removal or disabling of access to content, the suspension or termination of the service, or the suspension or termination of a user’s account. Online platforms must then reverse the decision made if the complaining user has presented grounds for doing so. Action must also be taken against users that abuse this complaint mechanism or the N&A mechanism under Article 14. Secondly, Article 18 provides for the possibility of out-of-court dispute settlements. Under this provision, certain certified bodies can resolve disputes relating to content moderation decisions made by online platforms. Such bodies must be impartial and independent, equipped with the necessary expertise, easily accessible through electronic communication technology, capable of settling disputes swiftly, effectively and in a cost-effective manner, and have clear and fair rules of procedure. Such a system is rather unprecedented and it will be interesting to see how it works in practice. Thirdly, Article 19 makes provision for so-called ‘trusted flaggers’.
These are essentially entities that can demonstrate the necessary competence, expertise and independence in tackling illegal content. For example, organisations committed to notifying illegal racist and xenophobic expressions online may be capable of being trusted flaggers. Online platforms must take the necessary technical and organisational measures to ensure that notices submitted by trusted flaggers relating to allegedly illegal content are processed and decided upon with priority and without delay.

Black Boxes No More?

In addition to the complaint mechanisms, the DSA also proposes rules for greater transparency by intermediaries. Such rules may also help to mitigate the problems arising from the judicial role assumed by intermediaries carrying out content moderation on their platforms. The starting point for this is Article 12. This provision states that intermediary service providers must include in their T&Cs information on any restrictions that they impose in relation to the use of their service in respect of UGC. This includes information on any policies, procedures, measures and tools used for the purpose of content moderation, including algorithmic decision-making and human review. The use of unambiguous language is required and the T&Cs must be publicly available. One criticism that could be raised, however, is that the DSA does not articulate specific information that must be included in the T&Cs in the way that the GDPR standardises the content of privacy notices. Specific to decisions made by intermediaries when carrying out content moderation is Article 13, which mandates the publication of content moderation reports. Such reports must be published at least once a year and provide a detailed, clear and easily accessible account of the content moderation activities carried out by the intermediary.
A wide range of information must be included in such reports, including the number of orders received from Member State authorities, the notices received from users, the number and type of measures taken, and the number of complaints received in respect of the measures taken (including the average time taken to process these complaints and whether any decisions were reversed). Under Article 15, where a provider of a hosting service takes a content moderation measure against a user, it must convey to the user the measure taken and a specific statement of the reasons for taking the measure. In particular, that statement must contain, inter alia, the facts and circumstances relied on in taking the decision, the legal provisions relied on if the UGC was considered to be illegal content, and the provisions of the T&Cs relied on if the UGC was in violation of those T&Cs. This information must be conveyed in a clear and easily comprehensible manner and be as precise and specific as reasonably possible under the given circumstances. The other provisions on transparency are specifically focused on online platforms as opposed to intermediary service providers in general. To begin with, Article 23 states that online platforms must produce transparency reports that contain further information in addition to that required for content moderation reports under Article 13. This further information includes the number of disputes submitted to the out-of-court dispute settlement bodies (Article 18) and their outcomes, the number of suspensions imposed on those abusing the complaints or N&A mechanisms (as per Article 20), and any use of automatic means for content moderation activities. The proposed DSA also contains specific provisions on online advertising in relation to online platforms and VLOPs respectively. Firstly, Article 24 stipulates that, whenever a user is shown an advertisement on an online platform, certain information must be made accessible to that user by the platform.
That information includes confirmation that the content being displayed is in fact an advertisement, the person on whose behalf the advertisement is displayed, and meaningful information about the main parameters used to determine the user to whom the advertisement is displayed (ie why the user was shown the particular advertisement on display). Separately, for VLOPs displaying advertising on their platform, Article 30 states that they must compile and make publicly available, through application programming interfaces (APIs), specific information. This includes the content of the advertisement, the person on whose behalf the advertisement is displayed, the period during which the advertisement was displayed, whether the advertisement was intended to be displayed specifically to one or more particular groups of users and, if so, the main parameters used for that purpose, and the total number of users reached with the advertisement and, where applicable, aggregate numbers for the group or groups of users to whom the advertisement was targeted specifically. Such information must be contained in a repository accessible through the API until one year after the advertisement was last displayed. Article 29 provides rules on recommender systems used by VLOPs. The DSA defines a “recommender system” as a fully or partially automated system used by an online platform to suggest in its online interface specific information to users of the service. This could be as a result of a search initiated by a user or other ways of determining the relative order or prominence of the information displayed to a user. VLOPs must set out in their T&Cs, in a clear, accessible and easily comprehensible manner, the main parameters used in their recommender systems, as well as any options for users to modify or influence those main parameters. Users must be provided with at least one option that is not based on profiling, the definition of which is borrowed from Article 4(4) of the GDPR.
The interface for this must provide accessible functionality where multiple options are given to users to adjust the recommender system. One could argue here that, on the basis of Articles 13(2)(f), 14(2)(g) and 22, the GDPR already requires controllers to provide such information and user control in relation to automated decision-making systems. Thus, there is a question as to how these clashing provisions under the DSA and the GDPR could be reconciled, if at all. Furthermore, these rules on recommender systems raise two other issues. Firstly, there may be barriers to explainability. For one, there is a question of whether recommender systems can be explained to users in an accessible manner, let alone whether users can be provided with options to modify how such systems work. Some of the recommendation engines deployed by the likes of TikTok or Instagram utilise deep learning algorithms, the interpretability of which may not always be straightforward (what has come to be known as the ‘black box’ problem, although this can be tackled to a certain extent). Also, recommendation engines often form a precious part of the intellectual property of online platforms. Accordingly, such platforms may be reluctant to explain the decision-making process of their algorithms in any great detail to users. Additionally, the rules on recommender systems in the DSA may be missing a trick in relation to internet content creators. These stakeholders engage in a novel type of economic activity that exists on the internet: generating revenue by displaying advertisements alongside the content they create. For example, content creators on YouTube, if they meet the required criteria, have the option to place advertisements in or around their videos. These advertisements are generated by YouTube and derive from various companies or brands that the platform may have branding arrangements with.
Where the advertisement placed by the content creator is viewed by a user, that creator receives a portion of the generated revenue. Generally, the more views a creator can attain, the greater the revenue they can generate. However, changes that YouTube makes to its content moderation algorithms, in particular any recommendation engines, can impact the views obtained by a creator and thus the revenue that they can generate. Thus, many creators “feel that their livelihoods hang at the whims of mysterious algorithms”. It is acknowledged in Recital (62) of the proposed DSA that such recommendation engines can have a significant impact on the ability of users to retrieve and interact with information. However, while the DSA does require transparency on the use of such recommendation engines, this transparency is only focused on content that is illegal or infringes the T&Cs. Such rules do not pertain to the modification of recommendation engines at the potential expense of content creators. This could end up being a significant oversight in the future.

Know Your Platform

Apart from the transparency requirements imposed by the DSA, the proposed Regulation also contains a number of risk management provisions exclusively aimed at VLOPs. Under Recital (56), it is stated that VLOPs are used in a way that strongly influences safety online, the shaping of public opinion and discourse, as well as online trade. Accordingly, without effective regulation and enforcement, such platforms may fail to identify and mitigate the risks and the societal and economic harms that can exist on their platforms. The risk management provisions under the DSA thus focus on obligations around both risk identification and mitigation. This starts with Article 26: VLOPs are required to carry out risk assessments on their platforms at least once a year. Such assessments must identify, analyse and assess any significant systemic risks stemming from the functioning and use made of their services in the EU.
Article 26 lists the systemic risks that should be identified in the assessment, which extend beyond the dissemination of illegal content to include two other broadly-worded systemic risks. Firstly, there are negative effects on the exercise of fundamental rights under the EU Charter, including Articles 7 (right to privacy), 11 (freedom of expression and information), 21 (non-discrimination) and 24 (the rights of the child). Secondly, there is the intentional manipulation of the service, including by means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security. There are thus a wide variety of systemic risks that VLOPs must identify on their platforms. This includes illegal hate speech, counterfeit products, methods for silencing speech or hampering competition, fake accounts and the use of bots. In conducting these risk assessments, VLOPs must consider how their content moderation systems, recommender systems and systems for selecting and displaying advertisements influence any of the systemic risks, including the potentially rapid and wide dissemination of illegal content and of information that is incompatible with their T&Cs. After identifying these systemic risks, Article 27 states that VLOPs must implement reasonable, proportionate and effective mitigation measures to address those specific risks. Such measures may include adapting content moderation or recommender systems, targeted measures aimed at limiting the display of advertisements, or reinforcing the internal processes or supervision used to detect systemic risks.
The measures listed in Article 27 are not exhaustive, but any other mitigation measures implemented by VLOPs must be effective and appropriate for the specific risks identified on the platform and be proportionate in light of the platform’s economic capacity and the need to avoid unnecessary restrictions on the use of their service. Article 28 stipulates a particularly onerous obligation on VLOPs that completes the risk management regime imposed on such platforms under the proposed DSA. That Article states that VLOPs shall be subject, at their own expense and at least once a year, to audits assessing compliance with the Regulation, including its transparency and due diligence obligations. The audit must be completed by organisations that are independent from the VLOP, have proven expertise in the area of risk management, technical competence and capabilities, and have proven objectivity and professional ethics, based in particular on adherence to codes of practice or appropriate standards. The required contents of the audit report are also stipulated in Article 28. This includes the main findings, either a positive or negative opinion on whether the VLOP complied with its obligations under the DSA and, where the opinion is not positive, operational recommendations on specific measures to achieve compliance. In addition, within one month of receiving those recommendations, the VLOP must adopt an audit implementation report setting out the necessary measures to implement the recommendations. Where the VLOP does not adopt measures to implement the recommendations from the audit report, the implementation report must contain reasons and alternative measures. These risk management provisions would mark a fairly drastic change in the regulation of internet intermediaries in the EU, going well beyond what was previously envisaged by the ECD.
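The annual audit cycle under Article 28 described above can be sketched as follows. This is an illustration only: the record structure and names are assumptions, and the one-month deadline is approximated as 30 days:

```python
# Illustrative sketch (names assumed, not from the DSA text) of the
# Article 28 audit cycle: an independent audit yields a positive or
# negative opinion, and a negative opinion obliges the VLOP to adopt an
# audit implementation report within one month.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class AuditReport:
    issued: date
    opinion_positive: bool
    recommendations: tuple[str, ...]  # operational recommendations, if negative

def implementation_deadline(report: AuditReport) -> Optional[date]:
    """Deadline for the VLOP's audit implementation report, if one is due."""
    if report.opinion_positive:
        return None  # no implementation report required
    # Within one month of receiving the recommendations
    # (approximated here as 30 days for illustration).
    return report.issued + timedelta(days=30)
```

The implementation report itself would then either set out measures implementing each recommendation or, where the VLOP declines to implement one, give reasons and alternative measures.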
However, it is not clear how the risk management provisions are consistent with the prohibition against general monitoring under Article 7. Recital (28) provides that nothing in the DSA should be construed as an imposition of a general monitoring obligation or active fact-finding obligation, or as a general obligation for providers to take proactive measures in relation to illegal content. This is despite the fact that certain risk mitigation measures, such as reinforcing the internal processes or supervision to detect systemic risks (under Article 27), may implicitly require, or at least encourage, a form of general monitoring to implement such measures.

Conclusion

Overall, the DSA represents a stark evolution in the regulation of internet platforms in the EU. It does so in particular by mandating certain procedures and transparency regarding content moderation, as well as imposing detailed risk management obligations. Thus, the DSA moves away from the self-regulation approach of the past 20 years and embraces more granular legislation accompanied by aggressive sanctions; fines for non-compliance for VLOPs can be as high as 6 per cent of total turnover. Given its significance, the question that many will be asking is when the proposed DSA will turn into binding law. The next step will be for the European Parliament and the Council of the European Union to scrutinise the proposals, after which a final text will need to be agreed. This could take several years, much like the GDPR, which was first proposed in January 2012, politically agreed in December 2015 and formally adopted in April 2016. Although, given the importance and imperativeness of the DSA’s subject matter, the process for adoption could be quicker. This will ultimately depend on the level of agreement between the different EU institutions and the Member States. However, the longer the adoption process takes, the more inclined Member States may be to take matters into their own hands.
France, for instance, is working on its own legislation similar to the DSA. The Commission has accordingly warned the tech platforms that, unless they want to face a fragmented regulatory landscape in Europe, they should work with the EU to ensure the passage of the DSA. But given the disruptive lobbying known to be deployed by some of these platforms, with the proposed e-Privacy Regulation the most recent example, it remains to be seen how collaborative such companies will be this time around.

Footnotes:
Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC (15 December 2020), Article 1(2)(b).
Ibid, p.1.
Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech (OUP 2018), 190.
Graham Smith, Internet Law and Regulation (5th edn, Sweet and Maxwell 2020), [5-063].
Ibid.
DSA (n 1), p.2.
Ibid, Recital (7).
Ibid, Recital (8).
Ibid.
Ibid.
Ibid, Article 40(1).
Ibid, Recital (76).
Ibid, Article 40(2).
Ibid, Article 11(2).
Ibid, Article 40(3).
Ibid.
Ibid, Article 2(p).
Ibid.
Ibid.
Giancarlo Frosio (ed), The Oxford Handbook of Online Intermediary Liability (OUP 2020), 669.
Ibid, 671.
Ibid, 670.
Ibid, 671.
Ibid, 672.
DSA (n 1), p.9.
Ibid, Article 2(g).
Frosio (n 20), 484.
European Charter of Fundamental Rights, Article 52(3).
Handyside v UK, App no. 5493/72 (ECHR, 7 December 1976).
Frosio (n 20), 469.
Delfi AS v Estonia, App no. 64569/09 (ECHR, 16 June 2015).
Frosio (n 20), 483.
DSA (n 1), Article 20.
DSA (n 1), Recital (46).
See Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), Articles 13 and 14.
To generate ad revenue from videos, a creator must turn on video monetisation for their YouTube channel, which in turn requires membership of the YouTube Partner Programme. Once admitted into the programme, the creator can monetise their videos; if they choose to place ads on a video, the video must meet the advertiser-friendly content guidelines.
DSA (n 1), Recital (57).
Ibid, Recital (58).
Ibid, Article 59.

Other Sources:
Sacha Green and Inge Govaere, The Internal Market 2.0 (Hart Publishing 2020)
France pushes for big changes to proposed EU tech regulation
'I can't trust YouTube anymore': creators speak out in Google advertising row
European Commission Proposes New Rules for Digital Platforms | Wilson Sonsini
First look at the Digital Services Act and Digital Markets Acts. Do they live up to the expectations?
https://www.thecybersolicitor.com/p/legal-knots-vlops-and-a-white-box
The PACT Act does four things. First, it requires online platforms to explain their content moderation practices in an "acceptable use policy" that is easily accessible to consumers, and to publish a detailed biannual report that includes disaggregated statistics on content that has been removed, demonetized, or deprioritized. Second, it introduces an obligation for large online platforms to put in place a formal complaint system that processes reports and notifies users of moderation decisions within 21 days. These systems should also allow consumers to appeal content moderation decisions. Third, it amends Section 230 to require large online platforms to remove court-determined illegal content and activity within 4 days. Fourth, it opens tech platforms up to civil lawsuits from federal regulators and gives State attorneys general license to enforce federal civil law against them. These accountability requirements are less stringent for small online platforms, depending on their size and capacity.
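The two statutory clocks just described, 21 days to process and notify on a consumer complaint and 4 days to remove court-determined illegal content, could be tracked with a sketch like the following. The function names and structure are hypothetical, purely to make the deadlines concrete; they are not drawn from the bill's text.

```python
from datetime import date, timedelta

# Hypothetical sketch of the PACT Act's two deadlines: 21 days to process
# and notify on a user complaint, 4 days to remove content that a court
# has determined to be illegal. Names and structure are illustrative.
COMPLAINT_WINDOW = timedelta(days=21)
COURT_ORDER_WINDOW = timedelta(days=4)

def response_deadline(received: date, court_determined_illegal: bool) -> date:
    """Return the latest date by which the platform must act."""
    window = COURT_ORDER_WINDOW if court_determined_illegal else COMPLAINT_WINDOW
    return received + window

def is_overdue(received: date, today: date, court_determined_illegal: bool) -> bool:
    """True if the relevant statutory window has already elapsed."""
    return today > response_deadline(received, court_determined_illegal)
```

For example, a court-ordered removal received on 1 June must be actioned by 5 June, while an ordinary complaint received the same day allows until 22 June.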
https://decode.org/projects/legislative-hub/pact-act-platform-accountability-and-consumer-transparency-act-2/
Betty Li Hou, '22, is a Santa Clara University graduate in Computer Science and Engineering and was a 2021-2022 Hackworth Fellow at the Markkula Center for Applied Ethics at Santa Clara University. Brian Patrick Green is the director of the technology ethics program area at the Markkula Center for Applied Ethics at Santa Clara University. Views are their own. View and download: "A Multilevel Framework for the AI Alignment Problem" as a PDF.

Introduction: AI Ethics

"'You were going to kill that guy!' 'Of course. I'm a Terminator.'" Lines like this from the 1991 James Cameron film Terminator 2: Judgment Day presented a dark warning of powerful, malicious artificial intelligence (AI). While a cyborg assassin traveling back in time has not yet become a major concern for us, what has become apparent is the multitude of ways in which AI is used on a global scale, and with it, the risk of both direct and indirect negative effects on our political, economic, and social structures. From social media algorithms, to smart home devices, to semi-autonomous vehicles, AI has found its way into nearly every aspect of our everyday lives. With this new realm of technology, we must thoroughly understand and work to address the risks in order to navigate the space and use the technology wisely. This is the field of AI ethics, specifically AI safety.

AI Alignment

AI is built to perform tasks effectively and efficiently, but it lacks the capacities for judgment, inference, and understanding that humans naturally have. This leads to the AI alignment problem: the problem of how we can encode human moral values into AI systems. The problem becomes complex when there are multiple values that we want to prioritize in a system. For example, we might want both speed and accuracy from a system performing a morally relevant task, such as online content moderation.
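The speed/accuracy tension can be made concrete with a toy sketch. The policies and their scores below are invented numbers; the point is only that a weighted trade-off between the two values selects different moderation policies depending on the weights, so no single policy maximizes both at once.

```python
# Toy illustration of the speed/accuracy trade-off in automated moderation.
# Each candidate policy automates a different share of decisions; the scores
# are invented purely to show that no policy maximizes both objectives.
policies = {
    "automate_all":   {"speed": 1.00, "accuracy": 0.80},
    "automate_clear": {"speed": 0.70, "accuracy": 0.95},
    "humans_only":    {"speed": 0.10, "accuracy": 0.98},
}

def best_policy(weight_speed: float) -> str:
    """Pick the policy maximizing a weighted sum of the two values."""
    w = weight_speed
    return max(policies, key=lambda p: w * policies[p]["speed"]
                                       + (1 - w) * policies[p]["accuracy"])
```

Weighting speed heavily selects full automation; weighting accuracy selects human-in-the-loop or fully human review. The "right" weighting is exactly the normative question the alignment problem poses.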
If these values conflict to any extent, then it is impossible to maximize both. AI alignment becomes even more important when systems operate at a scale where humans cannot feasibly evaluate every decision to check whether it was made in a responsible and ethical manner.

The alignment problem has two parts. The first is technical, and focuses on how to formally encode values and principles into AI so that it reliably does what it ought to do. Unintended negative side effects and reward hacking can result if this is not done properly. The second part is normative: it asks what moral values or principles, if any, we should encode in AI. To this end, we present a framework that considers the question at four levels. [3, 4]

Breaking Down the AI Alignment Problem

Individual & Familial

On the individual level, the framework invites individuals and families to ask questions about values and flourishing. In our everyday actions, we are shaping our own definitions of individual flourishing: what makes life fulfilling and brings contentment. We must consider what role models and lifestyles we seek to emulate, how we define success for ourselves, what sacrifices we are willing to make, and what ethical values we prioritize.

Organizational

The organizational level refers to corporations, state and local governments, universities, churches, social movements, and various other groups in civil society. When considering alignment at this level, we must determine what values the organization operates on, what values are instilled in its products and services, and what role the organization plays within society. For institutions, important considerations are what constitutes success, what metrics are used to evaluate it, and how they are involved in the broader movements for AI alignment.

National

The next level is the national level.
Each nation has either implicitly or explicitly defined values that determine the country's goals and objectives pertaining to AI. A country aiming to assert itself as a global power may invest resources in building a domestic AI industry, as well as regulate the usage of AI to moderate and nudge users' behaviors towards particular views. On the other hand, a country aiming to promote freedom may follow a decentralized approach to AI production, giving firms freedom and privacy while allowing for competition among them. Alternatively, countries may try to build an AI initiative in a way that not only ensures they are aligned with moral values, but also encourages or requires other countries to do the same.

Global

Globally, humankind must think about the kind of future we want to have. The United Nations Sustainable Development Goals (SDGs) offer a good starting point, but these goals are merely the preconditions necessary for survival and flourishing, so they are not enough. A further step is needed to determine our common goals as a civilization and, more philosophically, the purpose of human existence and how AI will fit into it. Is it to survive, raise children, live in society, seek the truth? Related to this are the end goals of economic and political structures, as well as what powerful nations and corporations need to give up in order to attend to the needs of the poor and the earth.

Putting the Levels Together

All of these levels interact with each other. Because AI typically originates at the organizational level, often in profit-driven corporations, the primary motivation is often simply to make money. However, when put in the context of the other levels, further goals become visible: 1) AI development should be aligned to individual and familial needs, 2) AI development should align with national interests, and 3) AI development should contribute to human survival and flourishing on the global level.
But other layers in the framework also interact with each other, through inputs and outputs. For example, looking at the same organizational layer from the inbound perspective, individuals can choose whether or not to buy certain kinds of technologies, nations can pass laws and regulations to control what technology companies can do, and at the global level, international pressure (for example from the UN through ideas such as the Sustainable Development Goals) can also influence technology company behavior. There can also be intermediate levels, such as the European Union, which sits above the national level but below the global, and which has, through the GDPR, had a major influence on the internet, on data, and through those, on AI.

Examining the individual level, we have already seen how it influences and is influenced by the organizational level. The individual level can influence the national through elections, and the global through organizations such as the UN, although these influences are quite underdeveloped. Similarly, the global can influence individuals through international treaties, while nations obviously exert significant control over their citizens through laws and other behavioral expectations. Lastly, the national and global levels interact. Nations influence the global state of the Earth, for example through war and other national policies with global effects (such as energy policies, which can drive or mitigate climate change). The global level can exert power back, whether through the UN or other international expectations of national behavior. To get a more practical view of the framework, we look at the problem of social media content moderation.

Content Moderation as an Example

A global debate has emerged in recent years on the risks faced by internet users. User-generated content is not subject to the same editorial controls as traditional media, which enables users to post content that could harm others, particularly children or vulnerable people.
This includes but is not limited to content promoting terrorism, child abuse material, hate speech, sexual content, and violent or extremist content. Yet at the same time, attempts to restrict this content can seem to some like a violation of users' freedom of expression and freedom to hear certain kinds of expression. Organizations and governments have grappled with the feasibility and ethics of mitigating these potential harms through content moderation, while at the same time trying not to lose users who feel that their freedoms are being curtailed. AI-assisted content moderation brings a level of speed and scale unmatched by manual moderation. A transparency report from Google (which owns the YouTube service) shows that over 90% of videos removed from YouTube between January and March 2022 were reviewed as a result of automatic flagging. However, these approaches have implications for people's future uses of and attitudes towards online content sharing, so it is important that the AI employed in these processes aligns with human values at multiple levels.

Using the Framework

The first issue comes from the organizational level, where there is a major misalignment between businesses and individuals. Businesses that employ content moderation (YouTube, Facebook, Google) are incentivized to maximize shareholder value, which leads to prioritizing profit over social good. For example, Facebook does this by basing its algorithm on "engagement": the more likes, comments and shares a topic or post receives, the more it will appear in people's newsfeeds. At the level of individual profiles, too, Facebook tracks each user's behavior and habits through engagement in order to feed them what they want to see. This way, users spend more time on the site and generate more ad revenue for the business, boosting shareholder value.
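The scale of automated flagging reported above (over 90% of YouTube removals initiated by machines) is typically achieved by letting automation handle clear-cut cases and routing only uncertain ones to humans. A minimal sketch, assuming a hypothetical classifier score in [0, 1] and invented thresholds:

```python
# Minimal sketch of AI-assisted triage: an (assumed) classifier score in
# [0, 1] routes each post to automatic removal, human review, or no action.
# The thresholds are illustrative, not any platform's actual values.
AUTO_REMOVE = 0.95
HUMAN_REVIEW = 0.60

def route(score: float) -> str:
    if score >= AUTO_REMOVE:
        return "remove"        # clear-cut violation, handled at machine speed
    if score >= HUMAN_REVIEW:
        return "human_review"  # ambiguous: context requires human judgment
    return "keep"

def triage(scores):
    """Partition a batch of classifier scores by action."""
    out = {"remove": [], "human_review": [], "keep": []}
    for s in scores:
        out[route(s)].append(s)
    return out
```

Where the two thresholds sit is itself a value judgment: lowering the auto-remove threshold trades accuracy (and expression) for speed, which is exactly the multi-level alignment question this article raises.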
Such engagement-driven feeds, however, lead to echo chambers and polarization, as users are not exposed to opinions that differ from theirs, ultimately affecting not only individuals and families but also entire nations, and even global discourse. The misalignment between organizations and individuals has already proven dangerous, with cases like the attacks on minorities in Myanmar illustrating the potential consequences.

National regulations shape how organizations moderate content, as organizations must build AI within the bounds of those regulations. A country's content moderation legislation is typically an expression of the cultural values of the majority of its citizens, which is often similar to the cultural values of its leadership, though not always. While these regulations are made by individual lawmakers and may express the values of many individual citizens, they also affect both organizations and other individuals. For example, a common good perspective might lean towards extensive content moderation for the sake of minimizing social harm, but at the expense of individual freedom of expression. The question then arises of how well cultural values are aligned with AI content moderation. We may be able to recognize where there are misalignments between national and organizational values, which in turn affect individuals. For example, in the US, where individual freedom is a priority, there is very little content moderation regulation: companies such as Facebook are required to moderate only material such as illegally shared copyrighted content and criminal activity such as child sexual abuse material. Therefore, while Facebook complies with every relevant government regulation, there have nevertheless been harmful effects on society, showing how US government content moderation legislation is not aligned with societal needs.
Cases like Myanmar also suggest that this American legislation may not be aligned with global needs, as other countries are subject to these same problems and are facing the repercussions. Based on the above, it might seem that the first goal for AI alignment would be to align the national and organizational levels (assuming that the organization is also aligned with individual well-being). However, this is not enough: we must also consider whether these national values are aligned on the global level, that is, whether they support global human flourishing. The effects flow in both directions: organizations doing content moderation sometimes respond most to individual user feedback; a powerful enough organization can have a hand in swaying national interests; and a nation or group of nations can potentially change the course of human civilization. All in all, content moderation is a prime example of how value alignment is at work right now in society. It may not be feasible to align all four levels at once, but with this framework we can identify some causes of these complex misalignments.

Conclusion

If we are to make any progress on the normative side of AI alignment, we must consider all levels: individual, organizational, national, and global, and understand how they work together, rather than aligning only one or a few of the parts. Here we have presented a framework for considering these issues. The versatility of the framework means that it can be applied to many other topics, including but not limited to autonomous vehicles, AI-assisted clinical decision support systems, surveillance, and criminal justice tools. In these hotly contested spaces with no clear answers, breaking problems down into these four levels lets us see the parts at play in order to create ethical and aligned AI. Perhaps then we can sleep easy knowing we'll be safe from a Terminator in our distant future.

Works Cited

"Terminator 2: Judgment Day," Carolco Pictures, 1991.
Dario Amodei, et al., "Concrete Problems in AI Safety," arXiv, 25 July 2016.
For a brief previous presentation of this work, please see "A Framework for the AI Alignment Problem," Student Showcase 2022, Markkula Center for Applied Ethics, Santa Clara University, May 17, 2022, minutes 50:15-54:20.
For another take on the problem, see the "multiscale alignment" section of Max Tegmark's interview with the 80,000 Hours Podcast, where he describes a similar-sounding idea that he developed. Tegmark's framework does not yet seem to be published, so we cannot know in exactly what ways his and our frameworks are similar or different. Robert Wiblin and Keiran Harris, "Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection," The 80,000 Hours Podcast, July 1st, 2022, minutes 1:13:13-1:51:01.
"THE 17 GOALS - Sustainable Development Goals - the United Nations," United Nations. Available at:
"YouTube Community Guidelines enforcement," Google.
Paul Mozur, "A Genocide Incited on Facebook, With Posts From Myanmar's Military," The New York Times, 15 Oct. 2018.
https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/a-multilevel-framework-for-the-ai-alignment-problem/
Tech companies' legal leaders and policy experts gathered at Santa Clara University School of Law on Feb. 2 for a day of panels on content moderation and removal. The school's High Tech Law Institute brought representatives from Facebook, Google, Reddit and others to discuss many facets of content moderation, including mental health, artificial intelligence and transparency. Here are some big takeaways from a day of discussion featuring some of Silicon Valley's most famous companies:

1. Artificial Intelligence Can Remove Content Faster, but Not Better

When it comes to content moderation, context is key. But, as Facebook Inc. public policy manager Neil Potts said Friday, "Automation is not great at context yet." At one panel session, "Humans vs. Machines," Potts and other panelists agreed that while AI can be useful for black-and-white cases such as posts that endanger or exploit children, a human eye is needed to make most removal decisions accurately. "It's hard to tell if a review is racist, or if the review is describing a company that was racist," Yelp Inc. deputy GC Aaron Schur said, providing an example. "[That] requires a human eye for judgment." Panelists also noted that changing laws, such as those in Europe and particularly the U.K., that force sites to take down harmful content in a short period of time could push platforms to rely on less accurate, nonhuman content moderators.

2. Human Content Moderators Are Only Human

Spending all day, every workday, looking at graphic images and disturbing posts online takes a toll on mental health. Human content moderators can get worn down from constant exposure to the worst parts of the internet, according to panelists at another session, "Employee/Contractor Hiring, Training and Mental Well-Being." If moderators feel burnt out, their work may suffer.
Panelists said their companies have a variety of ways to help moderators stay healthy, including counseling and massage therapy. “We have a lot of different wellness-oriented perks because we really believe that human moderators are the key to having high-quality moderation,” said Charlotte Willner, trust and safety manager of Pinterest. She recommended that companies “invest in their skill set, teach them to become familiar with this type of content, [and] invest in their long-term health.” 3. The Community Knows Best (Kind Of) During another session, on if and when to outsource moderation, panelists from Reddit, Wikimedia and Nextdoor offered similar advice—turn to the community first. They all said community self-regulation is their sites’ most common form of content moderation. “We let communities decide what the rules are for that community, decide what should go in and out there,” Reddit counsel Zac Cox said. “People who join [the] community can follow the rules and help enforce them by flagging content or commenting in a way that reinforces those norms.” Many platforms, such as Reddit and Yelp, include features that allow users to up-vote a post or mark it as useful. This is another form of self-moderation, Cox said, as it allows popular content to rise to the top and pushes content the community doesn’t want to see down. But most of the companies said they did have clear processes in place for escalating a content management situation that’s too large to be handled by the community alone, or isn’t being handled properly. 4. Be Transparent, but Don’t Help the Bots Companies have a fine line to walk with content moderation rules. The guidelines should be clear enough that users know the repercussions of abuse on the platform, but not so clear that users, or bots, can game the system, panelists at the event said. 
If users and bots can figure out that harmful content won't be removed unless it contains a specific word or slur, according to panelists, they may continue posting disturbing content that doesn't technically violate guidelines. When a post does get removed, Patreon Inc.'s head of legal Colin Sullivan said, it's crucial that moderators take the time to hear out users' appeals and explain what happened. "Creators think we're making a fair decision about them, so they feel like we're making fair decisions in general," Sullivan said. Sullivan and Medium's head of legal Alex Feerst said transparency becomes more difficult when the user seems to be a bot. Alerting a bot whose account has been shut down or has violated guidelines may push the bot to create another account. In this case, they said, it could be better to isolate the account and not inform the bot.

5. Diversity Is Key

Diversity is important in every part of the company, and that holds true for trust and safety teams. Content moderators should be well-trained, but panelists at Friday's event noted that having moderators from different backgrounds can lead to a better conversation about what is or isn't harmful content. Feerst said Medium has rotations into the moderator role so that people from around the company can spend time doing trust and safety work. "Trust and safety and content moderation is a field that, if you don't have gender and ethnic and other [types of diversity], you can't do it," Feerst said. "Because you don't have enough perspective to generate the cultural competencies, and, most importantly, you don't know what you don't know."
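The vote-driven self-moderation described in takeaway 3, where popular content rises to the top and unwanted content sinks, can be sketched with a toy ranking function. The formula below (log-scaled net votes minus a linear age penalty) is illustrative only; it is not Reddit's or any other site's actual algorithm.

```python
import math
from datetime import datetime, timedelta, timezone

# Simplified vote-based ranking in the spirit of the panelists' description:
# popular content rises, downvoted content sinks, and newer posts get a
# boost. An invented formula, not any real platform's algorithm.
def hot_score(upvotes: int, downvotes: int, posted: datetime) -> float:
    net = upvotes - downvotes
    magnitude = math.log10(max(abs(net), 1))    # diminishing returns on votes
    sign = 1 if net > 0 else (-1 if net < 0 else 0)
    age_hours = (datetime.now(timezone.utc) - posted).total_seconds() / 3600
    return sign * magnitude - age_hours / 24    # roughly 10x the votes to offset a day of age

def rank(posts):
    """posts: list of (title, upvotes, downvotes, posted) tuples, best first."""
    return sorted(posts, key=lambda p: hot_score(p[1], p[2], p[3]), reverse=True)
```

The log scaling means a community's first few votes matter far more than the thousandth, and the age penalty keeps the front page fresh, two properties commonly attributed to community-ranking systems.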
https://teris.com/5-takeaways-from-tech-leaders-content-moderation-conference/
ALL USERS ARE INVITED TO CREATE CONTENT! Please follow the steps here.

Welcome to the SAP Cloud Integration WIKI. Quickly find all how-tos, whitepapers and technical documents, including step-by-step processes.

Disclaimer: Content accuracy is assured as much as possible; discretion advised.

Moderators: Ali Chalhoub | Mark Smyth | Marcelo Pinheiro
WIKI Space Editor: Ali Chalhoub, Mark Smyth
SCN Community Topic Pages: SAP Integration Suite

Welcome to SAP Cloud Integration! Submit your content!
- Click here to submit content (SAP Employees Only)
- Please use the Knowledge Management Template for all submissions.
- All new content is to be created in the Staging Area until moderation.

Click here for RSS feeds for updated content!
https://wiki.scn.sap.com/wiki/display/SCI/SAP+Cloud+Integration+Home
Primary and secondary schools should:
- continue to focus on raising standards of pupils' independent and extended writing, giving close attention to content, expression and accuracy;
- continue to raise pupils' ability to read for information and use higher-order reading skills;
- tackle the underperformance in English of pupils entitled to free school meals (FSM), including more able pupils, by targeting and matching support to their individual learning needs;
- provide challenging work in English to stretch all pupils, particularly the more able;
- agree how to teach spelling, punctuation and grammar and provide consistency in approaches, such as teaching spelling rules and strategies;
- improve 'assessment for learning' practices and the marking of pupils' work;
- achieve a better balance of literary and non-literary material and cover all seven writing genres;
- work with other schools to share effective standardisation and moderation practices; and
- share more information to aid pupils' transition to secondary school.

In addition, secondary schools should:
- improve the teaching of writing as a process by encouraging pupils to plan, review, edit and improve their own work; and
- make more use of oracy prior to reading and writing, in order to help pupils to develop and extend their understanding and improve the quality of their work.

The Welsh Government should:
- improve the reliability and validity of teacher assessment by reviewing assessment criteria and introducing external moderation at key stage 2 and key stage 3.
https://www.estyn.gov.wales/thematic-reports/english-key-stages-2-and-3-june-2014
Banking Sector Q3 Earnings Review - Stress Settles Lower; Growth, Earnings, Return Profile Looking Up: ICICI Securities

BQ Blue's special research section collates quality and in-depth equity and economy research reports from across India's top brokerages, asset managers and research agencies. These reports offer BloombergQuint's subscribers an opportunity to expand their understanding of companies, sectors and the economy.

ICICI Securities Report

Our apprehensions, articulated in the Q3 FY21 preview, that Q3 earnings would be a true performance test for banks were addressed positively, with the reported numbers supporting an affirmative narrative:
1. stress recognition (pro-forma slippages of 2-5% and the special mention account-2 pool) and invoked restructuring (at less than 1%) settled within or below the guided range;
2. credit cost was contained much lower than anticipated, leading to earnings upgrades;
3. robust current account savings account (CASA) accretion, a sharp decline in deposit cost and the release of liquidity buffers more than offset any adverse impact of interest income reversal and credit-deposit ratio moderation, leading to a stable-to-improving net interest margin profile.

Click on the attachment to read the full report.

DISCLAIMER

This report is authored by an external party. BloombergQuint does not vouch for the accuracy of its contents nor is responsible for them in any way. The contents of this section do not constitute investment advice. For that you must always consult an expert based on your individual needs. The views expressed in the report are that of the author entity and do not represent the views of BloombergQuint. Users have no license to copy, modify, or distribute the content without permission of the Original Owner.
https://www.bloombergquint.com/research-reports/banking-sector-q3-earnings-review-stress-settles-lower-growth-earnings-return-profile-looking-up-icici-securities
Demonstrate understanding of moderation within the context of an outcomes-based assessment system. Moderation is explained in terms of its contribution to quality assured assessment and recognition systems within the context of principles and regulations concerning the NQF. A variety of moderation methods are described and compared in terms of strengths, weaknesses and applications. The descriptions show how moderation is intended to uphold the need for manageable, credible and reliable assessments. Key principles of assessment are described in terms of their importance and effect on the assessment and the application of the assessment results. Examples are provided to show how moderation may be effective in ensuring the principles of assessment are upheld. See “Definition of Terms” for a definition of assessment principles. Examples are provided to show how moderation activities could verify the fairness and appropriateness of assessment methods and activities used by assessors in different assessment situations. Assessment situations for gathering evidence of abilities in problem solving, knowledge, understanding, practical and technical skills, personal and attitudinal skills and values. Plan and prepare for moderation. The planning and preparation is to take place within the context of an existing moderation system, whether internal or external, as well as an existing assessment plan. Planning and preparation activities are aligned with moderation system requirements. The scope of the moderation is confirmed with relevant parties. Parties include the assessors and moderating bodies where these exist. Planning of the extent of moderation and methods of moderation ensures manageability of the process. Planning makes provision for sufficient moderation evidence to enable a reliable judgement to be passed on the assessments under review. 
The contexts of the assessments under review are clarified with the assessors or assessment agency, and special needs are taken into consideration in the moderation planning. Moderation methods and processes are sufficient to deal with all common forms of evidence for the assessments to be moderated, including evidence gathered for recognition of prior learning (RPL). The documentation is prepared in line with the moderation system requirements and in such a way as to ensure moderation decisions are clearly documented. Required physical and human resources are ensured to be ready and available for use. Logistical arrangements are confirmed with relevant role-players prior to the moderation. Moderation is to address the design of the assessment, activities before, during and after assessment, and assessment documentation. Moderation is to include assessments of candidates with special needs and RPL cases. Where assessments do not include special needs or RPL cases, evidence for this may be produced through scenarios. Evidence must be gathered for on-site and off-site moderation, including cases where the moderation process finds it cannot uphold the assessment results. The moderation is conducted in accordance with the moderation plan. Unforeseen events are handled without compromising the validity of the moderation. The assessment instruments and process are checked and judged in terms of the extent to which the principles of good assessment are upheld. See "Definitions of Terms" for definitions of assessment principles. Moderation confirms that special needs of candidates have been provided for, but without compromising the requirements specified in the relevant outcome statements. The proportion of assessments selected for checking meets the quality assurance body's requirements for consistency and reliability. The use of time and resources is justified by the assessment history or record of the assessors and/or assessment agency under consideration.
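The sampling requirement above, that the proportion of assessments selected for checking meets the quality assurance body's requirements, could be sketched as follows. The 10% proportion, the fixed seed and the function name are assumptions for illustration, not part of any ETQA's actual rules.

```python
import random

# Illustrative sketch of selecting a sample of assessment records for
# moderation. The 10% default proportion and the record IDs are assumed;
# a real quality assurance body would specify its own sampling rules.
def select_for_moderation(assessment_ids, proportion=0.10, seed=42):
    """Randomly select at least the required proportion of assessments."""
    k = max(1, round(len(assessment_ids) * proportion))  # never sample zero
    rng = random.Random(seed)                            # reproducible for audit
    return sorted(rng.sample(list(assessment_ids), k))
```

Fixing the seed makes the selection reproducible, which supports the unit standard's emphasis on clearly documented, auditable moderation decisions; random selection guards against assessors predicting which assessments will be checked.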
Appeals against assessment decisions are handled in accordance with organisational appeal procedures. The moderation decision is consistent with the quality assurance body's requirements for the fairness, validity and reliability of assessments; the "moderation decision" includes agreement or disagreement with the results of the assessments, and the requirements include the interpretation of assessment criteria and the correct application of assessment procedures.

Advise and support assessors.

The nature and quality of advice facilitate a common understanding of the relevant outcomes and criteria, and of issues related to their assessment by assessors. The advice promotes assessment in accordance with good assessment principles and enhances the development and maintenance of quality management systems in line with ETQA requirements; advice on quality management systems includes planning, staffing, resourcing, training and recording systems. Support contributes towards the further development of assessors as needed, and all communications are conducted in accordance with the relevant confidentiality requirements.

Report, record and administer moderation.

Moderation findings are reported to designated role-players within agreed time-frames and according to the quality assurance body's requirements for format and content. Role-players could include ETQA or moderating body personnel, internal or external moderators, and assessors. Records are maintained, and the confidentiality of information relating to candidates and assessors is preserved, in accordance with organisational quality assurance and ETQA requirements.

Review moderation systems and processes.

Strengths and weaknesses of moderation systems and processes are identified in terms of their manageability and effectiveness in facilitating judgements on the quality and validity of assessment decisions.
Recommendations contribute towards the improvement of moderation systems and processes in line with ETQA requirements and overall manageability, and the review enhances the credibility and integrity of the recognition system. The criterion "Moderation is explained in terms of its contribution to quality assured assessment and recognition systems within the context of principles and regulations concerning the NQF" is also assessed indirectly throughout the unit standard.
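The requirement that "the proportion of assessments selected for checking meets the quality assurance body's requirements" amounts to drawing a defensible sample of completed assessments. A minimal sketch of that sampling step follows; the function name, the 25% proportion, and the identifier format are all hypothetical illustrations, not prescribed by any ETQA.

```python
import random

def select_for_moderation(assessment_ids, proportion=0.25, seed=None):
    """Select a random sample of completed assessments for moderation,
    sized to meet a quality assurance body's required checking proportion.
    The 25% default is purely illustrative."""
    rng = random.Random(seed)  # seeding makes the selection reproducible/auditable
    sample_size = max(1, round(proportion * len(assessment_ids)))
    return sorted(rng.sample(list(assessment_ids), sample_size))

# Hypothetical batch of 40 completed assessments.
candidates = [f"ASSESS-{n:03d}" for n in range(1, 41)]
print(select_for_moderation(candidates, proportion=0.25, seed=7))  # 10 selected
```

Fixing the random seed gives the moderator a reproducible, auditable selection, which supports the record-keeping requirements described above.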
http://www.capetown.trainyoucan.co.za/accredited-courses/moderator-2/saqa-id-115759/
The rush to bring law and order to online spaces is well and truly on. Two important documents on the topic of online speech regulation have come out of Paris in the past week alone. The first is a French government-commissioned report exploring a “general framework for the regulation of social networks.” The mission team that wrote the report spent two months working with representatives of Facebook, which the French government hailed as “unprecedented collaboration with a private operator.” This interim report to the French secretary of state for digital affairs, and the final report due by June 30, will inform the French government’s and European regulatory response to the increasingly pervasive problems of content moderation on social media platforms. The second document is a nonbinding compact, named “The Christchurch Call to Action,” signed by 18 governments and eight companies so far and unveiled in Paris on May 15. The pledge calls for urgent action to “eliminate terrorist and violent extremist content online” in the wake of the livestreamed Christchurch attacks on March 15. The two are very different documents. The first is a cautious survey of the thorny issue of how to manage government regulation of speech in the new platform era; the second is a high-level pledge to prevent the kinds of abuse of an open internet that occurred when the Christchurch shooter broadcast his massacre in a way designed to go viral. But they are both evidence of the growing momentum of moves to regulate the major tech platforms. The French Government Report The French report is notably more measured compared to other recent government forays into this area, such as a recent U.K. 
report on online harms and Australian legislation that criminalized the hosting of "abhorrent violent material." Rather than blaming social networks for abuses, the report frames the problem in terms of abuses committed by isolated individuals or organized groups to which social networks have failed to provide an adequate response. Though the report states that this justifies government intervention, it is careful to acknowledge the civil liberties at stake—in particular, the need for government regulation of speech to be minimal, necessary and proportionate in order to comply with human rights obligations. It notes that public intervention in this area requires "special precautions" given the "risk of the manipulation of information by the public authorities." This is a welcome acknowledgment of one of the trickiest issues when it comes to regulating tech platforms: unaccountable private control of important public discourse is problematic, but so too is government control. Extensive governmental regulations might be a cure worse than the disease. As an interim report, the document leaves many details to be filled out later. But it bases its initial proposal for regulation around five pillars, the gist of which is as follows:
- Public regulation guaranteeing both individual freedoms as well as platforms' entrepreneurial freedom and the preservation of a diversity of platform models.
- An independent regulatory body charged with implementing a new prescriptive regulation that focuses on the accountability of social networks, based around three obligations:
  - Algorithmic transparency;
  - Transparency of Terms of Service and content moderation systems; and
  - An obligation to "defend the integrity of users," analogous to a "duty of care" to protect users from abuse by attempts to manipulate the platform.
- Greater dialogue between stakeholders including platforms, the government and civil society.
- An independent administrative authority that is open to civil society and does not have jurisdiction to regulate online content directly, but does have the power to enforce transparency and issue fines of up to four percent of a platform's global turnover.
- European cooperation.

The key and most interesting pillar is the second one, which describes the approach to regulation of platforms. The report proposes a model it calls "accountability by design," which seeks to capitalize on the self-regulatory approach already being used by platforms "by expanding and legitimising" that approach. It notes that while self-regulation has the benefit of allowing platforms to come up with "varied and agile solutions," the current system suffers from a severe legitimacy and credibility deficit. The main reason for this is what the report tactfully calls "the extreme asymmetry of information" between platforms on the one hand and government and civil society on the other. Others have called it "the logic of opacity," where platforms intentionally obscure their content moderation practices to avoid criticism and accountability. To an extent, this tactic has worked—the French government report notes that without adequate information, observers are reduced to highlighting individual examples of poorly moderated content and are unable to prove systemic failures. For this reason, the report emphasizes that future regulation needs to enforce greater transparency both of moderation systems and of algorithms.
The report then discusses the balance between a “punitive approach,” the dominant model adopted by countries so far—which focuses on imposing sanctions for those who post unlawful content as well as the platforms that host it—and “preventative regulation.” In what is likely a reference to heavily criticized German laws, the report notes that the punitive approach incentivizes platforms to over-censor content to avoid liability and therefore “does not seem to be a very satisfactory solution.” It therefore recommends a preventive, “compliance approach” that focuses on creating incentives for platforms to create systems to prevent content moderation failures. The report draws an analogy to the financial sector: Regulators do not punish financial institutions that have been used for unlawful behavior such as money laundering but, instead, punish those that fail to implement a prescribed prevention measure, whether or not this actually leads to unlawful behavior. The report recommends implementing this approach “progressively and pragmatically” depending on the size of operators and their services, with more onerous obligations for larger platforms. This is a careful acknowledgment of the need to avoid entrenching the dominance of the major platforms by imposing compliance burdens that only the most well-resourced companies can meet. To date, most discussion of Facebook’s proposed Oversight Board for content moderation has treated the project as a quirky (if promising) experiment. But laws that require platforms to show good-faith efforts to create a robust content moderation system may encourage the use of mechanisms like this, which perform a kind of quality-assurance and transparency-forcing function for a platform’s greater content moderation ecosystem. 
Facebook’s Board as well as other measures, such as increased transparency and human resources, are cited by the report as evidence of the progress made by Facebook in the past 12 months that convinced the authors of the benefits of the self-regulatory model. Overall, the report reflects a nuanced attempt to grapple with the difficult issues involved in preventing the harm caused by bad actors on social media while adequately protecting freedom of expression. The benefits of the French government’s collaboration with Facebook are evident throughout the report, which notes that the mission learned about the wide range of possible responses to toxic content used by Facebook and focuses on the need to make these more credible through transparency rather than by fundamental rethinking. This contrasts with the adversarial relationship between Facebook and the U.K. government: CEO Mark Zuckerberg refused three invitations to appear before a U.K. parliamentary committee, which responded by calling the company a “digital gangster” in its final report. It is also a contrast with the Australian approach, which creates vague but severe liability in a way that does not seem to appreciate the technical challenges involved and endangers free expression. The Christchurch Call Slightly more rushed was the Christchurch Call, spearheaded by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron. Although nonbinding, the Call reflects the newfound urgency felt by governments to address terrorist and violent extremist content online in the wake of the horror in Christchurch. The Christchurch Call also opens by acknowledging the importance of a free, open and secure internet and respect for freedom of expression. The actions pledged by governments are all very vague and broad, including countering terrorism and extremism through education, enforcing existing laws and supporting responsible reporting. 
As to future laws, governments commit to consider “[r]egulatory or policy measures consistent with a free, open and secure internet and international human rights law” that will prevent the online dissemination of terrorist and violent extremist content. This could hardly be stated more open-endedly. These vague pledges show that the Christchurch Call is the start of the conversation, not the end. Indeed, the last line of the Call is a list of upcoming forums, including the G-7 and G-20 summits, where the Call will be discussed further. Similarly, online service providers commit to taking “transparent, specific measures” to seek to prevent upload of such content and its dissemination, “without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms.” The suggested measures are equally high level: “technology development, the expansion and use of shared databases of hashes and URLs, and effective notice and takedown procedures.” Service providers also commit to greater transparency, enforcing their terms of service, mitigating the risk posed by livestreaming, reviewing algorithms that may amplify this harmful content and greater cross-industry coordination. Microsoft, Twitter, Facebook, Google and Amazon all signed the pledge and released a joint statement and nine-point action plan in support. The nine points are all indisputably desirable but, like the Call itself, vague: “enhancing technology,” “education,” and “combating hate and bigotry” are hard to disagree with but could all mean any number of things. Just before the Call, Facebook also announced updated restrictions on livestreaming, including specified lockouts of livestreaming for violations of terms of service, which the company says would have prevented the Christchurch shooter from broadcasting his crimes live. The point on “enhancing technology” will be particularly important. 
I have previously written about how Facebook’s failure to remove the Christchurch shooting footage in the immediate aftermath of the attack was not for lack of trying but, instead, was due to evasive actions by those wishing to spread it and other technological challenges that became apparent only during the unprecedented events. The current state of detection tools risks both over- and under-removal: Some civil society groups expressed concerns that there is no way for current tools to automatically remove terrorist content in a “rights-respecting way,” given these tools are likely to remove a large amount of legitimate speech and have built-in biases. At the same time, the day after the Call, a researcher still found copies of the video on both Facebook and Instagram. Tech sector sign-on to the Call is not surprising. The joint statement echoes points made by Microsoft President Brad Smith in the days following the Christchurch attack, who wrote in a blog post titled “A Tragedy That Calls for More Than Words” that the tech sector needed to learn from and “take new action” based on what happened in Christchurch. It was clear even then that the way the horror played out online would be a moment of reckoning for the industry. Both Smith and the Call look to investing in and expanding the Global Internet Forum to Counter Terrorism (GIFCT). The GIFCT is an industry-created body that keeps a database of “hashed” files: Companies can upload to the shared database files they identify as terrorist content, which is then given a digital fingerprint so that other participants can automatically identify if a user tries to upload it on their platform. But as Emma Llansó has argued, the database has long-standing problems: No one outside of the consortium of companies knows what is in the database. There are no established mechanisms for an independent audit of the content, or an appeal process for removing content from the database. 
People whose posts are removed or accounts disabled on participating sites aren't even notified if the hash database was involved. So there's no way to know, from the outside, whether content has been added inappropriately and no way to remedy the situation if it has. In calling for the use of the GIFCT to be expanded, the Christchurch Call is seemingly calling for the entrenchment of a body that goes against the very kind of “accountability by design” principles laid out in the French government report, although the Call does seek greater transparency in general. Notably absent from the signatories of the Christchurch Call was the United States, reportedly due to concerns that the commitments described in the Call might run afoul of the First Amendment. It is true, as representatives from civil society have noted, that the definition of “terrorism and violent extremism” is a vague category, which may be open to abuse by governments seeking to clamp down on civic space. However, the broad wording and voluntary nature of the Call would not have compelled the U.S. to restrict any First Amendment-protected speech. When “considering” future regulations, the U.S. could have based them around First Amendment doctrine. This has led some observers to express disappointment that the United States did not sign the Call. At the same time, the United States is an international outlier when it comes to freedom of speech: Current doctrine undoubtedly does protect most terrorist and extremist content, and while private platforms can choose to remove this material from their services, the U.S. government often cannot mandate that they do so. The American refusal to sign the pledge is consistent with its behavior toward global treaties implicating speech rights since the founding of the United Nations. The U.S. 
famously has a reservation on First Amendment grounds to the International Covenant on Civil and Political Rights article prohibiting propaganda for war and national, racial or religious incitement to discrimination or violence. Long-running attempts by states—most prominently including the Soviet Union—to create a treaty outlawing propaganda in the aftermath of World War II were always opposed by the U.S. for the same reasons. And the government is not alone in expressing concerns about the Call’s implications for freedom of expression: Members of civil society also worry that it seeks to push censorship too far into the infrastructure layer of the internet; that legitimate speech (including reporting and evidence of crimes) will be swept up by censorship efforts; and that it will impede efforts for counterspeech, research and education. Regulation Is Coming The upshot of the two Parisian documents is clear: Regulation is coming to online spaces. The French government report represents a bottom-up approach, presenting findings after a long period of work with a platform on the ground to learn about how content moderation works in practice. The Christchurch Call comes from the opposite direction, starting with high-level goals and calling for urgent solutions. But both are responding to the same reality: There is a lot of vile content online, and the current approaches to dealing with it are inadequate. Changes are in the near future.
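As a technical aside, the shared hash database at the heart of the GIFCT, discussed above, can be sketched in a few lines: participants fingerprint files they have identified as terrorist content, and every upload is checked against the shared set. This is a simplified illustration under stated assumptions; the class and function names are hypothetical, and real systems use perceptual hashes (e.g. PhotoDNA-style fingerprints) so that re-encoded copies still match, whereas the exact cryptographic hash used here only matches byte-identical files.

```python
import hashlib

class SharedHashDatabase:
    """Toy model of a cross-industry shared hash database."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Exact hash for illustration only; production systems use
        # perceptual hashing to survive re-encoding and small edits.
        return hashlib.sha256(content).hexdigest()

    def add(self, content: bytes) -> None:
        """A participant flags content and shares its fingerprint."""
        self._hashes.add(self.fingerprint(content))

    def is_flagged(self, content: bytes) -> bool:
        """Another participant checks an upload against the shared set."""
        return self.fingerprint(content) in self._hashes


db = SharedHashDatabase()
db.add(b"known violating file")
print(db.is_flagged(b"known violating file"))   # True: exact copy matches
print(db.is_flagged(b"slightly altered file"))  # False: any change evades an exact hash
```

The sketch also makes Llansó's critique concrete: matching happens against an opaque set of fingerprints, so from the outside there is no way to inspect what is in the set, and trivially altered copies evade exact matching entirely, which is why both perceptual hashing and independent auditability matter.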
https://www.lawfareblog.com/two-calls-tech-regulation-french-government-report-and-christchurch-call
Explanation: Are lasers from giant telescopes being used to attack the Galactic centre? No. Lasers shot from telescopes are now commonly used to help increase the accuracy of astronomical observations. In some sky locations, Earth-atmosphere-induced fluctuations in starlight can indicate how the air mass over a telescope is changing, but many times no bright star exists in the direction where atmospheric information is needed. In these cases, astronomers create an artificial star where they need it -- with a laser. Subsequent observations of the artificial laser guide star can reveal information so detailed about the blurring effects of the Earth's atmosphere that much of this blurring can be removed by rapidly flexing a deformable mirror. Such adaptive optics techniques allow high-resolution ground-based observations of real stars, planets, and nebulae. Pictured above, four telescopes on Mauna Kea, Hawaii, USA are being used simultaneously to study the centre of our Galaxy, and so each uses a laser to create an artificial star nearby.
http://zuserver2.star.ucl.ac.uk/~apod/apod/ap140623.html
The Earth's turbulent atmosphere blurs the images acquired with ground-based telescopes. In principle, larger telescopes have smaller diffraction limits and can resolve the finer details of astronomical objects. In practice, while a large telescope does reap the benefits of collecting more light, it can resolve details no better than a backyard 8-inch diameter telescope. Placing telescopes at high altitude, such as on Mauna Kea in Hawaii, can reduce the atmospheric blurring, but does not eliminate it. One solution is to put telescopes in space, but there are limits to how large space telescopes can be and how many can be launched. To take full advantage of large ground-based telescopes, one must use another approach. Adaptive optics is a technology where the atmospheric turbulence is measured using either a natural star or a laser beacon and corrected in real time to "untwinkle" the stars and generate diffraction limited images. Elinor will discuss how adaptive optics is implemented at Lick Observatory and the new technologies that they are testing for the next generation of adaptive optics instruments and giant telescopes.
https://www.parc.com/events/untwinkling-the-stars-improving-our-view-of-the-universe-with-laser-guide-star-adaptive-optics/
The Very Large Telescope consists of four telescopes with 8.2-meter (27-foot) mirrors in northern Chile's Atacama Desert. Today, scientists at the observatory released the first observations taken with laser tomography, the new adaptive optics mode of its GALACSI unit, which works alongside a spectrograph instrument called MUSE on one of the telescopes. Essentially, the Earth's atmosphere distorts the appearance of things in space, causing stars to twinkle and blurring distant objects. If you want to observe from the ground, you have to find a way to correct for the blur, and if you want to know how much blur to correct for, you need a reference point. The VLT has a facility that shines four bright lasers into the sky, creating artificial stars in the night sky. It then uses the blurring of the laser spots to inform a computer-controlled mirror that constantly changes shape, correcting for the atmosphere's effects so that MUSE can take a crisper picture. There are two adaptive optics modes: narrow-field mode, which can image small points of the sky with high precision, and wide-field mode, which can image larger parts of the sky but only correct for a kilometer-thick swath of atmospheric distortion, according to an ESO press release. Having sharp images of things in space is important if you want to study what planets, stars, nebulae, and other objects are made of and how they formed. The ESO will continue updating its fleet of instruments to get better and better resolution. But for now, dang—we're impressed.
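The tomography idea described above — several laser guide stars sampling the turbulence from slightly different directions, combined into one estimate — can be illustrated with a toy sketch. Everything here is a hypothetical simplification (real tomography reconstructs a 3-D turbulence profile, not a single number, and the function names are invented): the point is only that combining several noisy guide-star measurements beats relying on one.

```python
import random

def measure_guide_star(true_distortion: float, noise: float = 0.3) -> float:
    """One laser guide star's noisy measurement of the atmospheric distortion."""
    return true_distortion + random.gauss(0.0, noise)

def tomographic_estimate(true_distortion: float, n_lasers: int = 4) -> float:
    """Combine measurements from several guide stars (the VLT facility uses
    four lasers); here the combination is a simple average."""
    samples = [measure_guide_star(true_distortion) for _ in range(n_lasers)]
    return sum(samples) / len(samples)

random.seed(1)
true_phase = 0.8  # arbitrary "true" distortion, in arbitrary units
print(f"single-star error:  {abs(measure_guide_star(true_phase) - true_phase):.3f}")
print(f"four-star estimate error: {abs(tomographic_estimate(true_phase) - true_phase):.3f}")
```

Averaging four independent measurements halves the noise standard deviation, which is the statistical core of why multiple laser beacons give a better correction than a single one.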
https://gizmodo.com/new-super-crisp-images-of-neptune-show-how-far-our-tele-1827683475
Stellar jets meander lazily through a star field in new images captured from Chile by the international Gemini Observatory, a program of NSF's NOIRLab. The gently curved stellar jets are the outflow of young stars, and astronomers suspect that their winding appearance is caused by the gravitational pull of companion stars. These crystal-clear observations were made using the Gemini South telescope's adaptive optics system, which helps astronomers counteract the blurring effects of atmospheric turbulence. Young stellar jets are a common by-product of star formation and are thought to be caused by the interaction between the magnetic fields of rotating young stars and the discs of gas around them. These interactions eject twin torrents of ionized gas in opposite directions, like those shown in two images captured by astronomers using the Gemini South telescope on Cerro Pachón on the edge of the Chilean Andes. Gemini South is one half of the international Gemini Observatory, a program of NSF's NOIRLab, which comprises twin 8.1-meter optical/infrared telescopes at two of the best observing sites on the planet. Its counterpart, Gemini North, is located near the summit of Maunakea in Hawai'i. The jet in the first image, named MHO 2147, is about 10,000 light-years from Earth and lies within the galactic plane of the Milky Way, near the boundary between the constellations of Sagittarius and Ophiuchus. MHO 2147 snakes against a starry background in the image — a serpentine appearance appropriate for an object close to Ophiuchus. Like many of the 88 modern astronomical constellations, Ophiuchus has mythological roots; in ancient Greece it represented a variety of gods and heroes wrestling with a serpent. MHO 1502, the jet shown in the second image, is located in the constellation of Vela, about 2,000 light-years away. Most stellar jets are straight, but some can be wandering or knotted.
The shape of these wandering jets is thought to be related to a characteristic of the object or objects that created them. In the case of the two bipolar jets MHO 2147 and MHO 1502, the stars that created them are hidden from view. In the case of MHO 2147, the central young star, which bears the catchy identifier IRAS 17527-2439, is embedded in an infrared dark cloud — a region of cool, dense gas that is opaque to the infrared wavelengths shown in the image. The curvy shape of MHO 2147 is due to the direction of the jet changing over time, tracing a gentle curve on either side of the central star. These almost unbroken curves suggest that MHO 2147 was sculpted by continuous emission from its central source. Astronomers have found that the change in direction (precession) of the jet may be due to the gravitational influence of nearby stars acting on the central star. Their observations suggest that IRAS 17527-2439 may belong to a triple star system with stars separated by more than 300 billion kilometers (nearly 200 billion miles). MHO 1502, on the other hand, is embedded in an entirely different environment: a star-forming area known as an HII region. That bipolar jet is made up of a chain of knots, suggesting that its source, thought to be a pair of stars, emits material intermittently. These detailed images were captured by the Gemini South Adaptive Optics Imager (GSAOI), an instrument on the 8.1-meter Gemini South telescope. Gemini South is perched atop Cerro Pachón, where dry air and negligible cloud cover make for one of the best observing sites on the planet. Even atop Cerro Pachón, however, atmospheric turbulence causes the stars to blur and twinkle. GSAOI works with GeMS, the Gemini Multi-conjugate Adaptive Optics System, to cancel out this blurring using a technique called adaptive optics.
By monitoring the twinkling of natural and laser guide stars up to 800 times per second, GeMS can determine how atmospheric turbulence is distorting Gemini South's observations. A computer uses this information to fine-tune the shape of the deformable mirrors, canceling out the distortions caused by turbulence. In this case, the sharp adaptive optics images allowed more detail to be recognized in each knot of the young stellar jets than in previous studies. Notes: Astronomical objects can appear very different at different wavelengths. For example, the dust surrounding newborn stars blocks visible light but is transparent to infrared wavelengths. Something similar happens here on Earth: doctors can see inside you with an X-ray machine even though human bodies are not transparent to visible wavelengths. Astronomers therefore study the Universe across the electromagnetic spectrum to learn as much as possible about it. Adaptive optics systems on telescopes often use "natural guide stars," bright stars located close to the target of an astronomical observation; their brightness makes it easy to measure how much atmospheric turbulence distorts their appearance. Gemini South also uses artificial guide stars produced by shooting powerful lasers into the upper atmosphere.
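The measure-and-correct loop that the Gemini adaptive optics system runs up to 800 times per second can be illustrated with a toy closed-loop simulation. This is a sketch under simplifying assumptions: real systems reconstruct a 2-D wavefront from wavefront-sensor measurements and drive hundreds of mirror actuators, whereas here both the turbulence and the mirror are reduced to a single number, and all names are hypothetical.

```python
import random

def atmosphere_step() -> float:
    """Toy stand-in for the random drift the turbulence adds each cycle."""
    return random.gauss(0.0, 1.0)

def adaptive_optics_loop(n_iterations: int = 800, gain: float = 0.4) -> float:
    """Closed-loop correction: each cycle, measure the residual distortion
    seen on a guide star and nudge the deformable mirror to cancel it."""
    mirror_shape = 0.0   # scalar proxy for the mirror's current correction
    turbulence = 0.0     # scalar proxy for the atmosphere's current distortion
    residual = 0.0
    for _ in range(n_iterations):
        turbulence += 0.05 * atmosphere_step()   # atmosphere evolves slowly
        residual = turbulence - mirror_shape     # what the sensor sees on the guide star
        mirror_shape += gain * residual          # integrator-style controller update
    return residual

random.seed(0)
print(f"residual distortion after the closed loop: {adaptive_optics_loop():+.3f}")
```

The loop gain trades stability for responsiveness: because the controller only ever removes a fraction of the measured residual each cycle, the correction tracks the slowly drifting atmosphere while the residual error stays bounded near zero.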
https://4cnv.in/crystal-clear-images-of-meandering-bipolar-stellar-jets-of-young-stars-captured-with-adaptive-optics/
Taking advantage of advanced techniques to correct distortions caused by Earth's atmosphere, astronomers used the NSF-supported Gemini Observatory to capture the first images of clouds over the tropics of Titan, Saturn's largest moon. The images clarify a long-standing mystery linking Titan's weather and surface features, helping astronomers better understand the moon of Saturn, viewed by some scientists as an analog to Earth when our planet was young. The effort also served as the latest demonstration of adaptive optics, which uses deformable mirrors to enable NSF's suite of ground-based telescopes to capture images that in some cases exceed the resolution of images captured by space-based counterparts. Emily Schaller from the University of Hawai'i, Henry Roe from Lowell Observatory, and Tapio Schneider and Mike Brown, both of Caltech, reported their findings in the Aug. 13, 2009, issue of Nature. "Adaptive optics are helping our ground-based telescopes accomplish feats that have until now been capable only with telescopes in space," said Brian Patten, a program director in NSF's Astronomy Division. "Now, we can remove the effects of the atmosphere, capturing images that in some cases exceed the resolution of those captured by space-based telescopes. Investments in adaptive optics technology are really starting to pay off." On Titan, clouds of light hydrocarbons, not water, occasionally emerge in the frigid, dense atmosphere, mainly clustering near the poles, where they feed scattered methane lakes below. Closer to the moon's equator, clouds are rare, and the surface is more similar to an arid, wind-swept terrain on Earth. Observations by space probes suggest evidence for liquid-carved terrain in the tropics, but the cause has been a mystery. Regular monitoring of Titan's infrared spectrum suggests clouds increased dramatically in 1995 and 2004, inspiring astronomers to watch closely for the next brightening, an indicator of storms that could be imaged from Earth.
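The monitoring strategy described above — track a quiet brightness record and trigger follow-up observations when it spikes — can be sketched as a simple threshold test. This is a hypothetical illustration: the actual detection criterion the team used is not stated in the article, and the function name, flux values, and 3-sigma threshold are all invented for the sketch.

```python
from statistics import mean, stdev

def detect_brightening(history, latest, n_sigma=3.0):
    """Flag a night as a brightening event if the latest measurement exceeds
    the quiet baseline by more than n_sigma standard deviations."""
    baseline, spread = mean(history), stdev(history)
    return latest > baseline + n_sigma * spread

# Toy nightly flux record (arbitrary units): a quiet baseline, then a storm.
quiet_nights = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00]
print(detect_brightening(quiet_nights, 1.02))  # ordinary night -> False
print(detect_brightening(quiet_nights, 1.40))  # storm-driven spike -> True
```

A trigger like this is what lets a small monitoring telescope (here, the IRTF role) hand off to a scarce high-resolution facility (the Gemini role) only on the nights that matter.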
Schaller and her colleagues used NASA's Infrared Telescope Facility (IRTF), situated on Hawaii's Mauna Kea, to monitor Titan on 138 nights over a period of two years, and on April 13, 2008, the team saw a tell-tale brightening. The researchers then turned to the NSF-supported Gemini North telescope, an 8-meter telescope also located on Mauna Kea, to capture extremely high-resolution infrared snapshots of Titan's cloud cover, including the first storms ever observed in the moon's tropics. The team suggests that the storms may yield precipitation capable of feeding the apparently liquid-carved channels on the moon's surface, and may also have influenced weather patterns throughout the moon's atmosphere for several weeks. Read more in the Gemini press release at http://www.gemini.edu/pio/pr2009-5.php, the Lowell Observatory press release at http://www.lowell.edu/media/releases.php, and the University of Hawaii press release at http://www.ifa.hawaii.edu/info/press-releases/SchallerTitanAug09/. View a video of astronomers Henry Roe and Mike Brown discussing the observations of storm clouds in the tropics of Titan here: http://www.nsf.gov/news/news_videos.jsp?cntn_id=115388&media_id=65496&org=NSF For additional images and researcher contacts, see http://www.nsf.gov/news/news_summ.jsp?cntn_id=115388. Joshua Chamot | EurekAlert!
http://www.innovations-report.com/html/reports/physics-astronomy/storm-clouds-titan-137826.html
This is a question about something I read in a book first written back in the '50s and revised since, about telescopes: Standard Handbook for Telescope Making. On page 12 it said:

“It seems practically impossible to cast a glass disk of over 200 inches that will not crack or become otherwise distorted during the cooling period. But the advance of scientific knowledge may come up with an answer to this problem, as it has to others where the problem can be recognized and identified. If this is to happen, the world may one day see a 30-foot reflector. Such a day will be an exciting one in astronomical circles, for a telescope of this size is potentially capable of revealing the disk of a star other than the sun, something man has never seen. All telescopes now in use can do no more than record a star as a point of light, even though there are other means of determining its size and distance.”

To me, this was the most important comment in the whole book, and I've remembered it almost word for word for the more than 20 years since I first read it. My question is: is this true? Should a 30-foot reflector reveal the actual optical disk of a star other than our sun's? This seems like a big deal, and it should be one of those great scientific milestones if accomplished. If so, why haven't we seen this accomplished yet? It seems like an important precursor to photographing actual planetary bodies around foreign stars. Segmented, computer-driven telescope technology has brought us telescopes larger than 30 feet, hasn't it?

How interesting that you've remembered this prediction for so long! The answers are yes: yes, we have telescopes that large; yes, we've seen the disks of other stars; and yes, we're even directly imaging planets now. Why can Hubble succeed with such a small mirror? Well, it turns out that the real killer in the past has not been the size of the mirror but the blurring effects of the atmosphere.
(Think about looking at a penny lying at the bottom of a pool on a windy day: as the water moves, the image of the penny gets distorted and moves around.) By going above Earth's atmosphere, Hubble can avoid that problem and get very high-resolution images – such high-resolution images, in fact, that it has even recently directly imaged a large planet around Fomalhaut, the brightest star in the southern constellation Piscis Austrinus. (See http://scienceblogs.com/catdynamics/2008/11/direct_imaging_of_extrasolar_p.php for more information.) Of course, the planet shows up only as a tiny point of light. As mentioned on that site, ground-based telescopes today are competing with Hubble by using adaptive optics, a complicated engineering feat that allows us to correct for most of the distortions from the atmosphere. Using this, large telescopes like those at the Keck Observatory are able to take high-resolution images as well. In case 10-meter-class telescopes don't sound big enough, there are currently three 30-meter-class telescopes in development for the next decade. These will also be outfitted with adaptive optics systems.
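The book's prediction can be sanity-checked with the Rayleigh criterion, θ ≈ 1.22 λ/D. Here is a minimal sketch; the 0.045-arcsecond angular size used for Betelgeuse is an assumed round literature value chosen for illustration, not a figure from the answer above:

```python
# Rayleigh criterion: the smallest angle a telescope of aperture D can
# resolve at wavelength lambda is theta ~ 1.22 * lambda / D.
def diffraction_limit_arcsec(wavelength_m, aperture_m):
    theta_rad = 1.22 * wavelength_m / aperture_m
    return theta_rad * 206265.0  # 1 radian = 206,265 arcseconds

# A 30-foot (~9.1 m) mirror observing green light at 550 nm
theta_30ft = diffraction_limit_arcsec(550e-9, 9.1)

# Betelgeuse, one of the largest stars in apparent size; ~0.045 arcsec
# is an assumed value used purely for illustration.
betelgeuse_arcsec = 0.045

print(f"30-foot telescope limit: {theta_30ft:.4f} arcsec")
print(f"Betelgeuse resolvable? {betelgeuse_arcsec > theta_30ft}")
```

Ignoring the atmosphere, a 30-foot mirror's roughly 0.015-arcsecond limit is indeed finer than the apparent disks of the largest nearby stars, which is why the prediction was sound in principle; the atmosphere, not the optics, was the obstacle.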
http://www.ucolick.org/~mountain/AAA/aaawiki/doku.php?id=can_we_reveal_the_actual_optical_disk_of_another_star_than_our_sun_s&do=revisions
For the first time, astronomers have been able to combine the deepest optical images of the universe, obtained by the Hubble Space Telescope, with equally sharp images in the near-infrared part of the spectrum using a sophisticated new laser guide star system for adaptive optics at the W. M. Keck Observatory in Hawaii. The new observations, presented at the American Astronomical Society (AAS) meeting in San Diego this week, reveal unprecedented details of colliding galaxies with massive black holes at their cores, seen at a distance of around 5 billion light-years. Observing distant galaxies in the infrared range reveals older populations of stars than can be seen at optical wavelengths, and infrared light also penetrates clouds of interstellar dust more readily than optical light. The new infrared images of distant galaxies were obtained by a team of researchers from the University of California, Santa Cruz, UCLA, and the Keck Observatory. Jason Melbourne, a graduate student at UC Santa Cruz and lead author of the study, said the initial findings include some surprises and that researchers will continue to analyze the data in the weeks to come. 'This is very exciting, because we have never been able to achieve this level of spatial resolution in the infrared before,' Melbourne said. In addition to Melbourne, the research team, led by David Koo of UCSC and James Larkin of UCLA, includes Jennifer Lotz, Claire Max, and Jerry Nelson at UCSC; Shelley Wright and Matthew Barczys at UCLA; and Antonin H. Bouchez, Jason Chin, Scott Hartman, Erik Johansson, Robert Lafon, David Le Mignant, Paul J. Stomski, Douglas Summers, Marcos A. van Dam, and Peter L. Wizinowich at Keck Observatory. 
'For the first time now in these deepest images of the universe we can cover all wavelengths of light from the optical to the infrared with the same level of spatial resolution, which allows us to observe detailed substructures in distant galaxies and study their constituent stars with a precision we couldn't possibly obtain otherwise,' said Koo, a professor of astronomy and astrophysics at UCSC. The images were obtained by Wright and the Keck AO team during testing of the laser guide star adaptive optics system on the 10-meter Keck II Telescope. They are the first science-quality images of distant galaxies obtained with the new system. This marks a major step for the Center for Adaptive Optics Treasury Survey (CATS), which will use adaptive optics to observe a large sample of faint, distant galaxies in the early universe, said UCLA's Larkin. 'We've worked very hard for several years taking data around bright stars. But we have been very restricted in terms of the number and types of objects that we can observe. Only with the laser can we now reach the richest and most exciting targets, especially those with beautiful optical images from the Hubble Space Telescope,' Larkin said. Adaptive optics (AO) corrects for the blurring effect of the atmosphere, which seriously degrades images seen by ground-based telescopes. An AO system precisely measures this blurring and corrects the image using a deformable mirror, applying corrections hundreds of times per second. To measure the blurring, AO requires a bright point-source of light in the telescope's field of view, which can be created artificially by using a laser to excite sodium atoms in the upper atmosphere, causing them to glow. Without such a laser guide star, astronomers have had to rely on bright stars ('natural guide stars'), which drastically limits where AO can be used in the sky. 
Furthermore, natural guide stars are too bright to allow observations of very faint, distant galaxies in the same part of the sky, Koo said. 'The advent of the laser guide star at Keck has opened up the sky for adaptive optics observations, and we can now use Keck to focus on those fields where we already have wonderful, deep optical images from the Hubble Space Telescope,' Koo said. Because the diameter of the Keck Telescope's mirror is four times larger than Hubble's, it can obtain images four times sharper than Hubble in the near infrared now that the laser guide star adaptive optics system is available to overcome the blurring effects of the atmosphere. The images being presented at the AAS meeting were obtained in an area of the sky known as the GOODS-South field, where deep observations have already been made by Hubble, the Chandra X-ray Observatory, and other telescopes. There are six faint galaxies in the images, including two x-ray sources identified by Chandra. The x-ray emissions, combined with the disordered morphology of these objects, suggested recent merger activity, Melbourne said. Mergers can funnel large amounts of matter into the center of a galaxy, and x-ray emissions from the galactic center indicate the presence of a massive black hole that is actively consuming matter. 'We are now fairly certain that we are seeing galaxies that have undergone recent mergers,' Melbourne said. 'One of these systems has a double nucleus, so you can actually see the two nuclei of the merging galaxies. The other system is highly disordered--it looks like a train wreck--and is a much stronger x-ray source.' In addition to lighting up the galactic nucleus with x-ray emissions, mergers also tend to trigger the formation of new stars by shocking and compressing clouds of gas. So the researchers were surprised to find that the system with a double nucleus is dominated by relatively old stars and does not appear to be producing many young stars. 
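The factor-of-four claim above follows directly from the diffraction limit θ = 1.22 λ/D scaling inversely with mirror diameter. A quick check (the 2.2 μm wavelength is an assumed K-band near-infrared value, not stated in the article):

```python
def diffraction_limit_arcsec(wavelength_m, aperture_m):
    # Rayleigh criterion, converted from radians to arcseconds
    return 1.22 * wavelength_m / aperture_m * 206265.0

KECK_D = 10.0      # Keck primary mirror diameter, metres
HUBBLE_D = 2.4     # Hubble primary mirror diameter, metres
LAMBDA_K = 2.2e-6  # assumed K-band near-infrared wavelength, metres

keck = diffraction_limit_arcsec(LAMBDA_K, KECK_D)
hubble = diffraction_limit_arcsec(LAMBDA_K, HUBBLE_D)

print(f"Keck II diffraction limit : {keck:.3f} arcsec")
print(f"Hubble diffraction limit  : {hubble:.3f} arcsec")
print(f"Sharpness ratio           : {hubble / keck:.2f}")
```

The ratio is simply 10/2.4, about 4.2, matching the article's "four times sharper" once adaptive optics removes the atmospheric blur that would otherwise dominate.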
'If we are right about the merger scenario, then this merger is occurring between two galaxies that had already formed most of their stars billions of years before and did not have a lot of gas left over to make new stars,' Melbourne said. If additional study shows that such objects are common further back in time, these observations could help explain one of the puzzles of galaxy formation. According to the prevailing theory of hierarchical galaxy formation, large galaxies are built up over billions of years through mergers between smaller galaxies. Since mergers trigger star formation, it has been difficult to explain the existence of very large galaxies that lack significant populations of young stars. 'One idea is that you can have a so-called dry merger, where two galaxies full of old stars but little gas merge without forming many new stars. What we are seeing in this object is consistent with a dry merger,' Melbourne said. 'Even in a dry merger, there may still be enough gas to feed the black hole, producing x-ray emissions, but not enough to yield a strong burst of star formation.' Further observations at mid- to far-infrared wavelengths, expected later this year from the Spitzer Space Telescope, may help confirm this. The Spitzer data will provide a better indication of the dust content of the galaxy, a crucial variable in interpreting these observations, Melbourne said. The laser guide star adaptive optics system was funded by the W. M. Keck Foundation. The artificial laser guide star system was developed and integrated in a partnership between the Lawrence Livermore National Laboratory and the W. M. Keck Observatory. The laser was integrated at Keck with the help of Dee Pennington, Curtis Brown, and Pam Danforth. The NIRC2 near-infrared camera was developed by the California Institute of Technology, UCLA, and the Keck Observatory.
The Keck Observatory is operated as a scientific partnership among Caltech, the University of California, and the National Aeronautics and Space Administration. This work has been supported by the Center for Adaptive Optics, a National Science Foundation Science and Technology Center managed by UC Santa Cruz.
https://www.physlink.com/news/012305CollidingGalaxies.cfm
New telescope technology improves view of 'blandest' Uranus Sen—At one time, we only knew Uranus as a small point of light in a telescope. Then, 205 years after its discovery in March 1781, scientists excitedly geared up for a close-up view with the Voyager 2 spacecraft. The ship sped by the distant planet, taking a flurry of images with its visual-light camera. It revealed a practically featureless blue world sheathed in methane. Unfortunately for Uranus scientists, its moons appeared more interesting. Two of them, Oberon and Titania, the largest, were discovered on this day in 1787 by William Herschel. "Uranus got the reputation as the blandest planet in the solar system. As far as Voyager was concerned, that was certainly justified," recalled Larry Sromovsky, a University of Wisconsin-Madison planetary scientist who was on the Voyager team. That impression was wrong. Space missions take decades to plan. After the featureless ball of Uranus made its way into newspapers and television broadcasts, public attention turned to other planets - such as Mars. But Sromovsky, who will probably face Earth-bound observations of Uranus for the rest of his scientific career, takes comfort in how much telescope technology has improved. The first breakthrough came after the Hubble Space Telescope turned a near-infrared camera towards Uranus in the 1990s. The telescope, launched in 1990, sits above Earth's planet-blurring atmosphere. It turned out that looking at Uranus in other wavelengths was the key. The University of Arizona's Erich Karkoschka and his collaborators found several cloud features on Uranus, and through tracking them estimated the blistering wind speed found on the planet. Today, scientists believe winds can reach as high as 900 kilometers an hour. "It's looking at salt on a black surface, instead of a white surface. You can see things," Sromovsky said. But even Earthly telescopes have better resolution than astronomers dreamed of centuries ago.
Through an advance called adaptive optics, telescopes can autonomously adjust their mirrors to compensate for the constantly shifting atmosphere. Hubble has a relatively small mirror, at 2.4 metres across. Sromovsky's instrument of choice today is the 10-metre W.M. Keck Observatory in Hawaii. Although it is best known for exoplanet hunting, the telescope's resolution allows it to see large features on Uranus. Keck stands apart from other adaptive optics systems because it can calibrate itself on a planet, which looks relatively large in a telescope. Most other telescopes require a point of light, such as a star. Because Keck can calibrate on Uranus itself, observations of the planet are easier: the calibration and the observations are made on an object of the same brightness, Sromovsky pointed out. Other telescopes require calibration using one of Uranus' moons. Just recently, Sromovsky put the telescope to good use. Sromovsky and his colleagues examined the clouds of Uranus to see what sort of weather the planet has. They found cloud features at the planet's south pole that have never been seen before. "It's insanely more detailed," Sromovsky said. While most future Keck projects are on hold due to uncertainty in U.S. federal astronomy funding, Sromovsky does have hours of recent Hubble observations of Uranus in the can. "It will tell us something about the distribution of methane in the atmosphere of Uranus," he said, adding this follows on from puzzling 2002 observations showing depletion of methane at high latitudes. But why the methane is depleted is still a mystery. It's just one reason Sromovsky is eager to return to Uranus with a spacecraft. "I'm not real optimistic about anything significant happening in that area," he added. "[American] planetary budgets seem to be shrinking rather than growing."
https://www.sen.com/news/new-telescope-tech-improves-view-of-blandest-uranus.html
Starlight is strongly affected by temperature fluctuations in the Earth's atmosphere. Winds blow these fluctuations across the telescope, causing stellar images to appear to shift in position and to break up into speckles. Over time, the image blurs, ending up many times larger than it would have been in the absence of the atmosphere. As a result, the resolution and sensitivity of ground-based telescopes are significantly degraded from their ideal values. Seeing, as astronomers refer to these degradations, varies with the site, the season and the time of day, but it is always there, and always a problem.

One way of avoiding atmospheric seeing problems is to place a telescope in orbit above Earth's atmosphere. The Hubble Space Telescope (HST), for example, can achieve images well below 0.1 arc second in width, whereas seeing-limited images on Mauna Kea are at best three times larger. Even small telescopes in space are very expensive, however, and it is hard to launch large-aperture telescopes. Consequently, space doesn't offer the large collecting area needed for high sensitivity: one Earthbound eight-meter telescope has more than ten times the collecting area of the Hubble Space Telescope. So we have high-resolution telescopes in space, but with low sensitivity, and large-aperture telescopes on the ground, but with performance seriously degraded by seeing.

Adaptive Optics (AO) is a growing technology area that tries to restore the performance inherent in large ground-based telescopes by partially correcting the effects of the atmosphere. The simplest adaptive optics system is one that corrects for the seeing-induced tip-tilt excursions of the stellar image. The position of the star is constantly measured and then corrected by steering the light beam with a fast controllable mirror.
Since image tilt changes with a characteristic time scale of order 0.1 seconds, the correction system must operate about ten times faster in order for the corrections to be valid for a reasonable fraction of the time. Tip-tilt systems are in use on many telescopes and can narrow the image width by about a factor of two. The next step is to correct more than just the wandering of the image, by correcting higher-order errors, such as focus, created by the atmosphere. If enough of the starlight can be restored to its state prior to entering the atmosphere, then the image core is close to its ideal value, and we can have large, high-sensitivity telescopes that also offer high resolution.

The key components of any AO system are a Wave Front Sensor (WFS) to measure the incoming light and a Deformable Mirror (DM) to correct it. Most AO systems in the world measure the slope of the incoming wave front and then use push-pull actuators behind a thin mirror to correct the wave front. In 1988 Francois Roddier, working at the IfA, discovered a new way to measure the starlight and showed that, when coupled to a different type of correcting element, a very powerful, very simple AO system could be built. These curvature adaptive systems were developed at the IfA and have slowly grown in use around the world. In June 1999, the eight-meter Gemini telescope was dedicated on Mauna Kea using a 36-element curvature system built at the IfA. Current research at the IfA is focused on extending the application of curvature adaptive systems by increasing understanding of these systems, developing a ready source of components and demonstrating high-performance systems.

The IfA's Adaptive Optics Group currently consists of Christ Ftaclas, Mark Chun, Peter Onaka, and Doug Toomey. Adaptive Optics Research at the University of Hawaii is supported by the National Science Foundation, the Gemini Observatories and the Institute for Astronomy.
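The tip-tilt loop described above can be sketched as a toy closed-loop simulation: the sensor measures the residual image offset, and an integrator controller nudges the steering mirror toward it each cycle. The random-walk atmosphere model, loop gain, and noise amplitude below are illustrative assumptions, not IfA system parameters:

```python
import random

def simulate_tip_tilt(n_steps=2000, gain=0.5, seed=1):
    """Toy closed-loop tip-tilt correction.

    The atmosphere imposes a slowly drifting image offset (a random walk);
    each iteration the sensor measures the residual offset and the steering
    mirror is moved a fraction `gain` of the way toward cancelling it.
    Returns RMS image motion without and with correction.
    """
    rng = random.Random(seed)
    atmos = 0.0    # atmospheric tilt (arbitrary units)
    mirror = 0.0   # accumulated mirror correction
    raw, resid = [], []
    for _ in range(n_steps):
        atmos += rng.gauss(0.0, 0.05)   # slow random-walk drift
        residual = atmos - mirror       # what the wave front sensor sees
        mirror += gain * residual       # integrator: chase the residual
        raw.append(atmos)
        resid.append(residual)
    rms = lambda xs: (sum(x * x for x in xs) / len(xs)) ** 0.5
    return rms(raw), rms(resid)

uncorrected, corrected = simulate_tip_tilt()
print(f"RMS jitter uncorrected: {uncorrected:.3f}, corrected: {corrected:.3f}")
```

Because the loop runs much faster than the drift, the residual stays near the per-step noise while the uncorrected motion grows without bound; this is the "factor of ten faster" requirement in miniature.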
http://www.ifa.hawaii.edu/research/Adaptive_optics.shtml
Last December, astronomers commanded the world’s biggest telescope, perched near the windswept summit of Hawaii’s Mauna Kea, to gaze deep into the Andromeda galaxy. The W.M. Keck II Telescope had studied the galaxy, one of our nearest neighbors, many times before, but now the stakes were higher. The 10-meter instrument had recently been outfitted with a rapidly adjustable mirror designed to minimize the distortions imposed by Earth’s turbulent atmosphere. If the new system worked, astronomers would see details no ground-based telescope had ever seen before.

Inside the telescope’s control room 11,000 feet below the mountain’s summit, engineer D. Scott Acton and his colleagues closely monitored Keck’s progress. One hundred stars, near-pinpoints of light, popped into view. “It was just incredible,” recalls Acton, “to realize we were sitting there, the first humans to see those stars.” Such images graphically reveal the transformation the Keck II Telescope has undergone. It and its twin, Keck I, were already renowned for their ability to capture sharp images of the heavens. Thanks to its corrective mirror, which flexes hundreds of times a second to compensate for Earth’s rapidly changing atmosphere, Keck II now can take near-infrared pictures that are 20 times more detailed than before. Some of the images have four to five times the resolution recorded in infrared light by the Hubble Space Telescope.

Pinpoint image

Light from a distant star or galaxy races across space like ocean waves moving in parallel across the sea. Each crest marks a wave front, a surface moving uniformly at constant speed.
Under ideal conditions, a telescope intercepting and focusing such a wave would form a pinpoint image corresponding to the original light source. Earth’s atmosphere, however, is not a uniform optical medium. Variations in temperature and density create patches of air that deflect or slow the light passing through them. When a wavefront runs into the atmosphere, it breaks into an incoherent mess, and a telescope attempting to focus this light creates a fuzzy, quivering blob. If Earth had no atmosphere, a large ground-based telescope could concentrate light from a heavenly object into a spot. Its width would be determined only by diffraction—the unavoidable spreading of light rays as they pass through an optical system. Even at its calmest, however, the atmosphere blurs star images to a diameter at least 10 times greater than a large telescope’s natural diffraction limit. Not only does the atmosphere cause the image to wobble, it also changes the brightness from moment to moment. In short, stars—and galaxies—appear to twinkle, and astronomers try their hardest to take the twinkle out. They have two options. They can launch observatories into space, well above the troublesome atmosphere. That’s a costly proposition, however, and not everyone has a spare $2 billion for a device like the orbiting Hubble Space Telescope. For telescopes on the ground, scientists can try to undo the damage that the atmosphere inflicts. Some have begun doing just that. Their first step is to assess the distortion. Because the blurring changes so quickly, this measurement must be repeated many times a second. Astronomers have devised systems that measure twinkling by analyzing the image of a nearby reference source, such as a bright star near the object of interest. First, they measure how much the reference star’s appearance deviates from that of a point source of light. 
Then a computer calculates how to cancel out the twinkling by altering the shape of a small deformable mirror within the telescope. Tiny pistons in back of the mirror change its shape hundreds of times a second. “We have this nonplane wave that hits the deformable mirror, but by the time it bounces off, it’s a plane wave again,” says Acton. The basic idea of such systems, known as adaptive optics, was first proposed by astronomers in the 1950s. They came up with a practical design more than a decade ago, working from earlier military designs to detect faint targets, such as spy satellites. Several adaptive optics systems are now in use, with Keck II ranking as the biggest telescope to get this fix. Its deformable mirror features 349 pistons that can push and pull up to 672 times a second. Engineers plan to install a similar device on Keck I this summer. “The idea [of using adaptive optics in astronomy] has been around for a decade, but now we’re going from a formula to the real thing,” says astronomer John C. Mather of NASA’s Goddard Space Flight Center in Greenbelt, Md. “It’s a very, very simple concept, but it’s almost impossible to do in practice,” notes Acton. Among the obstacles, an adaptive optics system requires a bright reference source close to the heavenly object that an astronomer wishes to study. That requirement typically permits a system to observe only a small fraction of the sky. Indeed, Keck’s current setup can operate over less than 10 percent of the heavens. To expand their horizons, scientists recently developed laser systems that create reference stars on demand. Intense laser light directed into the sky tickles a layer of sodium atoms 90 kilometers above Earth. The radiation emitted by the stimulated atoms acts as an artificial star. A laser system is expected to be in operation at Keck II by the end of the year, Acton says. Researchers are also testing systems that have multiple lasers and more than one deformable mirror. 
These would enable astronomers to more precisely eliminate atmospheric distortion and enlarge adaptive optics’ field of view. A more intractable problem is that adaptive optics works best at long wavelengths, where atmospheric distortion has less effect. For instance, it’s much more effective for observations in the near-infrared than in visible light. Keck’s enhanced vision has already inspected a range of celestial sights—storms on Neptune, a lava fountain on Jupiter’s moon Io, brown dwarfs in the Milky Way, and black holes in galaxies far beyond. Acton and other astronomers unveiled the new images in January at a meeting in Atlanta of the American Astronomical Society. “As you sit at the control room for this adaptive optics system at the telescope night after night after night, it gets to become routine,” says Acton. “But every now and then, I stop and really think about what we’re doing. . . . If you look at some of the higher-resolution images that we’ve taken, they’re equivalent to pointing a telescope at somebody who’s standing 250 miles away from you and saying, ‘Oh, I know that dude.'” Giant storm Last May, Keck II homed in on a giant storm featuring ferocious winds. Claire E. Max of the Lawrence Livermore (Calif.) National Laboratory and her colleagues were using the system to record the sharpest images of Neptune ever taken from Earth. Other telescopes, including Hubble, had already viewed this storm, located in the planet’s southern hemisphere. However, they could not accurately discern its size. Without adaptive optics, it had appeared to cover one-third of the planet’s disk, but the sharper Keck II observations show that the storm is less than one-tenth that size. Such information will be crucial to understanding how the storm arose and maintains its shape, says Max. Unlike Earth, Neptune gives off more energy than it receives from the sun. 
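The statement above that adaptive optics works best at long wavelengths can be quantified with the Fried parameter r0, the patch size over which the atmosphere is roughly coherent, which grows as λ^(6/5). The 20 cm value assumed for r0 at 500 nm is a typical good-site figure used here for illustration, not a number from the article:

```python
def fried_parameter(wavelength_m, r0_500nm=0.20):
    """Fried parameter r0 scales as lambda^(6/5) for Kolmogorov turbulence.

    r0_500nm = 0.20 m is an assumed typical value for a good site.
    """
    return r0_500nm * (wavelength_m / 500e-9) ** 1.2

def actuators_needed(aperture_m, wavelength_m):
    """Rough actuator count ~ (D / r0)^2 to correct a D-metre aperture."""
    r0 = fried_parameter(wavelength_m)
    return (aperture_m / r0) ** 2

# Keck's 10 m aperture: visible light vs. K-band near-infrared
visible = actuators_needed(10.0, 550e-9)
near_ir = actuators_needed(10.0, 2.2e-6)
print(f"~{visible:.0f} actuators needed at 550 nm")
print(f"~{near_ir:.0f} actuators needed at 2.2 um")
```

Under these assumptions a full visible-light correction of a 10-meter mirror would demand roughly two thousand actuators, while the near-infrared needs well under a hundred, which is why Keck's 349-piston mirror delivers its sharpest results in the infrared.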
The heat generated from its core, which is still undergoing gravitational contraction, powers the violent activity at the planet’s surface, including winds blowing up to 1,800 kilometers per hour. In 1989, Voyager flew past Neptune and spied another storm, dubbed the Great Dark Spot because of its appearance in visible light. That spot, about 20° south of the planet’s equator, has since disappeared, and planetary scientists are curious if the new storm, which is much farther south, bears any similarity. One clue that it may differ is that the storm region, which appears bright in near-infrared light, does not look dark in visible light. Max and her colleagues also viewed the surface of Saturn’s moon Titan, which is surrounded by a hydrocarbon haze. Theory suggests that sunlight breaks some of the methane in Titan’s atmosphere into ethane, which may rain down on the surface and form lakes or oceans. Keck’s near-infrared portrait shows a surface mottled with light and dark patches. The images are sharp enough that Max hopes to use a newly installed Keck II spectrograph to determine the composition of the patches. “The real promise of these Titan images is actually to be able to lay a spectrograph slit across them . . . and not have the spectrum at one point be contaminated by light that dribbles in from some other place,” she says. In particular, Max hopes to determine whether the dark patches are lakes. If she succeeds, adaptive optics will have achieved a major discovery about Titan well before the Saturn-bound Cassini mission parachutes a probe onto the icy moon in 2004. A chance observation of Jupiter’s volcanically active moon Io appears to have had a special payoff. Last November, the Galileo spacecraft obtained a close-up view of Io. The craft observed a fiery fountain of lava shooting more than a mile above a volcanic crater on the moon’s surface. The lava was so hot and bright that it overexposed part of the Galileo image, leaving a bright blur.
Two days later, Keck II happened to take a look at Io. The Keck image shows a bright spot extending from the illuminated edge, or limb, of the moon. “We honestly didn’t know that Galileo had discovered a rarely seen lava fountain, a kind of Mount Saint Helens, on Io,” says Acton. The orientation of the bright feature seen by Keck suggests it’s the same lava fountain “shooting out off the limb of the moon,” he says. “Small” structures Viewing the center of our galaxy, the site of a suspected black hole, the Keck system can discern structures as small as five times the size of the solar system. This enables astronomers to examine structures one-tenth the size previously attainable. Keck’s adaptive optics also is shedding light on distant galaxies, which appear as they were when the universe was young. These youthful galaxies tend to be smaller than today’s, and their visible light has been shifted to the infrared by cosmic expansion, notes James E. Larkin of the University of California, Los Angeles. In some of the more remote galaxies, billions of light-years from Earth, Keck can discern features as small as 8,000 light-years across. That’s one-fifteenth the size possible without adaptive optics. Last April, Larkin and his colleagues used the telescope to take the sharpest image ever recorded of a distant, faint galaxy. This galaxy happened to lie near a bright reference star. Observations reveal that the galaxy resides about 4 billion light-years away, indicating that it hails from a time when the cosmos was about two-thirds its current age. The researchers found that the core of the galaxy, a region extending 1,000 light-years from the center, shines brightly. Either the region is undergoing a burst of star formation or a giant black hole has gathered stars around it, Larkin says. Velocity measurements of the stars, which he hopes to take with the new spectrograph, should determine why this region is abuzz with activity. 
Hunting planets

Another revolution may be waiting in the wings at Keck. A near-infrared camera that astronomers hope to install within the year may prove an invaluable tool for hunting planets outside the solar system, says David E. Trilling of the University of California, Santa Cruz. The camera features a coronagraph, a mask that blocks out the glaring light from stars and allows astronomers to search for the extremely faint light from companion objects, either brown dwarfs or planets. In combination with the adaptive optics system, the camera might produce the first images of extrasolar planets, says Trilling. “I have a feeling that bizarre and wonderful planets and brown dwarfs will fall out of the sky almost immediately,” he predicts.
https://www.sciencenews.org/article/getting-clear-view
Two Livermore Scientists to be Inducted into Alameda County Women's Hall of Fame

LIVERMORE, Calif. — Lawrence Livermore National Laboratory scientists Claire Max and Ellen Raber are two of nine awardees to be inducted into the Alameda County Women’s Hall of Fame. The Alameda County Board of Supervisors, the Alameda County Health Care Foundation and the Commission on the Status of Women recognized Max, a Berkeley resident, and Raber, a Livermore resident, for their contributions in the fields of science and the environment, respectively. They will be honored March 8 during the 10th Annual Women’s Hall of Fame Awards ceremony.

Max, an astrophysicist with LLNL’s Institute of Geophysics and Planetary Physics and associate director of the National Science Foundation’s Center for Adaptive Optics based at UC Santa Cruz, specializes in the use of adaptive optics to minimize the blurring effects of turbulence in the Earth’s atmosphere so that astronomers can view celestial objects through ground-based telescopes as clearly as if the telescopes were space-based. The system uses light from a relatively bright star to measure atmospheric distortions and to correct for them, producing images with unprecedented detail and resolution.

In the early 1980s, at about the same time as French researchers, Max proposed attaching a sodium laser guide star to large astronomical telescopes to better view virtually any star in the galaxy. The laser guide star system is used at UC’s Lick Observatory and the W.M. Keck Observatory in Hawaii. Max also was the founding director of Livermore’s IGPP and in 2002 was named a fellow of the American Academy of Arts and Sciences for her astrophysics research. She is a 29-year Laboratory employee.

"It’s a great honor to be selected for this distinction," Max said. "I’m happy to serve as a role model and contribute to the betterment of the county that I live in."
Raber, head of LLNL’s Environmental Protection Department, is a national leader in research and development efforts in pollution prevention, waste management, environmental restoration, and environmental monitoring and analysis. In that role, Raber has led the effort to accelerate the opening of LLNL’s state-of-the-art Decontamination and Waste Treatment Facility, which replaces an older World War II-era facility. Raber has been a strong proponent of making sure that LLNL operates the best environmental facilities to ensure protection of its workers and the community.

Raber’s most recent accomplishment has been to lead the development of a decontaminating gel (known as L-gel) that is effective against chemical and biological warfare agents but then breaks down into environmentally acceptable byproducts. The patented gel is available for use in civilian facilities in response to a terrorist incident or an accidental release of a biological or chemical agent.

In her 22 years at Livermore, Raber has used her background in geochemistry to resolve a variety of environmental issues related to geothermal energy, underground coal gasification, the strategic petroleum reserve and nuclear waste management. She has also worked on technology to provide award-winning solutions for chemical weapons treaty verification and intelligence community applications.

"The Laboratory takes its environmental responsibilities to the community and the nation very seriously," Raber said. "It is a privilege that both Claire and I will serve as the Laboratory’s inductees into the Hall of Fame."

The Alameda County Board of Supervisors, the Alameda County Commission on the Status of Women and the Alameda County Health Care Foundation established the Alameda Women’s Hall of Fame in 1993.
The purpose of the hall of fame is to recognize outstanding women for their achievements and contributions to the overall well-being of Alameda County and its residents.

Founded in 1952, Lawrence Livermore National Laboratory is a national security laboratory with a mission to ensure national security and apply science and technology to the important issues of our time. Lawrence Livermore National Laboratory is managed by the University of California for the U.S. Department of Energy’s National Nuclear Security Administration. Laboratory news releases and photos are also available on the Web at http://www.llnl.gov/PAO and on UC Newswire.
https://www.llnl.gov/news/two-livermore-scientists-be-inducted-alameda-county-womens-hall-fame
Telescopes are essentially giant eyes that collect more light than our own eyes can. By combining this light-collecting capacity with cameras and other instruments, such as spectrographs that disperse the light into spectra, we can record and analyze light in detail.

Light-collecting area is the first key property of a telescope: it determines how much light the telescope can collect at one time. For example, a 10-meter telescope has a mirror 10 meters in diameter; because the collecting area of a circular mirror grows with the square of its diameter, such a telescope gathers four times as much light as a 5-meter telescope.

The second key property of a telescope is angular resolution, the smallest angle over which we can tell that two dots (or two stars) are distinct. The figure above shows the same two stars; the image on the left was taken using a telescope with low angular resolution, while the image on the right has higher angular resolution. Telescopes collect far more light and allow us to see far more detail than we could with the naked eye. The human eye has an angular resolution of about 1 arcminute, or 1/60th of a degree; our eyes cannot see stars separately if the angle between them is less than 1 arcminute. In contrast, the Hubble telescope has an amazing angular resolution of 0.05 arcsecond for visible light, meaning that you could read this document from a distance of over 1 kilometer. The angular resolution determines the amount of detail we can see with a telescope.

Telescopes come in two basic designs: refracting and reflecting. Refracting telescopes operate much like the eye, using transparent glass lenses to collect and focus light; Galileo's earliest telescopes were refracting telescopes. Instead of a lens, which can become very thick and heavy, reflecting telescopes use a precisely curved mirror to gather light. The primary mirror reflects the gathered light to a secondary mirror that lies in front of it. The secondary mirror then reflects the light to a focus at a place where the eye or instruments can observe it.
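The two key properties described above can be put into numbers. Here is a minimal sketch using the geometric area of a circular aperture and the standard Rayleigh diffraction criterion (θ ≈ 1.22 λ/D, not stated in the lesson but the usual textbook formula); the 7 mm pupil diameter is an assumed typical value for a dark-adapted eye:

```python
import math

def collecting_area(diameter_m):
    """Light-collecting area of a circular aperture, in square meters."""
    return math.pi * (diameter_m / 2) ** 2

def rayleigh_resolution_arcsec(wavelength_m, diameter_m):
    """Diffraction-limited angular resolution (Rayleigh criterion),
    converted from radians to arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return theta_rad * 206265  # arcseconds per radian

eye = collecting_area(0.007)   # ~7 mm dark-adapted pupil (assumed value)
keck = collecting_area(10.0)   # a 10-meter telescope such as Keck
print(f"A 10 m telescope gathers ~{keck / eye:,.0f}x more light than the eye")

# Hubble's 2.4 m mirror at ~550 nm (visible light) comes out near the
# ~0.05 arcsecond figure quoted in the lesson:
print(f"Hubble: {rayleigh_resolution_arcsec(550e-9, 2.4):.3f} arcsec")
```

Note how the area ratio, not the diameter ratio, sets the light-gathering advantage: the 10 m aperture is about 1,400 times wider than the pupil but collects roughly two million times more light.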
This can be a hole in the primary mirror or along the side of the telescope.

Telescopes today are specialized and can observe different wavelengths of light beyond the visible spectrum, allowing us to learn far more than we could from visible light alone. Astronomers have also developed interferometry, a technique in which multiple smaller telescopes work together to obtain the angular resolution of a much larger single telescope. An example is the Karl G. Jansky Very Large Array in New Mexico, which consists of 27 radio telescopes that can be moved along rail tracks. Working together, the telescopes achieve an angular resolution that would otherwise require a single radio telescope 40 kilometers in diameter.

Modern-day astronomers also use telescopes to study sources of information other than light: neutrinos, cosmic rays and gravitational waves. Neutrinos are subatomic particles produced by nuclear reactions in stars; astronomers have used neutrino telescopes underwater and in deep mines to learn about the Sun and stellar explosions. High-energy subatomic particles from space, called cosmic rays, are also being studied by astronomers, though so far little is known about their origin. Finally, gravitational waves, as predicted by Einstein's general theory of relativity, are now being directly observed with the recent development of gravitational wave telescopes.

Telescopes are placed in space so that they can avoid light pollution and the distorting effects of the Earth's atmosphere, such as turbulence from air movement. Examples of telescopes in space include the Chandra X-ray Observatory, the Hubble Space Telescope and the James Webb Space Telescope, planned for 2018. Much of the electromagnetic spectrum can be observed only from space and not from the ground: only radio waves, visible light, the longest wavelengths of ultraviolet light, and small parts of the infrared spectrum can be observed from the ground.
Telescopes in space allow us to observe the rest of the light that does not penetrate Earth's atmosphere. The twinkling of stars occurs because of air turbulence: stars tend to twinkle more on windy nights and when they are close to the horizon, while above the atmosphere, in space, they do not twinkle at all. However, building large telescopes on the ground is much cheaper than launching and maintaining a telescope in space, and ground-based telescopes with a new technology called "adaptive optics" are now able to overcome much of the blurring caused by our atmosphere. All of these new developments are helping astronomers learn more about the universe than ever before.
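The interferometry claim above can be checked with the same diffraction-limit formula (θ ≈ 1.22 λ/D, the standard Rayleigh criterion, which the lesson does not state explicitly). In this sketch, the 6 cm observing wavelength is an assumed example value, and 25 m is the diameter of an individual VLA dish:

```python
ARCSEC_PER_RAD = 206265

def resolution_arcsec(wavelength_m, diameter_m):
    """Diffraction-limited angular resolution via the Rayleigh criterion."""
    return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RAD

WAVELENGTH = 0.06  # 6 cm radio wavelength -- an assumed example value

# One 25 m VLA dish versus the array's 40 km maximum baseline:
dish = resolution_arcsec(WAVELENGTH, 25.0)
baseline = resolution_arcsec(WAVELENGTH, 40_000.0)
print(f"Single 25 m dish: {dish:.0f} arcsec")
print(f"40 km baseline:   {baseline:.2f} arcsec")
print(f"Improvement:      {dish / baseline:.0f}x")
```

Because resolution scales as 1/D, stretching the effective aperture from 25 m to 40 km sharpens the resolution by a factor of 1,600, which is why the array can mimic a 40-kilometer dish no one could ever build.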
http://instantcertcredit.com/courses/4/lesson/233