The Hobbit: An Unexpected Journey – reviews
The first reviews of Peter Jackson's fantasy epic are beginning to emerge from Middle-earth (and beyond) ... so, is it any good?
The first reviews of The Hobbit: An Unexpected Journey have started to appear online and the verdict so far is…well, rather mixed.
While some critics love the film’s charming characters and gorgeous New Zealand setting, about as many again have taken issue with its pacing and length.
Review aggregator Rotten Tomatoes suggests that The Hobbit’s average rating thus far is 6.6/10, though as the film still isn’t out for another nine days, that’s bound to change for the better before long.
Anyway, here’s what the critics have had to say so far:
Collider gave the film a warm reception, calling it a “classic adventure quest”: “The Hobbit: An Unexpected Journey is a classic adventure quest in the making; packed with colorful characters, gorgeous settings and plenty of action, the only setbacks are technical ones.
“While The Hobbit: An Unexpected Journey is destined to be a stand-alone adventure classic in the vein of The Neverending Story, Willow and Legend, it is surely strongest when viewed as a satisfactory part of a greater whole.”
However, Variety criticised the film’s length and overambitious cinematography in its review, which read: “While Peter Jackson’s prequel to “The Lord of the Rings” delivers more of what made his earlier trilogy so compelling — colorful characters on an epic quest amid stunning New Zealand scenery — it doesn’t offer nearly enough novelty to justify the three-film, nine-hour treatment, at least on the basis of this overlong first instalment…
“More disconcerting is the introduction of the film’s 48-frames-per-second digital cinematography, which solves the inherent stuttering effect of celluloid that occurs whenever a camera pans or horizontal movement crosses the frame — but at too great a cost.
“Consequently, everything takes on an overblown, artificial quality in which the phoniness of the sets and costumes becomes obvious, while well-lit areas bleed into their surroundings, like watching a high-end home movie.”
Similarly, The Hollywood Reporter took issue with the length of the film, saying that it’ll be a treat for fans of Tolkien’s novel but a “bit of a slog” for the general viewer.
It said: “Spending nearly three hours of screen time to visually represent every comma, period and semicolon in the first six chapters of the perennially popular 19-chapter book, Jackson and his colleagues have created a purist’s delight, something the millions of die-hard fans of his Lord of the Rings trilogy will gorge upon.
“In pure movie terms, however, it’s also a bit of a slog, with an inordinate amount of exposition and lack of strong forward movement… In Jackson’s academically fastidious telling, it’s as if The Wizard of Oz had taken nearly an hour just to get out of Kansas.”
Tolkien fan site The One Ring offered up a concise review, which picked up on some “confusing” elements in the film’s plot.
Its reviewer wrote: “To be honest, yes there are bits that are a bit confusing and may feel misplaced, but I will want to watch it a second time before I pass final judgement on this film. I loved the ending, it is rather awesome.”
The Playlist, on the other hand, said that the film was a “grand achievement” for its director and suggested that fans of The Hobbit are in for a “thrilling ride”.
It said: “While it will be too formulaic and familiar to some (and certainly non-fans won’t be won over), ‘The Hobbit’ is another grand achievement from director Peter Jackson.
“As epic, grandiose, and emotionally appealing as the previous pictures, ‘The Hobbit’ doesn’t stray far from the mold, but it’s a thrilling ride that’s one of the most enjoyable, exciting and engaging tentpoles of the year.”
While Cinema Blend joined the chorus of voices criticising the film’s length, the site also praised Peter Jackson’s knack for storytelling: “[While] The Hobbit feels in its first half very much like a brief story stretched far too thin, it eventually settles into its own enjoyable rhythm, a comic adventure that’s a good enough excuse to make a return visit to Middle Earth.
“An Unexpected Journey is proof that Jackson still has a knack for stories in this world, and that he may have more surprises in store as the rest of this new, unexpected trilogy unfolds.”
New Zealand-based news site Stuff.co.nz gave the film a rave write-up and praised Peter Jackson for being “back at his game”. It said: “The Hobbit: An Unexpected Journey has enough similarities to LOTR that it will appeal to fans of the trilogy, but at the same time it carries its own feel and aesthetic – to be a beautiful beast in its own right.
“Great cast, great special effects and great entertainment. Yes, Peter Jackson is back at his game, and I can’t wait to see if he keeps it up in what’s to come.”
And The New Zealand Herald also gave the film a tip-top appraisal, calling it looser, funnier and scarier than Lord of the Rings. The paper’s write-up said: “Phew. After all that, it’s a movie. A ripper of a film it is too… It’s also a film which feels looser, funnier and often outright scarier than Jackson’s last venture into this territory. | https://www.radiotimes.com/tv/fantasy/2012-12-04/the-hobbit-an-unexpected-journey-reviews/ |
Another of the famous Lord of the Rings film locations can be found at Mount Olympus in the Kahurangi National Park.
In the scenes south of Rivendell, the fellowship hid under the pinnacle rock formations, out of sight of Saruman’s black crows. Mount Olympus is situated in the remote back-country and is accessible only by helicopter.
As you land on the mountain-top, you’ll discover scenery so surreal, so wild and wonderful, that it almost feels like you’ve stepped into another world. But Peter Jackson didn’t rely on special effects to showcase New Zealand’s stunning natural landscapes, because the landscapes speak for themselves.
Photo credits: www.nelsontasman.nz. | https://www.nelsontasman.nz/visit-nelson-tasman/plan-your-trip/activities/3335-mt-olympus |
Here is the question:
IN WHICH COUNTRY WAS THE LORD OF THE RINGS TRILOGY FILMED?
Here are the options for the question:
- Switzerland
- Scotland
- Canada
- New Zealand
The Answer:
And, the answer for the question is:
New Zealand
Explanation:
New Zealand is the place to go if you want to experience what it would be like to travel to the Shire. The country is best known as the filming location for the blockbuster Lord of the Rings trilogy. New Zealand’s majestic landscape, with its towering mountain peaks, grassy plains, and quaint woods, convincingly portrayed the many regions of Middle Earth. Today, you can take tours of many of the areas where the movies were filmed, from Hobbiton to Mordor. | https://apaitu.org/in-which-country-was-the-lord-of-the-rings-trilogy-filmed/ |
Tradition comes to an end with the announcement that Amazon Studios is moving shooting of its adaptation of ‘The Lord of the Rings’ from New Zealand to the United Kingdom as production of season two gets ready to kick off.
Director Peter Jackson famously shot both ‘The Lord of the Rings’ and ‘The Hobbit’ trilogies in New Zealand for New Line Cinema.
While season one of Amazon’s take on J.R.R. Tolkien’s Middle Earth series was filmed in New Zealand, the production almost moved to Scotland following disputes with local officials, but those conflicts were resolved and filming began near Auckland.
Season two will now be shot in the United Kingdom, with pre-production slated to begin in early 2022.
Amazon Studios said in a statement, “Amazon Studios announced (Aug. 12th) that its untitled The Lord of the Rings original series will film Season Two in the United Kingdom (U.K.). The shift from New Zealand to the U.K. aligns with the studio’s strategy of expanding its production footprint and investing in studio space across the U.K., with many of Amazon Studios’ tentpole series and films already calling the U.K. home.”
“The highly anticipated The Lord of the Rings series recently wrapped principal photography on Season One in New Zealand and is scheduled to premiere on Prime Video in more than 240 countries around the world on Friday, September 2, 2022,” they continued.
Amazon Studios exec Vernon Sanders said, “We want to thank the people and the government of New Zealand for their hospitality and dedication and for providing The Lord of the Rings series with an incredible place to begin this epic journey.” He added, “We are grateful to the New Zealand Film Commission, the Ministry of Business, Innovation and Employment, Tourism New Zealand, Auckland Unlimited, and others for their tremendous collaboration that supported the New Zealand film sector and the local economy during the production of Season One.”
COO & Co-Head of TV for Amazon Studios Albert Cheng explained, “As we look to relocate the production to the U.K., we do not intend to actively pursue the Season One MoU five percent financial uplift with the New Zealand government or preserve the terms around that agreement, however, we respectfully defer to our partners and will remain in close consultation with them around next steps.”
‘The Lord of the Rings’ series will serve as a “prequel” to the events of ‘The Hobbit,’ taking place in the Second Age, and is rumored to tell the story of the rise of Sauron and the formation of the One Ring.
| https://drezzed.clownfishtv.com/amazon-moves-the-lord-of-the-rings-production-to-uk-for-second-season/ |
Peter Jackson Ends 'The Hobbit' Trilogy With 'Battle Of The Five Armies'
New Line Cinema and Metro-Goldwyn-Mayer Pictures' “The Hobbit: The Battle of the Five Armies” represents the culmination of director/co-writer/producer Peter Jackson’s 16-year journey to bring to life the richly layered universe of Middle-earth conjured nearly a century ago by J.R.R. Tolkien in his literary masterworks The Hobbit and The Lord of the Rings.
The Hobbit, or There and Back Again was first published in 1937, having emerged from the revered author, poet, university professor and philologist’s imagination as bedtime stories for his children. In the 17 years that followed, Tolkien continued to develop, expand and enrich the complex mythology of Middle-earth to produce its sprawling, apocalyptic conclusion, The Lord of the Rings. Collectively, the author’s towering modern myth has had a seismic impact on world culture, becoming among the best-selling novels ever written, and sparking the imaginations of generations of readers all over the world.
Among them was a teenaged Peter Jackson, who took his first dive into Middle-earth while traveling by train across his native New Zealand—but it wouldn’t be his last. As early as 1995, the filmmaker explored the idea of adapting The Hobbit for the screen, hoping to then move on to adapt The Lord of the Rings. Instead, Jackson ultimately reversed the journey that Tolkien himself had taken—telling the end of the story first with his landmark, Oscar-winning “The Lord of the Rings” Trilogy, then plunging back into the fully realized world he’d created to bring the mythology’s seminal origins to life with the same vast scale, technical mastery and emotional resonance in “The Hobbit” Trilogy.
To embody the iconic roles introduced in this earlier tale, the filmmakers assembled a core of gifted actors, including Martin Freeman as the Hobbit Bilbo Baggins, Richard Armitage as the Dwarf Lord Thorin Oakenshield, Luke Evans as Bard the Bowman, Evangeline Lilly as Silvan Elf Warrior Tauriel, Lee Pace as Elvenking Thranduil of the Woodland Realm, Billy Connolly as Dwarf General Dain Ironfoot of the Iron Hills, and Benedict Cumberbatch breathing life into the Trilogy’s iconic villains, the Dragon Smaug and the Dark Lord Sauron.
The new trilogy would also reunite the director with members of the celebrated cast of “The Lord of the Rings” films nearly a decade after their release, including Ian McKellen as the Wizard Gandalf the Grey; Cate Blanchett, Hugo Weaving and Orlando Bloom as High Elves Galadriel, Elrond and Legolas, respectively; Christopher Lee as the Wizard Saruman the White; Ian Holm reprising his role as the older Bilbo Baggins; and Andy Serkis returning to his memorable incarnation of Gollum in the first film, “The Hobbit: An Unexpected Journey,” as well as serving as second unit director for the entire Trilogy.
Together, Jackson, his close band of filmmaking collaborators and international ensemble cast embarked on a new adventure—plunging into a nine-month-long filmmaking journey across New Zealand to simultaneously create all three films, releasing the first film, “The Hobbit: An Unexpected Journey,” in 2012, and following a year later with “The Hobbit: The Desolation of Smaug.” This compelling cinematic journey now reaches its epic conclusion with the release of the third and final film, “The Hobbit: The Battle of the Five Armies.”
As Jackson prepares to sweep audiences back to Middle-earth one last time, he reflects that throughout his epic filmmaking odyssey through Middle-earth, his true north has always remained his passion for the artistic legacy of Tolkien and his own desire to see it brought to vibrant, visceral life on the big screen.
“When we made ‘The Lord of the Rings’ films, there was a lot of pressure because it was a big project done in a way that was unprecedented, and we didn’t have the track record then that we have now,” says the Oscar-winning filmmaker. “Those films went out into the world, and have now become part of the culture, so that created a different kind of pressure on ‘The Hobbit’ movies. But the only way you can respond to that is to be truthful to yourself as a filmmaker. With everything I’ve done in my career, I’ve tried to make films that I would enjoy as a moviegoer. To see the first two ‘Hobbit’ films be embraced by fans has been a joy, because we’re fans as well. But it’s also exciting to introduce a new generation to this world and this incredible mythology for the first time with the story where it all begins.”
The film also sets the stage for the Middle-earth audiences will encounter 60 years in its future, when the next trilogy begins with “The Lord of the Rings: The Fellowship of the Ring.” Jackson observes, “We come to understand how Bilbo’s adventure fits within the entire story and the true stakes of the Battle of the Five Armies, not just for the characters but for all of Middle-earth. Tolkien worked his way up, and we worked our way down to blend the two trilogies, which has been both a challenge and a lot of fun in terms of weaving in threads that will continue into ‘The Lord of the Rings’ films.”
As the final chapter of one epic journey and provocative prelude to the next, “The Hobbit: The Battle of the Five Armies” serves as the powerful fulcrum of the entire Middle-earth legend. “We’ve been aware that people may not watch these films in the order that they were made 20 years into the future, but will start at the beginning and watch straight through to the end,” Jackson reflects. “So, as we’ve made ‘The Hobbit’ films, we’ve consciously progressed the tone to the place where, hopefully, the audience will feel that they’ve gone on that journey into ‘The Fellowship of the Ring,’ and, ultimately, to the cataclysmic conclusion of Middle-earth in ‘The Return of the King.’ Our hope is that for future generations, all six films will be experienced as part of a single continuous saga.”
Opening across the Philippines on Dec. 12, 2014 in theaters and IMAX®, “The Hobbit: The Battle of the Five Armies” is distributed by Warner Bros. Pictures, a Warner Bros. Entertainment Company.
| https://www.filmgeekguy.com/2014/12/peter-jackson-ends-hobbit-trilogy-with.html |
The first part of director Peter Jackson’s phenomenal movie trilogy is based, of course, on the first of JRR Tolkien’s famous three books about the hobbits, elves and other creatures who populate the land known as Middle Earth.
Hobbit (a small, hairy-footed creature) Frodo Baggins (Wood) is entrusted with a dangerous ring that can give the wearer great power. On the advice of wizard Gandalf (McKellen), Frodo and his friend Sam (Sean Astin) must leave their home and travel great distances to the one place, Mordor, where it can be destroyed, and they are accompanied by a fellowship elected to protect them: Gandalf, warriors Aragorn (Mortensen) and Boromir (Sean Bean), elf Legolas (Bloom) and dwarf Gimli (John Rhys-Davies).
Translating a much-loved series of books to the screen was a massive task for Jackson to undertake – as well as including all the details and minor plot points fans would demand, he had to keep it exciting and interesting for viewers who had never read the novels – and he managed it brilliantly, filming the three movies in one go, over sixteen months, against stunning New Zealand scenery.
It’s a classic quest, packed with epic moments and an impressive cast (also including Christopher Lee as Saruman, Cate Blanchett as Galadriel, Liv Tyler as Aragorn’s love Arwen, and Hugo Weaving as Elrond). It’s dark and sinister and scary in places, funny and light in others – and brilliant throughout.
Do note that this is not a film for small children (who would have trouble sitting through the movie’s three-hour running time anyway), owing to elements such as the frightening-for-grown-ups-too Ringwraiths.
Is The Lord Of The Rings: The Fellowship Of The Ring suitable for kids? Here are our parents’ notes...
There are lots of fight scenes, the scariest being those involving the Orcs, who get impaled, stabbed, dismembered and, well, killed in various nasty ways.
Every fight scene throughout the movie involves blood, stabbings and grunting!
When Bilbo’s face changes and he snarls, it is quite startling.
The Black Riders are scary and the Orcs aren’t exactly attractive, either.
Although this is a PG, we wouldn’t recommend it to under-10s as it is a very dark movie.
If you like this, why not try: The Lord Of The Rings: The Two Towers, The Lord Of The Rings: The Return Of The King, Clash Of The Titans 2010, Dragonheart, Eragon. | https://www.movies4kids.co.uk/reviews/the-lord-of-the-rings-the-fellowship-of-the-ring-review/ |
Modern fantasy fiction owes a lot to Peter Jackson’s Lord of the Rings trilogy, with the New Zealand filmmaker taking on J. R. R. Tolkien’s novels with narrative, technical and creative mastery. Demonstrating how the contemporary blockbuster could flourish into the 21st century, Jackson’s trilogy laid down the gauntlet for other big-budget visions to match its grandeur, with very few films able to reach its greatness to this very day.
With a combined budget of approximately $270 million, the story behind the trilogy’s development is predictably just as interesting as the finished product itself, with the process involving months of prop preparation and location scouting.
Perhaps the most interesting part of the whole process happened at the very start, however, when an enthusiastic Peter Jackson was looking for producers for his fantasy epic. Pitching the project to Miramax, the filmmaker came face to face with the disgraced Hollywood producer and convicted rapist Harvey Weinstein, who was pressured by Disney at the time to make cuts to the epic proposed project.
With his sinister, egotistical personality common knowledge at this point, Weinstein turned on Jackson with a “Mr Hyde” temper for not accepting any cuts to his vision, threatening The Lord of the Rings filmmaker by saying he would replace him as the director with Quentin Tarantino. Recalling the moment, Jackson’s manager Ken Kamins explained in an interview with The Independent: “He’d threaten to get Quentin Tarantino to direct if Peter couldn’t do it in one film that was two-and-a-half hours – which was the exact opposite of what he initially told us he wanted”.
Annoyed at his treatment by Miramax and Weinstein, Jackson jumped ship and approached New Line CEO Robert Shaye, who accepted the project but requested that it be made into a trilogy, with the rest being cinema history.
The sinister impact of Harvey Weinstein had left an indelible mark, however, with several members of the cast and crew eager to make a mockery of the hateful producer as a result of his actions. Elijah Wood, the actor who plays Frodo in the trilogy, explained how the prop department took their revenge on set in an interview on Dax Shepard’s podcast Armchair Expert.
“It’s funny, this was recently spoken about because Dom [Monaghan] and Bill [Boyd] have a podcast, The Friendship Onion. They were talking to Sean Astin about his first memory of getting to New Zealand,” the actor rambled on the podcast, before explaining the incident. “He had seen these Orc masks. And one of the Orc masks — and I remember this vividly — was designed to look like Harvey Weinstein as a sort of a fuck you,” Wood hilariously recalled.
17 years after the release of the final film in the Lord of the Rings trilogy, Return of the King, producer Harvey Weinstein was indicted on 11 charges of rape and sexual battery in Los Angeles and was sentenced to a total of 23 years in prison. Chuffed at the actions of his fellow Lord of the Rings crew members, Wood further added, “I think that is OK to talk about now, the guy is fucking incarcerated. Fuck him”.
| https://faroutmagazine.co.uk/elijah-wood-orc-mask-moulded-on-harvey-weinstein/ |
The director Peter Jackson first expressed interest in making a film of the J.R.R. Tolkien book ‘The Hobbit’ in 1995. Seventeen years later, in 2012, the first of his three-part series based on the book, ‘An Unexpected Journey’, was released, and now, in 2014, his series is finally complete. The delay was due to copyright issues, but in the gap between envisioning the films and actually making them, Jackson was luckily able to create an entire film series based on another adventure set in Middle-Earth, Lord of the Rings. After the success of this award-winning film series, Jackson returned, with the copyright problems now resolved, to make a film based on his original interest, ‘The Hobbit’.
Some controversies over the length of this film series arose during its production. Originally, given the size of the book compared to the previously made ‘Lord of the Rings’, Peter Jackson planned to make it into a single film, and also conceived a second film that would focus on the story leading from the end of The Hobbit to the beginning of Lord of the Rings, but he eventually decided that there was too much to cover in The Hobbit alone and that the two parts should focus solely on this story. He did, however, include some extra segments from the appendices of Lord of the Rings in the films to pad out the story. Eventually, once all was filmed, Jackson approached the producers about splitting it into three films, since they had filmed so much good material that it seemed a shame to edit much of it out. Some have complained that the move to three films was merely so that the production companies could make more money, The Hobbit being an easily marketable franchise. Some fans in particular disliked the addition of whole storylines that were not in the original, feeling it took away from the focus of the real issues.
The films, however, are all very well made, and though there may be a few too many scenes, they are all high quality. They are similar to the Lord of the Rings in style, but a little lighter in tone, since the book was originally written for a younger audience. The films include tense action-packed scenes, deep emotional scenes and funny scenes too, as well as some great awe-inspiring landscape shots, as the location for these films was, like that of the earlier trilogy, New Zealand, which Jackson felt would mirror the landscapes of Middle Earth as described in the books.
Many familiar faces from the earlier film series returned, including Ian McKellen and Orlando Bloom, and there were guest appearances from Elijah Wood and Ian Holm as well, but the vast array of new cast members was equally fantastic, including Martin Freeman, Richard Armitage and Luke Evans. The special effects in the film are also of note, especially considering that during the first film, on a smaller budget, they had to restrict their use of computer-generated imagery, and in this film you can see they had much more freedom to create some epic scenes digitally.
So while the films can sometimes drag on a little too long, this has never bothered me, since everything is still of a high standard, and I think it is this rich and detailed background that makes the world of Middle-Earth so exciting and real. The last part of this trilogy has just left cinemas, but if you haven’t yet seen this story, I highly recommend seeing it when the series is released for home viewing.
Image from: https://p.gr-assets.com/540x540/fit/hostedimages/1380319610/689783.jpg
| https://www.kingsnews.org/articles/the-hobbit-trilogy |
It isn’t exactly a spoiler to say that most movies aren’t filmed where they’re set, especially when that setting is as fictional as the story. Even so, some shooting locations are better known than others: Every Lord of the Rings devotee knows that New Zealand is the real-life Middle Earth, whereas even some diehard “Die Hard” fans might not know where the real Nakatomi Plaza is. As you’re waiting to plan your next trip, here are five movies with surprising filming locations to watch in the meantime.
Star Wars: Tunisia
Depending on where you live, a certain galaxy far, far away isn’t actually too remote. Star Wars enthusiasts are well aware that Luke Skywalker was raised on Tatooine, a desert planet known for having two radiant suns, but what they might not know is that those scenes — and many others in George Lucas’ space-opera franchise — were filmed in Tunisia.
It wasn't just the North African nation’s mainland that served as a key location: Djerba, an island just off the coast, is where scenes involving the Mos Eisley cantina and Obi-Wan Kenobi’s home were shot. Many of the best-known Tunisian locations are from the original film, 1977’s Episode IV: A New Hope, but scenes were shot there across both the original trilogy and the three prequels released between 1999 and 2005. Today you can take tours of some of the most important filming locations, including one that allows you to stay overnight in Luke’s humble abode in the town of Matmata.
Avatar: China
The highest-grossing film of all time until it was dethroned by Avengers: Endgame, Avatar was so transportive that some fans of the sci-fi epic experienced depression brought about by the realization that they would never go to the fictional world of Pandora. The next best thing would be to visit Zhangjiajie National Forest Park in China’s northwest Hunan Province, whose towering sandstone pillars inspired one of the blockbuster’s most memorable sequences.
Anyone who’d previously seen the park’s “Heavenly Pillar” likely recognized it as the inspiration for Avatar's floating Hallelujah Mountains, which lend the film much of its visual magic. No surprise, then, that the Pillar has become an increasingly popular tourist attraction, despite the fact that the movie itself was mostly shot in New Zealand and Los Angeles. Locals and travelers alike have made myriad guides and videos of the area.
Mad Max: Fury Road: Namibia
Everything about the fourth entry in George Miller's Mad Max franchise was unlikely. It's rare indeed for a post-apocalyptic action flick to be nominated for Best Picture and Best Director at the Academy Awards, and rarer still for the fourth installment in any series to be considered one of the best action films ever made. What makes the quality and success of 2015’s Fury Road all the more impressive is the grueling difficulty of its production, which took place over 120 long days in Namibia.
That was a literal departure from the first three Mad Max movies, all of which were shot in Miller’s native Australia. Fury Road was at one point going to be shot in Broken Hill, a frontier mining town in Australia, but heavy rainfalls led to wildflowers blooming throughout the desert and forced production to move to the southern African nation.
It’s now difficult to imagine the film being shot anywhere else: The Namib Desert provided the perfect backdrop for the movie’s never-ending chase sequence, with harsh landscapes full of stark beauty both complementing and contrasting the nonstop action.
Jurassic Park: Kaua’i and the Dominican Republic
Depending on one’s perspective, it’s either a relief or a disappointment that Isla Nublar isn’t real. The fictional island near Costa Rica, where Jurassic Park is set, was primarily brought to life on the endlessly beautiful island of Kaua’i, where Steven Spielberg had previously worked. (Costa Rica was considered early on in the process but ruled out due to the filmmaker’s logistical concerns about setting up production there.)
Hurricane Iniki passed over the island during shooting, setting production back by a day; Spielberg made clever use of this by filming the storm and including shots of it in the final film. Kaua’i wasn’t the only Hawaiian location used, as a few scenes were shot in Oahu, Maui (including the opening sequence), and even the “forbidden island” of Ni’ihau, which isn’t open to outsiders. As with Star Wars, there are several tours of shooting locations you can take — including one that lets you ride in one of those iconic Jeeps.
A significant portion of Jurassic Park was also filmed in the Dominican Republic, namely the Amber Museum in Puerto Plata, including a famous scene involving the mosquito suspended in amber.
Apocalypse Now: The Philippines
Speaking of difficult productions, the making of Apocalypse Now was so notoriously troubled that there’s an entire documentary about it. Principal photography took a full 238 days and involved casting changes, sets being destroyed by Typhoon Olga, and star Martin Sheen having a heart attack; suffice to say that the project went massively over budget. For all that, Francis Ford Coppola’s Vietnam War epic was a huge success and is now considered one of the greatest films ever made — none of which could have happened without the Philippines.
Coppola and his wife Eleanor (who later directed Hearts of Darkness: A Filmmaker’s Apocalypse, the documentary mentioned above) lived in the Dasmariñas Village area of Manila during production, which began in March 1976 and finished reshoots in May of the following year. Apocalypse Now was filmed all across the Philippines. Baler, a fishing town, is where the surfing scenes were shot; it has since become a sought-after destination for surfers across the globe and is even credited with inspiring the country’s surf culture. It's also where a fake Vietnamese village was constructed and later blown up in a simulated napalm attack; not all of the trees burned down have regrown.
Another key location was Pagsanjan, located about 90 minutes southeast of Manila, which is where the Do Lung Bridge was built along Magdapio Falls. In order to film in the Philippines at all, Coppola had to broker a deal with the country’s highly controversial president, Ferdinand Marcos, which allowed production to use military equipment — including helicopters of the same model used during the Vietnam War. It’s that commitment to vérité, whatever the cost, that makes Apocalypse Now loom so large in the annals of film history. | https://editorialstage.traveltrivia.com/5-iconic-movie-filming-locations-you-can-visit-in-real-life/ |
The beautiful landscapes, magnificent forests, deep lakes and cold, foggy rivers of New Zealand. And cinema. What is the connection between these things? Recent events in the film industry raise the question: can cinema influence the environment? It is a fact that cinema has influenced the nature of New Zealand. But how? To address this controversial issue, we need to consider two points of view: one says that the movie industry has no influence, and the other that the environment does undergo changes.
Yes, filming does influence nature, in this case New Zealand’s. Interestingly, to build a real Shire (the village of the hobbits), the workers had to recreate an exact copy of John Howe and Alan Lee’s drawings. That is, they planted grass and flowers, built a small pub (now functioning as a real pub and restaurant), and dug the houses for the hobbits - holes with big round doors.
The village of Hobbiton was built in Matamata. This place is now one of the most popular sights in New Zealand. The workers, along with the shooting crew, also planted vegetables and trees for better realism. This work began two years before filming, because director Peter Jackson wanted it all to be real, so that the actors would feel the atmosphere. And everything that was planted grew and grew stronger, changing the chosen location and turning it into a fantasy village.
Does this mean that nature changes under the influence of the movies? Let’s look at the other point of view: that cinema does not influence nature. Why is that? Because New Zealand is a country that protects its surroundings and its unique nature. This concerns many spheres of life, including tourism, for which there is a list of rules. For instance, tourists must rinse their shoes before leaving the airport so that foreign soil does not get into the local soil. As for the shooting, the famous film trilogy “The Lord of the Rings” was filmed both in the studio and on location, and a great deal of scenery was needed for the huge number of scenes. For the second part of the film, “The Two Towers”, Weta Workshop built the capital of Rohan - Edoras and Theoden’s palace - on Mount Sunday, Canterbury. It took eight months to build for only ten days of filming! At the end of the shoot, however, all the sets were removed and the soil was restored. Under New Zealand law, all natural shooting locations must be returned to their original condition, so the area has preserved its original appearance.
So how do beautiful nature and film scenery coexist, in synthesis? The film industry makes adjustments to the environment of New Zealand either way, even if only to a small part of it. And after the shooting, the New Zealand government preserved “The Hobbit” scenery as a tourist attraction. The changes that Peter Jackson’s films brought to the nature of New Zealand are clear now, and in the future they will become even more visible - especially once shooting begins in 2019 on the long-expected “The Lord of the Rings”, a brand new TV series by Amazon Studios.
#New Zealand #Hobbit #The Lord of the Rings #Movie #Peter Jackson #Amazon studios
| https://express-novosti.ru/en/entertainment/18-Cinema-in-fluences-the-nature-of-New-Zealand.html |
Review: The Lord of the Rings: The Rings of Power
Twenty-one years after the release of Peter Jackson’s “The Lord of the Rings” trilogy, the greatest fantasy world of all time makes its return to our screens with a brand new Amazon Prime show, “The Lord of the Rings: The Rings of Power.” The show aired on Sept. 1, 2022 on Prime and quickly became the most watched TV show premiere of all time, bringing in over 25 million global viewers on its first day.
“The Rings of Power” (ROP) takes place in Middle Earth roughly 5000 years before the events of LOTR that Tolkien first wrote about in 1954. The show follows four main plot lines, which later converge into three by the end of the season.
We’re first introduced to a young Galadriel, commander of the Northern Elven Armies who held a prominent role as an Elven queen in “The Lord of the Rings,” and her fight to stop the almighty dark reign of Sauron before Middle Earth devolves into war.
Next, we get to meet the Harfoots – an ancestor race to the Hobbits – and an especially adventurous young Harfoot, Nori.
The other two initial plot lines follow the Men of the Southlands and their struggle against the Orcs, and the island kingdom of Men, Númenor, which must decide whether to step outside the sanctity of its island for the sake of saving Middle Earth from an impending war.
The plot of ROP follows the timeline of Middle Earth during the Second Age as created by Tolkien. The show derives from two main ideas: Sauron’s growing power and influence being a threat to Middle Earth, and the need for the elves to create the rings of power in order to save their race. Outside of this, almost all aspects of the show were either very loosely based on the writings of Tolkien or completely fabricated by Amazon’s writers in order to make the show more interesting and adaptable to a TV format.
One of the most notable changes in ROP compared to the original 2001 LOTR trilogy and the subsequent “Hobbit” trilogy is the diversity of the cast. In Peter Jackson’s original adaptation, every single character shown over the lengthy six movies is white. This was an attempt to stay true to Tolkien’s vision of Middle Earth as a continent equivalent to Europe - which in the 1940s had very little diversity.
Amazon, however, went against the status quo of racial representation in Middle Earth by casting multiple people of color. These include main leads: the Queen of Númenor is played by a Black woman; Bronwyn, a woman central to one of the plot lines, by an Iranian actress; and her elf love interest, Arondir, by a Puerto Rican man.
While initially being one of the most heavily debated topics surrounding the show upon its trailer release in September of last year, all of the show’s actors are extremely talented and worthy of the roles. Middle Earth, as diverse as it is, should by no means be limited to a single skin color. There is no precedent set stating that all members of the race of Men or Elves must be white; it makes sense for a continent with such geographical diversity to have equivalent diversity in skin color. For this, I truly hope to see continued diversity in casting for future LOTR content.
ROP was a joy to watch from start to finish, but I definitely was left with some heavy criticisms. One of my biggest grievances with ROP was the visuals, or more specifically, the overuse of CGI to create visually grand backgrounds.
One of the most impactful things for me with the original LOTR trilogy was the natural beauty shown throughout; filmed in New Zealand, Jackson showcased just how stunning the country is with its incredible geographical diversity, and used little to no CGI for the landscape. ROP, on the other hand, seems to grab that magic and just strip it away. While still being filmed in New Zealand, I can’t help but feel like the producers at Amazon were too afraid to really make use of the beauty of the country, and instead play it safe with more simplistic backgrounds and then just layer on element after element of CGI mountains, waterfalls and forests.
My second and final criticism lies with the overall pacing of the show. In truth, it felt extremely awkward and unbalanced. This mostly derived from the inconsistency with which plot lines would get screen time in a given episode. For example, the two episodes that focus almost exclusively on the Harfoots felt extremely slow paced and relaxed, prioritizing character building over actual plot progression. On the other hand, you have the episodes focusing on the Men of the Southlands, which will literally start and plan a battle against the Orcs, wrap up the entire conflict in 10 minutes, then introduce a massive plot element at the very end. This much more rapid-paced and action-packed feel was starkly contrasted with the mellow lore-building feel of nearly every other episode in the season. As I much prefer the more action-packed plot line, it made the show feel a lot less enjoyable overall, knowing that this week’s episode was going to be boring and I’d have to wait until next Friday to see anything interesting happen.
“The Lord of the Rings: the Rings of Power” is an overall thrill of a show. It’s almost surreal seeing LOTR making a return to the screens after so long, and we have Amazon to thank for such a respectable and well-crafted prequel to Tolkien’s more well known work. While the show is by no means perfect, it has the potential to be one of the greatest shows of our time, so long as Amazon Prime continues to learn from their mistakes, improve along the way, and stay somewhat true to what Tolkien wrote into history seven decades ago.
Ethan is a senior and Assistant Editor of the News section. This is his second year on the Gazette staff. | https://granitebaytoday.org/review-the-lord-of-the-rings-the-rings-of-power/ |
Welcome to 100% Middle-earth, 100% Pure New Zealand.
This video message from NZ Prime Minister and Minister for Tourism Rt Hon John Key also includes a small taste of New Zealand resources available for media covering the world première of The Hobbit: An Unexpected Journey.
Imagery of locations chosen for The Lord of the Rings trilogy and The Hobbit: An Unexpected Journey.
Tourism attractions in filming locations for The Lord of the Rings and The Hobbit trilogies.
Innovative Kiwi businesses that support the New Zealand film industry including Weta Digital and Shotover Camera Systems.
Interviews with personalities involved in New Zealand’s tourism, business and film industries including Sir Richard Taylor of Weta Workshop, Bill Reid of Reid Helicopters in Nelson who flew for Sir Peter Jackson during filming of The Lord of the Rings and The Hobbit, and Halfdan Hansen of Jens Hansen Jewellers - creators of the ‘One Ring’.
Content and story angles are available on this website.
Images - for editorial coverage - are available on the TNZ Image Library.
Hei akuanei - we look forward to seeing you soon. | https://media.newzealand.com/en/story-ideas/middle-earth-media-resources/ |
Born Karl-Heinz Urban, he has an estimated net worth of $80 million. Urban, known for his role as Eomer in “The Lord of the Rings” trilogy, is an actor from New Zealand. He has earned his net worth from his appearances in both TV and film from 1990 to the present. His first screen appearance was in an episode of the New Zealand television series Pioneer Woman when he was eight years old. He went on to appear in several commercials and guest roles on TV. From 1996 until 2001, he played recurring roles in the internationally syndicated American/New Zealand TV series Hercules: The Legendary Journeys and its spin-off Xena: Warrior Princess. His first international film was Ghost Ship in 2002. He has since appeared in high-profile films, including the final two installments of the Lord of the Rings trilogy, The Bourne Supremacy (as the assassin Kirill), The Chronicles of Riddick, Star Trek and Doom. His next film appearance will be in the ensemble thriller The Loft, a remake of the Belgian film of the same name, slated for release on August 29, 2014. | http://www.bornrich.com/karl-urban.html |
Genetics will increasingly enable health professionals to identify, treat, and prevent the 4,000 or more genetic diseases and disorders that our species is heir to. Genetics will become central to diagnosis and treatment, especially in testing for predispositions and in therapies. By 2025, there will likely be thousands of diagnostic procedures and treatments for genetic conditions. Genetic diagnostics can detect specific diseases, such as Down’s syndrome, and behavioral predispositions, such as depression. Treatments include gene-based pharmaceuticals, such as those using antisense DNA to block the body’s process of transmitting genetic instructions for a disease process.
In future preventive therapies, harmful genes will be removed, turned off, or blocked. In some cases, healthy replacement genes will be directly inserted into fetuses or administered to people via injection, inhalation, retroviruses, or pills. These therapies will alter traits and prevent diseases. Although genetics will be the greatest driver of advances in human health in the twenty-first century, it will not be a panacea for all human health problems. | https://graduateway.com/human-genetics/ |
Last time, I wrote about how gene therapy is being used to fix certain kinds of errors in DNA, and so cure or significantly reduce certain kinds of cancers. I asked whether you would accept such a treatment for yourself.
It’s one thing to accept the risks that are associated with these still new and experimental treatments, but would you make that decision for a sick child—one of your children?
I didn’t make a point of it last time, but two of the diseases for which this kind of gene therapy is now available, B-cell acute lymphoblastic leukemia (ALL) and junctional epidermolysis bullosa (“butterfly skin”), are both childhood diseases. The children who received these treatments either were no longer responding to existing treatments or had no other treatments available. These kids’ parents were left with the choice of trying these experimental approaches or watching their children die. I think most parents wouldn’t consider that much of a choice.
The FDA has also approved a gene therapy for adults with non-Hodgkin lymphoma, another kind of cancer.
But what if the situation was not so dire? Doctors know, for example, that sickle cell disease is caused by a “single nucleotide polymorphism” - in plain English, an error in just one DNA base in one gene, which changes a single amino acid in the resulting protein. The sickle cell mutation, which has to be inherited from both parents, is not usually fatal in children, although it can shorten the lifespan of someone who suffers from the disease. It’s treatable, but until now has only been curable by a bone marrow or blood stem cell transplant. Gene therapy might be able to change that. In fact, CNN has reported a complete cure in a very ill teenager in France.
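To make that “one base” idea concrete, here is a minimal Python sketch - illustrative only, not from the article or the CNN report. The sickle cell mutation swaps a single A for a T in codon 6 of the beta-globin (HBB) gene, turning GAG (glutamic acid) into GTG (valine) in the hemoglobin protein.

```python
# Illustrative sketch: one DNA base change -> one amino acid change.
# Codon 6 of the normal beta-globin (HBB) coding sequence is GAG; the
# sickle cell mutation substitutes the middle base (A -> T), giving GTG.
CODON_TO_AMINO_ACID = {"GAG": "glutamic acid (Glu)", "GTG": "valine (Val)"}

normal = "GAG"                        # codon 6, normal beta-globin
sickle = normal[0] + "T" + normal[2]  # single-base substitution -> "GTG"

for label, codon in (("normal", normal), ("sickle", sickle)):
    print(f"{label}: {codon} -> {CODON_TO_AMINO_ACID[codon]}")
```

That single swapped letter is the entire genetic difference; gene therapies for sickle cell aim to correct or compensate for exactly this kind of pinpoint error.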
But now things get more complicated. What if the therapy could be applied to a child before they were born? Pre-natal screenings can identify certain kinds of illnesses and conditions in the developing fetus. And there are some surgical procedures that are done in the uterus.
Could a gene therapy treatment even be done in this situation? Some current therapies use the subject’s own immune cells, tweaking them to recognize and attack cancerous cells, for example. However, a baby’s immune system is not fully developed at birth, so it seems like this kind of approach would be limited to certain diseases or conditions where the right kinds of cells were available enough and mature enough to be used.
Other gene therapies use special viruses to deliver their genetic payloads. Scientists have a pretty good idea how post-natal bodies, especially adult ones, respond to these viruses, but I wonder if anyone has much of an idea about how a fetus would react. And how would one ethically test such a therapy, or even how the fetus and the mother would react to the presence of the virus?
For couples using in vitro fertilization, the egg and sperm cells, before fertilization, and the earliest forms of the fetus, such as the blastocyst, are available outside the mother’s body. It’s possible to test the genes of the blastocyst before implantation.
In the US and other Western countries, doctors agree that creating “designer babies,” that is, babies whose genes are modified to produce certain desired characteristics, is unethical.
But a designer baby is very different from one whose parents’ genes have destined that child for some kind of serious disease or condition before or after birth. Scientists in China and the UK have demonstrated that they can make genetic changes at this stage of development, but none of the fetuses so modified were allowed to continue developing beyond a very early stage, where they still look like just a ball of cells.
In the future parents will face some big decisions if a genetic test reveals a fixable abnormality, beyond the decision of whether to abort the pregnancy or not. Every choice will entail risks and doctors may not be able to say exactly how much risk each option holds.
Good parents want to do what’s best for their children. What will they decide? How will they decide? What social pressures will they face to either agree to the treatment or refuse it?
If you’re a parent, or want to be someday, what would you do? Please leave your thoughts in the comments box below. | https://www.rossblampert.com/2018/10/03/if-genetic-engineering-could-cure-your-child-would-you-use-it/ |
A Design for Life: is scleroderma in the genes?
What is the link between our genetic code and developing scleroderma? Researchers from Spain have recently studied genetic alterations and established a connection with the pathways that dictate the clinical features of scleroderma. The study is significant because increasing our understanding of the genetic pathway could be a faster route towards identifying effective new therapies for this complex condition.
Our DNA is our molecular design for life, which provides the instructions for us to fulfil our biological potential. In some medical conditions it is well-documented that certain small changes in a person’s genes may make them more likely to develop that particular condition e.g. breast cancer. However, whilst genes have been identified that are associated with scleroderma, it is less clear how these influence and cause the condition to develop and worsen. A recent study from a group in Spain has examined genetic alterations in individuals with scleroderma, mapping them to the pathways which dictate the clinical features of the condition.
The findings of this work are encouraging, since increasing our understanding of the genetic pathway of scleroderma may well open avenues for new and targeted therapies in the not-too-distant future.
Those living with systemic sclerosis will agree that it is a complex condition, where no two cases are exactly the same. Part of the variation in experiences results from the differing expression of the three hallmarks of scleroderma: fibrosis caused by the excessive production of collagen, autoimmunity, and damage to the small blood vessels. To better understand the interplay between these factors and therefore develop more effective treatments, scientists must examine the root of the condition – the genetic code.
Our genetic code is formed from DNA, a highly complex molecule found in every cell of the body (with the exception of mature red blood cells). At its core, DNA is made from four molecules (known as bases), each referred to by a letter: A, C, G or T. These bases are strung together in a long chain and housed on strand-like structures called chromosomes, found within the nucleus.
Genes are comparatively small sections of DNA, which are considered to be units of inheritance – determining how certain characteristics like eye colour, hair colour and even your risk of developing medical conditions are passed down through generations. Genes instruct our cells to make ‘proteins’ needed for growth, repair and other specialised functions in a process known as gene expression. Whilst each cell contains exactly the same DNA, how this is expressed differs between cell types, as different genes are switched ‘on’ and ‘off’ altering which proteins are made in each cell.
This variation in the proteins produced enables cells to be specialised for specific purposes. For example, B cells are a type of cell found in the blood that make antibodies capable of fighting infections. The fibroblasts found in our skin and connective tissues make an entirely different molecule, collagen, which is needed for strength. Too much collagen can cause the ‘stiffness’ characteristic of scleroderma. Both cell types contain the same DNA but switch on and off certain genes leading to the production of different molecules.
Previous research has unearthed some of the specific genes believed to be linked to scleroderma, however this in itself does not offer enough information about the pathway from genes to symptoms, which is essential in the development of new treatments.
Single Nucleotide Polymorphisms – Spot the difference!
In addition to linking genes to medical conditions, recent scientific research has identified the importance of single nucleotide polymorphisms (SNPs) in the causation of autoimmune conditions. SNPs are small, single-base changes within the DNA sequence associated with a particular medical condition. They are commonly identified through what is known as a Genome-Wide Association Study (GWAS). This method, whilst sounding complicated, basically involves playing ‘spot the difference’ between the DNA of individuals with a certain medical condition and a ‘healthy’ control group; i.e., looking for key changes in individuals with a condition but not in those without it. Many GWAS have been carried out within cancer research and in relation to autoimmune conditions such as lupus and rheumatoid arthritis, identifying changes associated with these conditions (2), (3). More recently, these studies have been carried out in scleroderma.
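As a concrete illustration of that ‘spot the difference’ comparison, here is a minimal Python sketch of a single-SNP association test. All the allele counts are invented for illustration; nothing here comes from the studies cited above.

```python
# Toy single-SNP association test: compare how often the variant allele
# appears in cases versus healthy controls using a chi-square test.
from scipy.stats import chi2_contingency

# Rows: cases, controls. Columns: variant alleles, reference alleles.
# Each person carries two alleles, so 1,000 people contribute 2,000 alleles.
table = [
    [420, 1580],  # 1,000 cases: 420 of 2,000 alleles are the variant
    [300, 1700],  # 1,000 controls: 300 of 2,000 alleles are the variant
]

chi2, p_value, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.2e}")
```

A real GWAS repeats this comparison for hundreds of thousands or millions of SNPs across the genome, which is why genome-wide significance thresholds are made far stricter (commonly p < 5e-8) to limit false positives.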
Less than 2% of our DNA is responsible for forming the blueprint for our bodies. The rest is non-coding, affectionately known as ‘junk’ DNA. Most SNPs identified by GWAS occur in these non-gene-coding regions of DNA, meaning that their role in the mechanisms underpinning disease development is unclear. Recently, a research team based at the University of Granada in Spain sought to understand the impact of SNPs associated with scleroderma – and their link to immune responses, fibrosis and blood vessel abnormalities – by examining how these SNPs affect gene expression, in a process called an eQTL analysis.
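In the same illustrative spirit, here is a minimal sketch of the idea behind an eQTL analysis, using simulated numbers rather than anything from the Granada study: for each SNP–gene pair, regress the gene’s measured expression level on the genotype dosage (0, 1 or 2 copies of the variant allele) and ask whether the slope differs significantly from zero.

```python
# Toy eQTL test: does genotype at a SNP predict a gene's expression level?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genotype = rng.integers(0, 3, size=200)  # 0, 1 or 2 variant alleles per person
# Simulate expression with a true effect of +0.8 per variant allele plus noise.
expression = 5.0 + 0.8 * genotype + rng.normal(0.0, 1.0, size=200)

result = stats.linregress(genotype, expression)
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.2e}")
```

Each significant SNP–gene pair links an eQTL to an ‘eGene’ whose expression it appears to regulate, which is exactly the kind of mapping reported in the study below.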
Can SNPs help us develop treatments to ‘SNP’ scleroderma in the bud?
The study compared DNA taken from the white blood cells of those with scleroderma and healthy controls to identify relevant eQTLs, before mapping these to the genes whose expression they regulated. Their analysis identified 64 eQTLs specific to scleroderma, which mapped to 134 eGenes associated with the three hallmarks of scleroderma mentioned earlier: 122 linked to immune cell responses, 27 to fibrosis, and 16 to blood vessel abnormalities. The team found an overlap between these categories, which is not surprising given that these features are closely linked, reinforcing how complex the pathology of scleroderma is.
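To make the eQTL idea concrete, here is a minimal sketch assuming hypothetical donors whose genotype at one SNP is coded as 0, 1 or 2 copies of the variant allele: if a gene’s measured expression tracks genotype, the SNP is a candidate eQTL and the gene a candidate eGene. This illustrates the concept only; it is not the Granada team’s pipeline.

```python
# Minimal sketch of a single eQTL test: does genotype at a SNP
# (0/1/2 copies of the variant allele) predict a gene's expression?
# All data below are hypothetical.
from scipy.stats import linregress

genotypes  = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2]  # one SNP across ten donors
expression = [5.1, 4.8, 6.0, 6.3, 5.9, 7.2, 7.0, 5.0, 6.1, 7.4]  # one gene

fit = linregress(genotypes, expression)
print(f"slope = {fit.slope:.2f}, p = {fit.pvalue:.3g}")
# A significant slope flags the SNP as a candidate eQTL for this gene;
# a full analysis repeats this over many SNP-gene pairs with covariates.
```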
The work could lead to a greater understanding of the pathways underpinning disease, potentially accelerating the development of targeted therapies. The research team examined whether there were any ongoing clinical trials testing drugs that target the protein products of the relevant eGenes – and seven were identified. This opens up the possibility of drug repurposing for scleroderma treatment – a more time and cost-effective way of introducing a new therapy to those in need. Additionally, the 64 eQTLs that are specific to those with scleroderma indicate additional mechanisms activating eQTLs in disease, suggesting potential treatment routes if these pathways can be illuminated.
In the journey to more precise treatments, more in-depth study of specific eQTL and eGene targets will be necessary; however, the results from this study are an optimistic first step towards potential new and effective treatments for people living with systemic sclerosis. | https://www.sruk.co.uk/about-us/news/design-life-scleroderma-genes/
Scientists are making important progress in the battle against a class of devilishly complex human pediatric brain cancers thanks to a new study from a team of Florida State University students and faculty.
Among young children, there’s no brain tumor more common than medulloblastoma. But no specific and effective therapy yet exists for this dangerous disease. Instead, doctors are forced to resort to onerous and invasive treatments like surgery, radiation and chemotherapy, often at the expense of the child’s quality of life.
Medulloblastoma, which is divided into four subgroups, is partially caused when a mutation occurs in the “driver genes” that either promote or suppress cancerous tumor growth. These mutations can be inherited, sporadic or environmentally induced, but once they appear, they increase the risk for the unfettered and abnormal cell division that leads to malignant tumors.
A team of FSU researchers, led by Professor of Chemistry and Biochemistry Qing-Xiang “Amy” Sang, was interested in learning more about these mutations. Using data from the Catalogue of Somatic Mutations in Cancer, they identified a series of cancer-causing driver gene mutations and discovered that medulloblastoma is perhaps an even more dynamic and variable tumor than expected.
Their findings were published in the Journal of Cancer.
“Most cancer is quite heterogeneous, but medulloblastoma is specifically very heterogeneous,” Sang said. “If you look at the driver gene mutation, it’s not as if the majority of medulloblastoma cases have the same mutation. In reality, 5 percent may have one mutation, 3 percent may have another mutation and a small percentage may have other mutations. That’s why you cannot treat it as one disease.”
Using advanced bioinformatics tools, the team was able to pinpoint which driver gene mutations were occurring in which medulloblastoma subgroups. In some cases, they found that mutations once considered specific to one particular subgroup were causing significant disruption in sister subgroups as well. While these findings were surprising, they were exactly the kind of counterintuitive details the team was searching for.
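As a rough sketch of the cross-tabulation such bioinformatics involves, the snippet below tallies driver-gene mutations by medulloblastoma subgroup. The records are invented for illustration (though the subgroup names and genes are drawn from the literature), and this is not the team’s actual pipeline.

```python
# Minimal sketch: tally driver-gene mutations per medulloblastoma subgroup
# to spot mutations appearing outside their 'expected' subgroup.
# Sample records are hypothetical.
from collections import Counter

mutations = [  # (subgroup, mutated driver gene), one entry per tumor sample
    ("WNT", "CTNNB1"), ("WNT", "DDX3X"), ("SHH", "PTCH1"),
    ("SHH", "TP53"), ("Group3", "MYC"), ("Group4", "KDM6A"),
    ("Group4", "PTCH1"),  # a 'SHH-typical' gene turning up in a sister subgroup
]

for (subgroup, gene), n in sorted(Counter(mutations).items()):
    print(f"{subgroup:7s} {gene:7s} {n}")
```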
“What we focused on specifically in this paper are the driver genes that we weren’t expecting to see,” said study co-author Mayassa Bou Dargham, a doctoral candidate at FSU. “We wanted to focus on some infrequent events and stress the heterogeneity of medulloblastoma tumors themselves. That’s important whenever we’re using targeted therapy for different subgroups.”
Medulloblastoma’s heterogeneity makes it an exceptionally difficult cancer to characterize and treat. But with a more comprehensive and nuanced understanding of which mutations happen where and when — and which mutations might defy broadly accepted definitions — researchers will be better equipped to identify opportunities for targeted, individualized treatments.
“For medulloblastoma, a more personalized approach will have to happen,” said Jack Robbins, who was an undergraduate when he co-authored the study. “The goal we should be striving for is more MATCH-based trials in which we use molecular targets found from these different panels of driver events. These driver events extend past the genomic code and into epigenetic mechanisms that need to be further studied and assessed in the clinic to identify candidate therapies. We can hopefully give those therapies to patients who aren’t responding to the standard of care treatments.”
The next step toward those therapies is to develop credible laboratory models of human medulloblastoma tumor subgroups. These models, researchers say, will be important evaluative tools in the search for potential therapeutics.
The ultimate goal is a regimen of targeted therapies that avoid causing undue burden to vulnerable pediatric patients.
“Children with cancer often receive very toxic, harsh and invasive treatments,” Sang said. “If we can avoid those harsh treatments and develop safer and more efficacious therapies, then the patients’ outcomes and their quality of life will be much improved.”
Cross-disciplinary collaborations may be a key to finding more effective therapies for this intractable disease. But another crucial key, Sang said, will be innovative ideas from a new generation of ambitious researchers.
She said this paper demonstrates the instrumental and field-defining contributions of student scientists. In addition to Robbins, former FSU undergraduates Kevin Sanchez and Matthew Rosen co-authored the paper. | https://www.innovitaresearch.com/2019/02/04/fsu-team-breaks-new-ground-in-study-of-malignant-pediatric-brain-tumor/ |
When one thinks of the most popular and useful animal models in biomedical research, one thinks of mice and rats, followed by rabbits, dogs, monkeys, and so on. The domestic cat is traditionally a long way down the list. But a recently developed cat genome reference assembly promises to push the cat up the charts.
Despite rapid advances in our ability to sequence, the application of genomic insights in medicine is not yet routine in healthcare, for good reason. Not only do we need accurate and complete sequences, but we must know what changes in sequences mean in the context of biological function before healthcare can adopt genomic medicine—where treatments are directed at underlying genetic causes in an individual patient or a patient population.
Focusing solely on the framework of the human genome is not in the best interests of precision medicine. Comparing conserved sequences in multiple species and exploring what single nucleotide variations (SNVs) and structural variations (SVs) mean for the biology and disease of other species are instrumental in developing precision therapeutics. High-quality reference genomes, particularly of species where genomes are conserved and follow the same order of genes as in humans, are indispensable in understanding the biological impact of genetic variations.
Cats are an asset in identifying disease-causing genetic variations. Leslie Lyons, PhD, professor at the College of Veterinary Medicine, University of Missouri said, “Whole genome sequencing in any species to find causal DNA variants is only successful about 50% of the time. We’re plagued by variants of unknown significance (VUSs). The variant may be staring us in the face, but we don’t understand that it is a significant variant and causes disease. We’re hoping cats will help in that regard because if the cat has the same variant as in human and does not have a health problem, maybe that will lower the priority of many variants that we see in humans by cross-species comparison.”
New cat genomic resource
The domestic cat is not only a household pet and companion, it also serves as a secondary source or “bioproxy” to corroborate human conditions and evolutionary events. Like humans, cats suffer from cancer and a range of common and rare diseases, including neurological disorders.
Last year, a consortium led by scientists at the University of Missouri developed a new cat genome reference assembly from long-read sequences of the entire genomes of 54 domestic cats, annotating the landscape of feline SNVs and SVs in the context of human genes. The domestic cat belongs to the family Felidae, in which there are some 40 different species with conserved genetic makeup. The availability of a variety of hybrid cats may lead to more accurate genome assemblies. This resource holds the potential to identify novel variations responsible for physiological traits and pathological conditions and facilitate the expansion of genomic medicine in both animal and human healthcare.
“We’ve made a huge leap forward,” said Lyons. “By having the new haploid-based phased genome assembly, the cat is now second to none. A lot of researchers might not have focused on the cat because they didn’t have the genome tools needed to study the cat. Well, now you do!” Lyons says the cat now warrants consideration alongside a mouse or a rat. “You want to use the right biomedical model for the right disease. Long-read sequencing technologies are now giving us the opportunity to do that,” she added.
Wesley Warren, PhD, professor of genomics at the University of Missouri and senior author on the 2020 PLOS Genetics paper that reported the cat reference genome assembly, said, “The study of many species offers insight into traits of interest but the domestic cat, in cohabitating with us and sharing a similar fate for some diseases, represents a unique model to advance genomic medicine objectives that benefit the cat and human.” This species-wide collection of felid genome resources that match the human reference in quality “offers up exciting opportunities to study these diseases in the cat.”
Cathryn Mellersh, PhD, senior research associate at the department of veterinary medicine, University of Cambridge, said, “The community-based 99 Lives Cat Genome Sequencing Consortium has directly facilitated significant molecular discoveries, including the identification of novel variants in genes previously associated with disease, novel gene-disease associations, and even novel diseases, all of which have important implications for our understanding of biological processes and disease in all species, including our own.”
Right model for disease research
Scientists are sometimes reluctant to accept new models, lamented Lyons. Much of the research has consequently focused on a few ubiquitous models, from E. coli to yeast to Drosophila to multiple lines of inbred mice.
In a forum titled “Cats–telomere to telomere and nose to tail,” published in Trends in Genetics, Lyons reinforced the use of cats as valuable model organisms for translational research in inherited diseases and the development of precision medicine therapies for both veterinary and human healthcare.
“Using cats in research is really overlooked since people don’t realize the advantages,” said Lyons. “The dog and mouse genomes have rearranged chromosomes that are quite different from humans’, but the domestic cat has genes that are about the same size as humans’.” The cat genome is also very well organized and conserved.
Synteny—a term that describes the physical conservation of blocks of genes in order—is higher between humans and cats than say mice or dogs. Synteny, Lyons believes, is a major facet in characterizing the function of a new gene or intergenic region.
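One crude way to picture synteny computationally is to count how many adjacent gene pairs two genomes share, as in the sketch below; the gene orders are hypothetical, and real synteny analyses compare blocks spanning thousands of genes.

```python
# Crude sketch of a synteny score: count gene adjacencies preserved
# between two genomes' gene orders. Gene lists are hypothetical.
def conserved_adjacencies(order_a, order_b):
    pairs_a = {frozenset(p) for p in zip(order_a, order_a[1:])}
    pairs_b = {frozenset(p) for p in zip(order_b, order_b[1:])}
    return len(pairs_a & pairs_b)

human = ["GENE1", "GENE2", "GENE3", "GENE4", "GENE5"]
cat   = ["GENE1", "GENE2", "GENE3", "GENE5", "GENE4"]  # mostly same order
mouse = ["GENE3", "GENE1", "GENE5", "GENE2", "GENE4"]  # heavily rearranged

print("human-cat:  ", conserved_adjacencies(human, cat))    # higher score
print("human-mouse:", conserved_adjacencies(human, mouse))  # lower score
```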
No model is perfect, but some models are better than others for specific questions. “We want to promote more effective models for translational medicine. Now that we have better genomic resources, maybe it is better sometimes to use the cat than mice because the translation to humans would be more efficient and more appropriate,” said Lyons.
Using feline models can help researchers refine, reduce and replace animal models in their studies, Lyons emphasized. As cats are larger models than mice, they generate more attention in terms of regulatory guidelines. “We might be able to use fewer cats more efficiently than we would have used mice because it’s the right model for the right disease,” she said. “Working with primates is expensive, but a cat’s affordability and docile nature make them one of the most feasible animals to work with to understand the human genome.”
“Lyons is correct to draw attention to the fact that the cat research community, although small compared to the research communities associated with other domesticated species, is ‘perfectly formed’ and has certainly packed a punch in terms of developing genome tools for the domestic cat,” said Mellersh.
Illuminating dark matter
Cat genomics can be particularly insightful in understanding the functional significance of non-coding sequences, levels of conservation suggesting biological significance. Cats, like humans, also have genetic diseases associated with this genomic dark matter, which contains regulatory elements and three-dimensional structures that regulate genes. Lyons said the relative similarity of the feline genome’s organization to humans makes it a good model to identify regulatory elements.
“If 50% of causal variants are within the genes, where is the other 50%? It must be in the ‘junk’ DNA, which we’re realizing is quite important. We’re hoping cats can help us decipher how the upstream regulatory elements of genes, and elements that are far away from genes, are causing some of our biology and our health concerns.”
Tools of the trade
The first cat clone, Cc, short for CopyCat, was generated in 2001. Her donor was a typical calico cat with black, orange, and white fur, but Cc didn’t have any orange on her coat, defying simple genetic expectations and indicating that the X-chromosome inactivation governing her coloration did not get reprogrammed during embryogenesis. This clue has led to a better understanding of X-inactivation and methylation.
Genomic and transgenic technologies are also allowing the use of cats to better understand evolution, domestication, and adaptation. “We have many of the required resources to study different aspects of genetics in cats. We can do whole genome sequencing. We have an exome capture array and high-density DNA array through Affymetrix,” said Lyons.
Lyons foresees the development of imputation techniques where low-depth genome sequencing could estimate features of the rest of the cat’s genome. With adequate funding, she hopes more sophisticated techniques such as long-read RNA sequencing, single nuclei technologies, and an atlas of cat gene expression, could be used to further analyze the cat genome.
“Cats have less focus from NIH funding but the community is cohesive, efficient, and collaborative,” Lyons said. “We’ll bend over backward for whatever you need for cat genomics. We hope others will join us.”
Lyons and her team minimized the number of cats while maximizing the potential models they could provide. “We have cats that have inherited blindness and polycystic kidney disease (PKD). Also, we cryopreserve. For our cat models for inherited blindness, we’ve saved embryos and semen, so that we don’t have to have cats living in cages anywhere, but we can resurrect models when needed.”
In a 2020 Cell Report study, Lyons’ group resurrected a cryopreserved cat model for Chediak-Higashi syndrome and defined the causative mutation—a large gene duplication.
Toward tailored healthcare
Cats, if adopted as a research model, can play a major role in precision medicine. Sequencing cats suffering from genetic diseases can help identify genetic causes and develop more effective treatments that would be potentially applicable in humans, given the genomic similarity in structure and sequence.
“We can provide a more tailored healthcare program for our pets, and more funding would put all the different pieces into place. All mammals tend to have very similar genes, so if we find out what causes a disease in cats, then whatever therapies can be used to help cats can potentially be translated to help humans suffering with the same disease. Likewise, human research can potentially be translated to help animals as well,” said Lyons.
Lyons’ group is currently working on feline models of PKD, developing methods to prevent the growth of cysts in cat models of PKD using a ketogenic diet. PKD treatments for cats could apply to humans, said Lyons.
Interestingly, PKD is more common than several well-known genetic disorders, such as sickle-cell disease or cystic fibrosis. It usually affects patients later in life; they experience renal failure in their 50s and 60s, and a shortened lifespan. “A ketogenic diet might be helpful for reducing the size of the cyst in the kidney. Then it won’t destroy the normal kidney tissue and you won’t go into renal failure. The idea is to keep those cysts as small as possible, maybe shrink them,” Lyons explained.
The average lifespan of mice is 2–3 years, making them an unsuitable model for testing long-term treatments at later life stages. Cats, with their longer lifespans, should be more suitable.
“Secondly, the kidney in the mouse is so small, it is hard to evaluate cyst volume and ascertain whether any of your treatments are actually reducing cyst volume whereas the cat kidney is big enough. You can see those changes. We can do MRIs and CTs in a regular vet clinic,” Lyons said. | https://www.genengnews.com/insights/cat-in-the-lab-feline-genomes-fuel-precision-medicine/?utm_medium=newsletter&utm_source=GEN+Daily+News+Highlights&utm_content=01&utm_campaign=GEN+Daily+News+Highlights_20210729&oly_enc_id=6899J6326067C2A |
‘Computational’ biology may be the future of medicine
The future of biomedical research is looking brighter and brighter.
Researchers are using increasingly sophisticated tools and algorithms to map, track, and analyze biological data.
In the medical community this work is often labeled “computational” or “comparative” biology, which is why those terms come up so often.
But the field is actually far more than that.
There are a lot of tools in the pipeline that combine computation with biology, and there is a lot to be learned from them.
Here is a look at computational biology tools and applications that could change the way we do science in the next 10 years.
How to use DNA to define and characterize biological cells
In the 1970s, scientists developed the first practical methods for reading the distinct DNA sequences of living cells.
These DNA sequences could be mapped onto the proteins they encode, making it possible to determine which parts of the genome give rise to each particular protein.
However, this method was not widely used at the time.
It’s one of the reasons we know that many cells rely on an enzyme called histone acetyltransferase (HAT), which is responsible for the formation of histone tags.
This tagging process chemically bonds an acetyl group to specific sites on the histone molecule.
DNA also acts as a molecular scaffold, providing a framework to allow the binding of different proteins to a particular DNA strand.
Using DNA-based technology, scientists have been able to use the same technique to identify and label various biological components, including DNA-binding proteins, histones, and ribosomes.
As a result, we can more accurately define biological functions and determine how these functions are achieved.
This article looks at how to use this technology in the laboratory to map DNA to specific biological elements.
DNA-Based Biomimetics: one technique uses enzymes to snip specific genes out of the DNA of living cells, which sequencing machines can then read.
Using the technique, we now have the ability to generate many different kinds of biological cells, from stem cells to human tissue.
One approach to developing the technique involves the use of “in situ hybridization” (ISH), which involves exposing cells to fluorescently labeled probes that bind matching DNA or RNA sequences and make them glow.
The researchers then use these fluorescent signals to locate the specific genes.
This process can also be used to determine the structure and function of specific proteins in living cells, and to study the effects of DNA-related genes on specific biological processes.
As we’ve seen in the past, in situ hybridizations can also help us identify which DNA-associated proteins are involved in specific biological functions.
This technique can be used in a number of different ways, and it can allow scientists to identify the proteins involved in many different biological processes, including cancer and immune responses.
A related method is to use RNA-seq to analyze the RNA of living cells in order to determine which genes they are expressing.
This method has the advantage that we can analyze the RNA content of cells as it changes over time and identify the genes behind the specific proteins involved.
RNA-seq techniques can also identify which regions of DNA are transcribed into RNA.
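At its core, the quantification step of RNA-seq reduces to counting: once reads have been matched to genes, tallying reads per gene estimates expression. The sketch below assumes hypothetical read-to-gene assignments and skips alignment and normalization entirely.

```python
# Minimal sketch of the counting step in RNA-seq: tally how many
# sequenced reads map to each gene to estimate its expression.
# Read-to-gene assignments here are hypothetical.
from collections import Counter

aligned_reads = ["TP53", "GAPDH", "GAPDH", "MYC", "TP53", "GAPDH"]
expression_counts = Counter(aligned_reads)

for gene, reads in expression_counts.most_common():
    print(f"{gene}: {reads} reads")
# Real pipelines align millions of reads to a reference genome and
# normalize the counts (e.g., per million reads) before comparing samples.
```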
RNA can be thought of as a running readout of the genetic code, reflecting what a cell is doing at a given moment.
We’ve seen this in a few different ways in the lab: the mix of RNA molecules a cell produces shifts as it grows, differentiates, or dies.
The RNA copies of genes that a cell produces are called “transcripts.”
When we analyze the RNA content of living cells, we’ll typically find transcribed regions that have a particular sequence.
This allows scientists to map specific DNA sequences to specific proteins, and we can then look at how these proteins interact with other proteins in the cell.
For example, because the two strands of DNA are complementary, the sequence of one strand fixes the sequence of the other, and a transcript’s sequence points back to the gene it came from.
In this way, we can learn what a protein is doing by studying the DNA sequence that encodes it.
For this reason, RNA sequencing can also allow researchers to use “in silico” techniques to identify specific DNA-targeting proteins in a given cell.
In silico techniques use computer models rather than bench experiments, for example to predict whether a given protein will bind a desired target.
For instance, in silico methods can be used to identify whether certain proteins interact differently with certain types of bacteria.
In one example, we might be able to identify whether certain bacteria can cause specific cancers by looking at the DNA sequences of bacterial proteins previously identified as being associated with those cancers.
Another example might be looking at how certain proteins react to certain types of bacteria.
For example, some bacteria can cause inflammation in the colon, and others cannot.
To determine which proteins are responsible for causing inflammation in a particular cell, we need to determine whether or not certain proteins in particular bacteria are able to cause it.
This is what RNA-seq techniques do.
By using RNA-seq techniques to isolate specific DNA sequences, we are able, for the first time, to identify proteins involved with specific biological activities.
This approach also provides a new tool for studying how these biological functions are mediated.
In fact, RNA sequencers can also play a role in the development of novel cancer therapies.
This type of technology allows scientists in the future to develop cancer therapies using a number of different biomimetic approaches. | https://taratomicevic.com/tag/biological-approach/
Halting nerve damage
Study focuses on nerve cells.
by Mary E. King, PhD
A new study has provided more insight into the damage to nerve cells that causes many symptoms of multiple sclerosis and has developed ways to identify therapies that could potentially halt the damage.
One of the hallmarks of MS progression is damage to nerve cells in the brain and spinal cord. Nerve cell injury causes a variety of symptoms such as numbness, tingling, difficulties walking, cognitive changes and other issues that affect overall quality of life as well as regular daily activities. While some nerve damage is caused by an abnormal immune response that causes inflammation, with critical roles played by T and B immune cells, additional factors have also been implicated, particularly in progressive MS.
Katerina Akassoglou, PhD, senior investigator at Gladstone Institutes and professor of neurology at the University of California, San Francisco, and her team recently provided a better understanding of the type of cells and the cellular activities that cause this damage. The team developed a way to identify drugs that could limit or block it.
Akassoglou, Andrew S. Mendiola, PhD, a National Multiple Sclerosis Society Postdoctoral Fellow in Akassoglou’s laboratory, and colleagues described how specific immune cells that typically reside in the brain are activated to release reactive oxygen species (ROS), toxic substances that damage nerve cells and myelin in a process called oxidative injury. The researchers refer to these cells as “toxic immune cells.” The researchers created a directory of the toxic immune cells in the spinal cord that contribute to killing nerve cells, Akassoglou says. Then they developed laboratory procedures to help them identify known therapeutic agents that might stop or slow the process. They found that one of those agents stops immune cells from producing toxic substances and prevents nerve damage in an animal model of MS.
Like much of today’s medical research, this project involved a large number of collaborators; in this instance from the Gladstone Institutes, the UCSF Weill Institute for Neurosciences and the Small Molecule Discovery Center (SMDC) at UCSF and Baylor College of Medicine. Mendiola and Jae Kyu Ryu, PhD, assistant adjunct professor of neurology at UCSF and also a former MS Society postdoctoral fellow in Akassoglou’s laboratory, are co-lead authors of the April 2020 paper in Nature Immunology that details the procedures and results.
Mendiola explains that the researchers started with developing a new specialized technology to identify toxic immune cells that release the substances that damage nerve cells and analyze their genetic codes. They were able to determine which of the cells’ genes are “on” or “off” during this process. They hoped this process would help them identify treatments that could target these specific genes and, in this way, slow or stop the damage to nerve cells in MS.
“Surprisingly,” Mendiola emphasizes, “we discovered that only one small group of cells — one subtype of a commonly occurring brain cell called microglia — are the ‘toxic cells’ responsible for most of this damage.” The work was done initially in mouse models of MS called EAE, and it was confirmed in human brain tissue from autopsies of individuals with progressive MS.
The next step was to use a screening procedure in microglia cells in the laboratory. They checked 1,907 chemicals that researchers identified as potentially able to block the genes they think are involved in producing the harmful substances. Further testing narrowed the list to 128 promising chemicals. They homed in on one particular agent, acivicin, to test in animal models of MS. The screen was funded by a Society FastForward grant to Akassoglou and Michelle Arkin at UCSF SMDC.
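The paper’s actual screening code isn’t described here, but hit-picking in screens of this kind often reduces to flagging compounds whose readout falls far outside an untreated baseline. The sketch below illustrates that idea with invented compound names, readout values and threshold.

```python
# Minimal sketch of hit-picking in a chemical screen: keep compounds whose
# readout (e.g., toxic ROS production) drops far below the untreated
# baseline. All values, names and the cutoff are hypothetical.
import statistics

baseline = [100, 97, 103, 99, 101]  # readout with no compound added
mu, sd = statistics.mean(baseline), statistics.stdev(baseline)

readouts = {"compound_A": 55, "compound_B": 98, "compound_C": 40}
hits = {name: round((value - mu) / sd, 1)
        for name, value in readouts.items()
        if (value - mu) / sd < -3}  # more than 3 SDs below baseline

print(hits)  # compounds that strongly suppressed the readout
```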
“Acivicin is a drug that has been used in cancer but not MS,” Mendiola explains. “One way it may help stop the nerve damage is by interrupting the normal breakdown of the natural antioxidant glutathione that is made in the brain.” The antioxidant has chemical properties that may allow it to destroy the harmful substances released by toxic immune cells before nerve cell damage can occur.
Ryu and his colleagues tested acivicin and were excited to discover that it prevented the development of MS-like symptoms in two different EAE mouse models. First, acivicin blocked the development of EAE in genetically predisposed mice that had not yet developed symptoms of the disease. Secondly, it also prevented relapse in another group of mice that had a chronic, longer-term form of EAE. In this experiment, the control mice, which did not get acivicin, got sicker, but the mice receiving acivicin did not.
Acivicin itself may not be a promising therapy in MS because of its known severe side effects when used as a cancer treatment. However, the work has demonstrated an exciting new target for the development of new safe therapies to preserve glutathione and block oxidative injury in MS. The team also discovered other small molecules targeting pathways relevant to MS to test in future studies. Their study also introduced a novel approach to identify agents that can protect nerve cells and could slow or stop the progression of MS.
Both scientists are excited about the possibilities of future research, including finding ways to selectively eliminate toxic immune cells from the brain and identifying safer compounds that block oxidative damage to nerve cells. Akassoglou also stresses that all of the data for genes and drugs from the recent study are available in an open-source format so that the research community as a whole can use these novel approaches to target this type of nerve damage not just in MS but in other neurodegenerative diseases as well. | https://momentummagazineonline.com/halting-nerve-damage/ |
The question most of genetics tries to answer is how genes connect to the traits we see. One person has red hair, another blonde hair; one dies at age 30 of Huntington’s disease, another lives to celebrate a 102nd birthday. Knowing what in the vast expanse of the genetic code is behind traits can fuel better treatments and information about future risks and illuminate how biology and evolution work. For some traits, the connection to certain genes is clear: Mutations of a single gene are behind sickle cell anemia, for instance, and mutations in another are behind cystic fibrosis.
But unfortunately for those who like things simple, these conditions are the exceptions. The roots of many traits, from how tall you are to your susceptibility to schizophrenia, are far more tangled. In fact, they may be so complex that almost the entire genome may be involved in some way, an idea formalized in a theory put forward last year.
Starting about 15 years ago, geneticists began to collect DNA from thousands of people who shared traits, to look for clues to each trait’s cause in commonalities between their genomes, a kind of analysis called a genome-wide association study (GWAS). What they found, first, was that you need an enormous number of people to get statistically significant results — one recent GWAS seeking correlations between genetics and insomnia, for instance, included more than a million people. Second, in study after study, even the most significant genetic connections turned out to have surprisingly small effects. The conclusion, sometimes called the polygenic hypothesis, was that multiple loci, or positions in the genome, were likely to be involved in every trait, with each contributing just a small part. (A single large gene can contain several loci, each representing a distinct part of the DNA where mutations make a detectable difference.)
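A minimal sketch of the polygenic hypothesis in code, with invented effect sizes and genotypes: a trait value is modeled as the sum of very many per-locus contributions, none individually decisive.

```python
# Minimal sketch of a polygenic trait score: sum tiny per-locus effects
# over many loci. Effect sizes and genotypes are hypothetical.
import random

random.seed(0)
n_loci = 100_000
effects = [random.gauss(0, 0.01) for _ in range(n_loci)]       # tiny effects
genotypes = [random.choice([0, 1, 2]) for _ in range(n_loci)]  # allele counts

score = sum(b * g for b, g in zip(effects, genotypes))
print(f"trait score from {n_loci:,} loci, none individually decisive: {score:.2f}")
```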
How many loci that “multiple” description might mean was not defined precisely. One very early genetic mapping study in 1999 suggested that “a large number of loci (perhaps > than 15)” might contribute to autism risk, recalled Jonathan Pritchard, now a geneticist at Stanford University. “That’s a lot!” he remembered thinking when the paper came out.
Over the years, however, what scientists might consider “a lot” in this context has quietly inflated. Last June, Pritchard and his Stanford colleagues Evan Boyle and Yang Li (now at the University of Chicago) published a paper about this in Cell that immediately sparked controversy, although it also had many people nodding in cautious agreement. The authors described what they called the “omnigenic” model of complex traits. Drawing on GWAS analyses of three diseases, they concluded that in the cell types that are relevant to a disease, it appears that not 15, not 100, but essentially all genes contribute to the condition. The authors suggested that for some traits, “multiple” loci could mean more than 100,000.
The reaction was swift. “It caused a lot of discussion,” said Barbara Franke, a geneticist at Radboud University in the Netherlands who studies attention deficit hyperactivity disorder (ADHD). “Everywhere you went the omnigenic paper would be discussed.” The Journal of Psychiatry and Brain Science did a special issue just of response papers, some of them taking exception to the name, some saying that after all it was just an expansion of earlier ideas. A year on, however, the study has been cited more than 200 times, by papers whose subjects range from GWAS data to individual receptors. It seems to have encapsulated something many people in the genomics community had been turning over in their minds. But exactly what scientists should do with its insights depends on whom you talk to.
An Infinity of Small Effects
The origin of the idea lies in a very simple observation: When you look at the portions of the genome that GWAS findings have flagged as significant to individual traits, they are eerily well-distributed. Pritchard and his colleagues had been studying loci that contribute to height in humans. “What we realized was that the signal for height was coming from almost the whole genome,” he said. If the genome were a long string of ornamental lights, and every DNA snippet linked to height were illuminated, more than 100,000 lights would be shining all the way down the string. That result contrasted starkly with the general expectation that GWAS findings would be clustered around the most important genes for a trait.
Then, while looking at GWAS analyses of schizophrenia, rheumatoid arthritis and Crohn’s disease, the researchers found something else unexpected. In our current understanding, disease often arises because of malfunctions in key biological pathways. Depending on the disease, this might lead to the overactivation of immune cells, for example, or the underproduction of a hormone. You might expect that the genetic loci incriminated by GWAS would be in genes in that key pathway. And you’d expect those genes would be ones used specifically in the types of cells associated with that disease: immune cells for autoimmune diseases, brain cells for psychiatric disorders, or pancreatic cells for diabetes, for instance.
But when the researchers looked at disease-specific cell types, an enormous number of the regions flagged by GWAS were not in those genes. They were in genes expressed in nearly every cell in the body — genes doing basic maintenance tasks that all cells need. Pritchard and his colleagues suggest that this manifests a truth that is perhaps not always taken literally: Everything in a cell is connected. If incremental disruptions in basic processes can add up to greatly derange a trait, then perhaps nearly every gene expressed in a cell, no matter how seemingly unrelated to the metabolic process of interest, matters.
In its broadest strokes, this idea has been around since 1918, when R. A. Fisher, one of the founders of population genetics, proposed that complex traits could be produced by an infinite number of genes, each with infinitely small effects. But his was a statistical model that didn’t refer to any actual, specific biological conditions. It seems we are now in the era of being able to provide those specifics.
“This was the right paper at the right time,” according to Aravinda Chakravarti, a professor of neuroscience and physiology and director of the Center for Human Genetics and Genomics at New York University, who was a prepublication reviewer of the omnigenics paper in Cell. He and others had noticed many examples of how widely distributed genetic influences could be, he said, but they had not put them together into a coherent thesis. He disagrees with critics who say the paper simply stated the obvious. “The paper clarified many points of view. It didn’t matter if I had thought about it — I had not thought about it hard enough. And I had never heard anybody thinking about it hard enough, with any clarity, [such] that it formed any new hypothesis.”
In the paper, Pritchard and his colleagues proposed that, when geneticists seek what’s responsible for a disease or trait, it may be fruitful to think of the genes in a cell as a network. There may be some very highly connected genes at the center of a disease process, which they dub core genes. Peripheral genes, meanwhile, in aggregate help tip the scales one way or the other. The Cell paper authors suggest that understanding of the core genes will offer the best insights into the mechanism of a disease. Piecing together how peripheral genes contribute, on the other hand, will broaden understanding of why some people develop a disorder and others don’t.
Do Core Genes Exist?
Since the Cell paper’s publication a year ago, scientists’ discussion has circled around whether such a distinction is useful. David Goldstein, a geneticist at Columbia University, is not sure that disease processes must truly be routed through core genes, but he also says that the idea that not everything picked up by GWAS is central and specific to a given disease is important. In the early days of GWAS, he said, when a connection between a genetic locus and a disease was detected, people would take that as a sign that it should be the target of investigation for new treatments, even if the connection was weak.
“Those arguments are all fine — and were — unless something like what Jonathan is describing is going on,” he continued. “That’s a really big deal in terms of our interpretation of GWAS,” because weakly connected loci might then be less useful for getting at the pathology of a disease than people thought.
Yet that may well depend on the disease, according to Naomi Wray, a quantitative geneticist at the University of Queensland who pointed out when scientists first started doing GWAS analyses that they should expect to see many weak associations. A few conditions, she says, are primarily attributable to a small number of identifiable genes, or even just one — yet other genes may still flip the switch between one manifestation of illness and another. She cites the example of Huntington’s disease, a progressive neurological disorder caused by a specific defect in one gene. The age at which it strikes depends on how many repeats of a particular DNA sequence someone has in that gene. But even among patients with the same number of repeats, the age at which symptoms first appear varies, as does the severity with which the disability progresses. Scientists in the field are looking at other loci linked to Huntington’s disease to see how they might be causing the differences.
“These [loci] are by definition in peripheral genes. But they’re actually how the body is responding to this major insult of the core gene,” Wray said.
For most complex conditions and diseases, however, she thinks that the idea of a tiny coterie of identifiable core genes is a red herring because the effects might truly stem from disturbances at innumerable loci — and from the environment — working in concert. In a new paper out in Cell this week, Wray and her colleagues argue that the core gene idea amounts to an unwarranted assumption, and that researchers should simply let the experimental data about particular traits or conditions lead their thinking. (In their paper proposing omnigenics, Pritchard and his co-authors also asked whether the distinction between core and peripheral genes was useful and acknowledged that some diseases might not have them.)
Teasing out the detailed genetics of diseases will therefore continue to require studies on very large numbers of people. Unfortunately, in the past year, Pritchard has been told that some groups applying for funding to do GWAS have been turned down by reviewers citing the omnigenics paper. He feels this reflects a misinterpretation: Omnigenics “explains why GWAS is hard,” he said. “It doesn’t mean we shouldn’t do GWAS.”
Franke, who sees the paper as a provocatively phrased extension of earlier ideas, says that it has nevertheless shaped her thinking in the past year. “It made me rethink what I know about signal transduction — about how messages are relayed in cells — and how functions are fulfilled,” she said. The deeper you look at the workings of a cell, the more you realize that a single common protein may have quite different effects depending on what type of cell it is in: It may bear different messages, or block different processes, so much so that traits that might seem to be quite disconnected begin to change.
“It gave a lot of food for thought,” she said of the paper, “and I think that was the goal.”
| https://www.quantamagazine.org/omnigenic-model-suggests-that-all-genes-affect-every-complex-trait-20180620/
Study Could Lead to New Treatments for Swallowing Disorders
The average human swallows 500-700 times a day. Imagine if each one of those swallows was a struggle.
For many who suffer from esophageal motility disorders like dysphagia that affect the way the muscles in the esophagus deliver food and liquids to the stomach, the act of swallowing can be difficult or even painful. It can turn something as simple as a sip of water into a violent fit of coughing. Brought on by conditions like gastroesophageal reflux disease, or GERD, degenerative diseases like Parkinson’s disease and even just old age, these disorders can lead to problems like dehydration, malnutrition, pneumonia and choking, and they affect the quality of life of approximately half a million Americans every year and as many as one in five individuals over the age of 50.
The causes of these conditions are not well understood by medical science, but a study published this month in the journal Cell Reports by a team of scientists from the University of Virginia’s College and Graduate School of Arts & Sciences and UVA’s School of Medicine identifies the unique genetic fingerprint of the nerve cells that govern the motor function of the esophagus. According to the scientists involved, this opens a new avenue of approach to the treatment of esophageal motility disorders that could lead to new therapies and new pharmaceuticals, offering hope to those who live with the debilitating effects of dysphagia and other disorders affecting the esophagus.
The study, led by doctoral candidate Tatiana Coverdell, began as an attempt to identify the neural pathways from the brain that control heart rate. The human body contains a complex array of neural pathways that connect the brain to each of the body’s organs, and much about how these pathways are organized and function is still not understood.
Coverdell and her co-authors, John Campbell, a molecular neuroscientist and biology professor with the College, and Stephen Abbott, a pharmacology professor with UVA’s School of Medicine, were focused on a region of the hindbrain in the lower part of the brainstem called the nucleus ambiguus. Previous studies have suggested that the nucleus ambiguus is connected by nerve projections to the heart, the larynx, the pharynx (which carries air, food and fluid down from the nose and mouth) and the esophagus, controlling how they function. In the process of looking for a specific pathway to the heart, they discovered a particular neuron subtype whose axons, or nerve fibers, lead to the esophagus and which, when activated, causes esophageal contractions.
“A lot of our projects begin with generating a parts list for a particular region of the brain,” Campbell said. “We want to know all the different cells that make up that region and what they do, and we profile their gene expression to identify them. To figure that out, we look at where each cell type sends their axons because that tells us what organ it controls. That’s how we ended up down this road.”
Current therapies for esophageal disorders involve stimulation of the vagal nerves, the main nerve system that controls body functions referred to as “resting and digesting” that can’t be consciously controlled. However, that pathway also controls a wide range of other functions, like cardio-respiratory and digestive functions, and these therapies can create a variety of unwanted effects.
“The vagus nerve is a superhighway of information between all of your visceral organs and your brain, and these motor neuron axons are found within that,” Campbell said. “But when you stimulate that, you’re activating everything: all these different pathways between the brain and the other connected organs. Having a more targeted approach to affect just esophageal motor function would allow these therapies to be more precise.”
Campbell and Coverdell realized that their findings could have practical clinical applications, and they reached out to neurophysiologist Stephen Abbott to help them characterize the function of the cells they discovered.
“The discovery has a lot of clinical significance, because it allows us to more specifically target the esophagus rather than targeting the entire region and having a lot of off target effects,” Coverdell said. “Future research can look at this and develop more targeted therapeutics for disorders of swallowing.”
“We now have a complete gene expression profile for these esophageal motor neurons,” Campbell added. “We know all the receptors that they express, all the neuropeptides and other signals that they express — any of these could be pharmacological targets in treating esophageal motility disorder. It also gives us access to the whole neural circuitry that controls swallowing. So, we can work backwards from these neurons that control the contractions of the esophagus, and that will give us a complete picture of how the swallowing program is represented in the brain.”
Abbott agrees that the discovery will have significant implications for the field of medicine, especially as the U.S. population continues to age.
“The ability to identify and study the nerve cells controlling esophageal function allows us study issues of esophageal motility disorders, and we hope this will promote improved treatments for these disorders that are prevalent in the elderly,” Abbott said. “We have all the possibilities in front of us now that we have this information.”
According to Deborah Roach, chair of the College’s Department of Biology, the findings are important evidence of the role cross-disciplinary research is having in advancing the boundaries of both science and medicine. | https://politics.as.dev.artsci.virginia.edu/news/study-could-lead-new-treatments-swallowing-disorders |
Jane Norgren was suffering terribly. A pernicious bacterium, Clostridium difficile, had taken up residence in her colon and refused to leave—no matter what antibiotics she took. As a result, she had bouts of uncontrollable diarrhea on and off for nearly four years. It was a miserable situation. She always had to be near a bathroom. “I felt like I was untouchable,” she says.
Then Norgren’s agony was over. That’s because she was one of the first people in Connecticut to benefit from fecal microbiota transplantation (FMT). Most of the existing bacteria in her digestive tract were killed by a heavy dose of antibiotics and replaced by healthy bacteria from a fecal donor (her daughter). “I have been fine ever since,” she says.
The procedure was performed by Paul Feuerstadt, MD, an assistant clinical professor of medicine at Yale School of Medicine. He says that more than 95 percent of the patients he has treated in this way through the Gastroenterology Center of Connecticut have recovered.
FMT is the first treatment that has emerged from a wave of research on the role of the microbiome in health. For years, most medical researchers treated microbes as a sideshow to the main event—the role of genetics in illness and medicine. Now, many are turning their attention to the impact on health of the trillions of bacteria, fungi, and other tiny organisms that colonize human digestive systems, lungs, nasal passages, skin, vaginas, and many other body parts and surfaces.
“For medical science, this is our next frontier. The better we understand our microbiome, the better we’ll be able to potentially avoid diseases or treat them more successfully,” says Feuerstadt.
Researchers believe that the microbiome plays a role in a wide variety of human diseases and conditions, including inflammatory bowel disease, irritable bowel syndrome, Crohn’s disease, lupus, chronic fatigue syndrome, fibromyalgia, cardiovascular disease, cancer, Parkinson’s, diabetes, cystic fibrosis, asthma, autism, and some forms of mental illness.
At Yale School of Medicine, researchers from more than a dozen academic departments are studying the microbiome. Some focus on understanding the fundamental mechanisms whereby bacteria interact with each other and with our bodies. Others develop tools for sequencing and editing genes in bacteria. Still others are focusing on the effects of the microbiome on specific diseases.
While these are early days, it’s already clear that Yale researchers are playing a key role in the microbiome revolution. For instance, Loren Laine, MD, professor of medicine and interim chief of the Section of Digestive Diseases, was instrumental in establishing the American Gastroenterological Association’s Fecal Microbiota Transplantation National Registry, which helps researchers assess short- and long-term outcomes associated with FMT.
This wave of microbiome research has been gathering strength for about a decade. However, the history of such research began in the 1860s when the French chemist Louis Pasteur showed that microbes are present all around us and in our bodies, and that some are responsible for diseases. Ever since, medical scientists have been studying the microbiota in the environment, animals, and humans so that physicians can better combat infection. With the emergence of antibiotics in the 1930s and 1940s, many believed that infectious diseases would swiftly be eliminated. Sadly, it didn’t happen. Now, resistance to antibiotics is a major concern.
Until a little over a decade ago, most research on the microbiome focused on which microbes played a role in causing infectious diseases, and how. Then came the Human Genome Project (HGP), launched in 1990 and completed in 2003, which provided the foundation of a better understanding of the genetic basis of such diseases as cancer. It was discovered that genes alone were often not sufficient to fully explain why complex diseases occur or how therapies work. So researchers began to look more deeply into the genetic makeup of microbes and their interactions with each other and with human cells—both negative and positive.
In an effort reminiscent of the HGP, the U.S. National Institutes of Health in 2007 launched the Human Microbiome Project. The goal was to fund research and collect data about the genomes and interactions of all the microorganisms in or on our bodies. Research funded by the project provides today’s scientists with a trove of data upon which to base their new inquiries.
One factor that makes this research so complex is that no two people have the same microbiome. While broad commonalities exist, each human has his or her own stew of bacteria and other organisms. In addition, while our personal microbiomes tend to be relatively stable, they change with the introduction of new organisms from the environment. So my microbes interact with my body differently than yours do with your body; and they interact differently today than they did six months ago.
And consider this: There are 150 times more genes in our microbiome than in our genome.
Yale faculty members say this complexity must be overcome. The ability to sequence and map human genomes has raised hopes that physicians will be able to understand an individual’s body so well that they can deliver truly personalized medicine—custom-designed therapies and treatments that will work especially well for that individual. Yet it’s becoming clear that understanding human genes and cells won’t be enough. “To truly understand the signals that regulate the expression of both healthy and diseased genes, you need to understand the microbiome. Precision medicine will be fairly imprecise without this,” says Gary Desir, MD, the Paul B. Beeson Professor of Medicine and chair of Internal Medicine.
Scientists admit they are still at the beginning stages of understanding the role of the microbiome in health. “We aren’t yet at the point where we can look at what is there in a microbiome and tell you much about what it can do. It’s hard to identify a diseased microbiome if you don’t know what a healthy one is,” says Andrew Goodman, PhD, an associate professor of microbial pathogenesis.
Goodman is determined to change that. His lab on Yale West Campus focuses on deepening understanding of how microbes in our digestive systems, the so-called gut microbiome, interact with each other and us. He and his colleagues have seen situations in which one microbe species uses a vitamin produced by another to survive in the gut; and others in which microbes fight each other to the death. On the microbe-to-host axis, the team is learning how gut microbes could affect how particular individuals respond to particular drugs.
So far, most of this experimentation involves mice—but not just any mice: germ-free mice. Only by using mice with no bacteria present can researchers introduce individual species or consortia of microbes and study the effects of their absence or presence. Although Goodman has a dedicated team of experts in these techniques, the med school also has a central germ-free mouse facility that all the scientists can access.
Another essential element of microbiome research is the tools that are used to read and analyze the genetic code within microbes and then to edit or even recode their DNA so that they interact with each other and our bodies in different ways.
Scientists employ the same tools for microbes that are used for human genetics research. The CRISPR/Cas9 technology has democratized gene editing by enabling scientists to cut and paste snippets of DNA code relatively easily. But researchers at Yale are some of the leaders in developing new approaches that don’t involve breaking the double strands of DNA and killing cells.
One effort is led by Farren Isaacs, PhD, an associate professor of molecular, cellular and developmental biology. His lab produces and uses high-throughput gene engineering technologies. With these tools, the researchers can rewrite the genomes of human cells and bacteria on a large scale—introducing scores of precise edits without creating double-stranded breaks in the strings of genes. His team developed a technique called eukaryotic multiplex genome engineering (eMAGE) and used it to alter the genetic information in yeast. The team members hope this technique will be used eventually to alter disease-causing genes in human cells and microbes.
Earlier this year, Isaacs’ lab pitched in on an effort aimed at engineering communities of gut microbes to help humans digest cellobiose, one of the most abundant disaccharides present in vegetables, which our digestive systems don’t have the means to metabolize. If efforts like this pay off, we will be able to harvest more energy from the food we eat.
Across the spectrum of Yale academic departments, researchers are using germ-free mice and gene sequencing and editing tools to advance research on the role of the microbiome in a wide variety of diseases and body functions. Laypeople tend to think that bacteria in the digestive system stays put, but in fact, microbes migrate via the blood to a host of other body organs and systems, including the liver, the lymph nodes, and even the brain. Much of the research focus at Yale is on infectious diseases, the immune system, and autoimmune diseases.
Martin Kriegel, MD, PhD, FW ’06, adjunct assistant professor of immunobiology, and of medicine (rheumatology), has been focusing on Enterococcus gallinarum, a bacterium that his team discovered can migrate from the gut into the lymph nodes, liver, and spleen in predisposed hosts—thereby producing an autoimmune response. Kriegel’s research team found that they could suppress the response caused by E. gallinarum in mice with an antibiotic or vaccine. Research like this could help pharmaceutical companies produce antibiotics and vaccines that are particularly good at attacking specific bacteria.
“This may become personalized medicine,” says Kriegel. “We need to look at the host predisposition such as the human genes, and at the microbiome in the patient’s gut and tissues. Based on these analyses, we should be able to decide on the best treatment for a particular patient in the future.”
Other labs are focusing on prevention in addition to cures. Li Wen, MD, PhD, FW ’97, associate professor of medicine (endocrinology), is looking into the causes of type 1 and type 2 diabetes. Her team’s research has shown that certain types of gut bacteria cause intestinal inflammation, which in turn promotes the development of type 1 diabetes. At the same time, the research shows that obesity and other factors associated with type 2 diabetes are more prevalent in patients with low diversity in the gut microbiome. Down the road, she believes, people with a genetic predisposition to diabetes may be able to fend off the disease by consuming customized probiotic cocktails.
“I think prevention is most important,” she says. “It’s the most effective and economical way to deal with diabetes.”
While most of the research at Yale concerns the gut microbiome, a handful of faculty members are focusing on other body parts or systems. For instance, Barbara Kazmierczak, MD, PhD, a professor of medicine and of microbial pathogenesis, specializes in the lung microbiome—specifically the effects of bacteria on cystic fibrosis.
Much of the research being done at Yale, while promising, is not expected to deliver new treatments or drugs for many years. But some projects might be closer to the marketplace. For instance, two professors—Noah Palm, PhD, assistant professor of immunobiology, and Richard Flavell, PhD, Sterling Professor of Immunobiology—are co-founders of a startup company, Artizan Biosciences, that aims to identify harmful bacteria in the gut and target them for destruction. One of their first targets: inflammatory bowel disease.
“We found that when we take a healthy microbiota and add one microbe to the mixture, the mouse gets sick. On the flip side, if you’re able to block the pathologic effects of that bug or eliminate it entirely, the mouse does not get sick,” says Palm. The next step is figuring out how to attack the bad bug—with targeted antibiotics, small molecules, or perhaps even phages.
Elsewhere around the country, a number of medical schools are establishing major research initiatives focused on the microbiome. Among them are Harvard, University of Chicago, Stanford, University of Pittsburgh, University of Michigan, and New York University.
Several Yale researchers say they’d like to see a larger and more coordinated effort to foster and support microbiome research. Such an initiative might make it easier for them to secure funding and develop more multidisciplinary collaborations. Moreover, there would likely be benefits from expanding the variety of research done at Yale. Right now, most of the focus is on experimental science and the gut microbiome. They’d like to see more done in data analysis and in other microbiomes. | https://medicine.yale.edu/news/yale-medicine-magazine/a-sophisticated-system/ |
There are over 450 genetic diseases affecting bone, and the cause of at least 75 of them is still unknown. Identifying the genetic basis of bone disease has previously helped not only the families affected by these genetic diseases, but also the development of therapies for more common conditions such as osteoporosis. In recent years, we have been able to identify the genetic basis of more than four genetic diseases using new DNA sequencing technologies. Identifying the gene is the first step in understanding a disease and developing targeted therapies.
In our research program, we propose to study other rare bone diseases to identify the gene responsible, to use cell and mouse studies to understand how these genes work, and finally to attempt to develop new therapies targeting the pathways these genes affect. Notably, we have recently identified mutations in fibronectin, a key protein of connective tissues, as a cause of a skeletal dysplasia characterized by scoliosis and abnormal growth plates. Moreover, we found rare fibronectin mutations in individuals with idiopathic scoliosis, a condition affecting up to 3% of the population.
Our research will not only help the patients and families affected by these diseases, but will also advance research on bone development, bone maintenance, and cell signaling, and will allow us to test emerging therapies for bone diseases. | http://www.frqs.gouv.qc.ca/en/la-recherche/la-recherche-financee-par-le-frqs/projets-de-recherche/projet/maladies-genetiques-de-l-os-nouveaux-genes-et-nouveaux-traitements-b970torx1540835195123 |
Investigators with The Cancer Genome Atlas (TCGA) Research Network have identified new potential therapeutic targets for a major form of bladder cancer, including important genes and pathways that are disrupted in the disease. They also discovered that, at the molecular level, some subtypes of bladder cancer — also known as urothelial carcinoma — resemble subtypes of breast, head and neck and lung cancers, suggesting similar routes of development.
The researchers’ findings provide important insights into the mechanisms underlying bladder cancer, which is estimated to cause more than 15,000 deaths in the US in 2014. TCGA is a collaboration jointly supported and managed by the National Cancer Institute (NCI) and the National Human Genome Research Institute (NHGRI), both parts of the National Institutes of Health.
“TCGA Research Network scientists continue to unravel the genomic intricacies of many common and often intractable cancers, and these findings are defining new research directions and accelerating the development of new cancer therapies,” said NIH Director Francis Collins, MD, PhD.
In this study, published online 29 January 2014 in Nature, investigators examined bladder cancer that invades the muscle of the bladder, the deadliest form of the disease. The current standard treatments for muscle-invasive bladder cancer include surgery and radiation combined with chemotherapy. There are no recognised second-line therapies — second choices for treatments when the initial therapy does not work — and no approved targeted agents for this type of bladder cancer. Approximately 72,000 new cases of bladder cancer will be diagnosed in the US in 2014.
“This project has dramatically improved our understanding of the molecular basis of bladder cancers and their relationship to other cancer types,” said lead author John Weinstein, MD, PhD, professor and chair of the Department of Bioinformatics and Computational Biology at The University of Texas MD Anderson Cancer Center in Houston. “In the long run, the potential molecular targets identified may help us to personalise therapy based on the characteristics of each patient’s tumour.”
“The real excitement about this project is that we now have a menu of treatment and research directions to pursue,” said Seth Lerner, MD, professor and chair in urologic oncology at Baylor College of Medicine in Houston, and one of the senior authors of the paper. “The field is poised to use this information to make new advances toward therapies for a very-difficult-to-treat form of bladder cancer.”
The research team analysed DNA, RNA and protein data generated from the study of 131 muscle-invasive bladder tumours from patients who had not yet been treated with chemotherapy. The scientists found recurrent mutations in 32 genes, including nine that were not previously known to be significantly mutated. They discovered mutations in the TP53 gene in nearly half of the tumour samples, and mutations and other aberrations in the RTK/RAS pathway (which is commonly affected in cancers) in 44 percent of tumours. TP53 makes the p53 tumour suppressor protein, which helps regulate cell division. RTK/RAS is involved in regulating cell growth and development.
The investigators also showed that genes that regulate chromatin — a combination of DNA and protein within a cell’s nucleus that determines how genes are expressed — were more frequently mutated in bladder cancer than in any other common cancer studied to date. These findings suggest the possibility of developing therapies to target alterations in chromatin remodeling.
Overall, the researchers identified potential drug targets in 69 percent of the tumours evaluated. They found frequent mutations in the ERBB2, or HER2, gene. The researchers also identified recurring mutations as well as fusions involving other genes such as FGFR3 and in the PI3-kinase/AKT/mTOR pathway, which help control cell division and growth and for which targeted drugs already exist.
Because the HER2 gene and its encoded protein, HER2 — which affects cell growth and development — are implicated in a significant portion of breast cancers, scientists would like to find out if new agents under development against breast cancer can also be effective in treating subsets of bladder cancer patients.
“We’ve organised our medical care around the affected organ system,” Dr Lerner said. “We have thought of each of these cancers as having its own characteristics unique to the affected organ. Increasingly, we are finding that cancers cross those lines at the molecular level, where some individual cancers affecting different organs look very similar. As targeted drug agents go through preclinical and clinical development, we hope that rather than treating 10% of breast cancers or 5% of bladder cancers, it eventually will make sense to treat multiple cancer types where the target is expressed.” The same theme runs through TCGA’s Pan-Cancer project, which is aimed at identifying genomic similarities across cancer types, with the goal of gaining a more global understanding of cancer behavior and development.
“It is increasingly evident that there are genomic commonalities among cancers that we can take advantage of in the future,” said NHGRI Director Eric D. Green, MD, PhD. “TCGA is providing us with a repertoire of possibilities for developing new cancer therapeutics.”
The scientists also uncovered a potential viral connection to bladder cancer. It is known that animal papilloma viruses can cause bladder cancer. In a small number of cases, DNA from viruses — notably, from HPV16, a form of the virus responsible for cervical cancer — was found in bladder tumours. This suggests that viral infection may contribute to bladder cancer development.
“The definitive molecular portrait of bladder cancer by the TCGA Network has uncovered a promising array of potential therapeutic targets that provides a blueprint for investigations into the activity of existing and novel therapeutic agents in this cancer,” said Louis Staudt, MD, PhD, director, NCI Center for Cancer Genomics. | http://neitec.com/noticias/tcga-bladder-cancer-study-reveals-potential-drug-targets-similarities-to-several-cancers/ |
Scientists have sequenced the DNA of the hookworm, a parasite that feeds on blood in the human digestive tract. The study identified genes responsible for the worm invading the body, as well as those controlling its feeding and growth, and named certain proteins as potential drug and vaccine targets.

'We now have a more complete picture of just how this worm invades the body, begins feeding on the blood, and successfully evades the host immune defences', said Dr Makedonka Mitreva of the Washington University in St Louis, USA, who led the study.

Hookworm infections affect ten percent of the world's population. Although they have been kept in check with drugs, there are many areas where the parasite has become resistant, and infections are spreading. They rarely lead to death, but are very dangerous to pregnant women, causing anaemia and malnutrition, and they delay the development of affected children.

The scientists identified the proteins that allow hookworm larvae to enter the human body through skin, as well as those needed by the adult hookworm to feed on blood in the gut.

The team also found which proteins help the hookworm evade the body's defence mechanisms: SCP/TAP proteins. These are thought to be crucial to hookworm survival and therefore make ideal drug targets. Because they are involved in suppressing the host's immune system, they are also being investigated as potential treatments for auto-immune and inflammatory diseases.

'It is our hope that the new research can be used as a springboard not just to control hookworm infections but to identify anti-inflammatory molecules that have the potential to advance new therapies for autoimmune and allergic diseases', added Dr Mitreva.

The scientists also screened a class of drugs known as protein kinase inhibitors to find out their effects on hookworms' cell function. They found that more than 200 of these drugs could be effective in fighting hookworms, with the most promising candidate currently used in treating leukemia.

The study was published in Nature Genetics.
| https://www.progress.org.uk/decoded-hookworm-dna-helps-highlight-drug-targets/ |
Scientists have identified five genetic subtypes that could help categorise patients with diffuse large B cell lymphoma (DLBCL) to customise their treatment options, it has been announced.
Scientists at Dana-Farber Cancer Institute and the Broad Institute of MIT and Harvard, USA, say the genetic subtypes, identified through genomic analysis, can help to pinpoint likely therapeutic targets.
About 60% of DLBCL patients can be treated successfully with a combination of four chemotherapies plus a targeted drug that inhibits a B cell surface protein.
However, treatment options for the “very substantial fraction” of patients who develop recurrent disease are less successful.
Although existing clinical tests can predict which patients with DLBCL can be treated effectively with standard treatments, they do not help inform the improvement of treatment for other patients.
The study aimed to integrate data on three types of genetic alterations that can drive tumours (mutations to genes, changes in gene copy number, and chromosomal rearrangements) and to define previously unappreciated disease substructure.
Dr Margaret Shipp of Dana-Farber Cancer Institute said: "Specific genes that were perturbed by mutations could also be altered by changes in gene copy numbers or by chromosomal rearrangements, underscoring the importance of evaluating all three types of genetic alterations.
“Most importantly, we saw that there were five discrete types of DLBCL that were distinguished one from another on the basis of the specific types of genetic alterations that occurred in combination."
The researchers also examined the tumour subtypes using RNA expression data associated with cell of origin, and discovered that each of the two major cell-of-origin subtypes could be split into separate categories with distinct genetic signatures.
An additional subtype, which is defined by TP53 gene alterations and associated genomic instability, was unrelated to the cell of origin.
The researchers then found clear links between given genetic subtypes and how patients responded to standard treatment.
"We feel this research opens the door to a whole series of additional investigations to understand how the combinations of these genetic alterations work together, and then to use that information to benefit patients with targeted therapies," says Dr Shipp. | https://b-s-h.org.uk/about-us/news/genomic-analysis-could-help-patients-with-dlbcl/ |
Discussion on the Molecular Basis of Inheritance Class 12 NCERT Solutions:
In this chapter for Class 12, we will learn about genes and genetics in detail. Our genes determine much of our biological constitution and influence our chances of developing certain diseases and disorders. It is important to learn about them so as to better understand the human body and, by extension, ourselves.
This chapter navigates many subtopics and questions relating to genes and the important role they play in heredity and inheritance. The phenomenon of heredity is central to biology, and inheritance is the process by which genes are passed from generation to generation. By learning about genes, we understand how they influence our traits and characteristics.
We’ll also take a closer look at the RNA world and at DNA replication. Transcription and the genetic code are other very important topics that will be explored in this chapter.
Finally, this chapter will also talk about the Human Genome Project. This massive project was undertaken to identify, map, and sequence all of the genes of the human genome. We also examine DNA fingerprinting, its importance in forensics, and how it aids criminal investigations.
In the 19th century, biologists established that a material responsible for heredity is passed from parent to child. This idea was based on the principle that organisms inherit their traits from their parents.
In 1869, Swiss scientist Friedrich Miescher discovered a substance in the nuclei of cells that contained genetic information. He called it "nuclein"; it was later identified as deoxyribonucleic acid (DNA). Miescher was not aware of what it was or what it did, but he knew for sure that it was found only in the cells of living things.
The story of the discovery of the genetic material is a classic example of scientific discovery. Although Miescher isolated nuclein in 1869, it took roughly 75 more years, until the mid-20th-century experiments of Avery, MacLeod and McCarty and of Hershey and Chase, before DNA was confirmed as the hereditary material, and before scientists began to understand how DNA is copied and passed on to offspring.
The search for the genetic material that makes us who we are has fascinated scientists for ages. In the 19th century, Charles Darwin also acknowledged the fundamental importance of heredity, although his proposed mechanism relied on the inheritance of acquired characters. Nineteenth-century researchers also found that cell materials were made of tiny chemical building blocks, such as amino acids, purines, and pyrimidines.
The search for the material responsible for the transmission of genetic information is one of the greatest scientific adventures of the 20th century. | https://msvgo.com/cbse/ncert-solutions-class-12-biology-molecular-basis-of-inheritance |
Twenty-five years ago, scientists from the National Cancer Institute uncovered the VHL gene, a gene whose mutation can lead to the development of kidney tumors.
The discovery, the result of a decade-long partnership between CCR scientists and families affected by the disease, paved the way for new targeted therapies that have improved the prognosis for patients with advanced kidney cancers.
“We’re seeing increased progression-free survival for kidney cancer patients,” says W. Marston Linehan, M.D., Chief of CCR’s Urologic Oncology Branch, who co-discovered VHL with his NCI colleagues in 1993. “We’re thrilled about the progress that has been made.”
When Linehan began his career as a surgeon in the 1970s, patients with advanced kidney cancers were treated with a standard regimen of surgery and chemotherapy, but the drugs did little to stop the cancer’s spread. Today, kidney cancer is recognized not as a single disease but as a number of diseases, each driven by distinct genetic features that shape its clinical course and response to therapy, and targeted therapies are available for the most common form, renal clear cell carcinoma.
This more nuanced view builds on decades of research, much of which can be traced to a bold decision by Linehan and his NCI colleague Berton Zbar, M.D., to begin searching for a kidney cancer gene in the early 1980s. At the time, few genes had been linked to cancer of any type. Genetic analyses were far more laborious and costly than they are today, and Linehan says many researchers were skeptical that their efforts would turn up anything useful.
But patients desperately needed new treatments, and Linehan and Zbar knew that in order to develop them, scientists would need a better understanding of the disease. Their early studies of tumors from patients did hint at a genetic mutation associated with kidney cancer—but when the two scientists calculated that it would take them more than half a century to pinpoint the gene itself with the standard techniques available at the time, they agreed they needed a new strategy.
Redirecting their hunt, Linehan and Zbar turned to patients who were at risk of developing tumors in several organs, including the kidneys, associated with an inherited syndrome called von Hippel-Lindau (VHL). Although most kidney cancers do not run in families, the team hoped that patients with VHL might share a genetic abnormality that would help explain how kidney cancers arise—and ultimately, how to treat and/or prevent them.
The hereditary renal cancer program that Linehan and Zbar established at NCI brought hundreds of patients and their families to the NIH Clinical Center in Bethesda, Maryland. With the support of colleagues from across the NIH, the patients received expert clinical care for their complex disease, while they and their family members became part of a massive search for clues that might point the way to an effective treatment.
Working with another NCI colleague, Michael Lerman, M.D., Ph.D., and an international team including Eamonn Maher, Linehan and Zbar began comparing their patients’ DNA with their unaffected family members’ DNA. Answers did not come quickly, but the scientists persisted. “We saw these patients every week and I told them, ‘We are going to continue going to work on this. We’re not working on anything else,’” Linehan recalls.
As more and more families affected by VHL volunteered for the study, the researchers were able to close in on their target. In 1993, after analyzing the DNA of a large number of individuals, the team found what it had been looking for: a gene that was altered in patients with VHL but intact in family members without the disorder.
The researchers soon found that mutations in the VHL gene are not only responsible for the inherited syndrome, they are also found in the tumors of most patients who have developed clear cell kidney cancers without a family history of the disease. In contrast, the gene was rarely mutated in the tumors of patients with other types of kidney cancer.
This seminal discovery paved the way for studies that tied VHL to a key oxygen-sensing pathway and revealed how a failure in the pathway promotes tumor growth. Researchers around the world used that knowledge to explore ways to block tumor growth.
As a result, the U.S. Food and Drug Administration has approved nine drugs that target the VHL pathway for the treatment of patients with advanced kidney cancer. Linehan and his colleagues and others have also gone on to link sixteen more genes to various forms of kidney cancer, offering insight into those diseases and hope for targeted treatments.
Twenty-five years after the VHL gene’s discovery, Linehan is pleased to see drugs that target the VHL pathway having a real impact. He’s quick to note, however, that his work is far from finished.
“We have a long way to go,” he says. “There are remarkable responses, but we need to do a whole lot better to be able to cure the majority of patients with this disease.” Linehan continues to see patients weekly, while he and his team investigate kidney cancer’s causes and test new treatment strategies in clinical trials. “When I see patients in the clinic who have these cancers that have spread, I think, I don’t care what it takes. We owe it to these people.”
Source: cancer.gov
| https://www.capitalpublishing.com/fhm18---nephrology---25-years-since-discovery-of-first-gene-linked-to-kidney-cancer.html |
Staying focused on your task, priorities and mission is vital to your success. But it doesn’t come easily when you’re overwhelmed with daily distractions, a long to-do list, and multiple projects that demand your attention.
Here are seven strategies to stay super focused:
Say “no, thank you.”
Get clear on what you really want to achieve. Choose deliberately. Prune your to-do list. Declutter your schedule. Shed meaningless tasks. Forget about goals that no longer serve you. Switch gears or change the channel. Drop, delegate or barter assignments that don’t cater to your core strengths and true purpose.
Having too much on your plate weighs you down and leaves a mess of unfinished work. Pick three essential tasks to complete on a given day, or three major goals to accomplish in a week. When something isn’t right for you, say “no, thank you.” This will give you more time and space to commit to things that matter.
Mentally rehearse the task.
Visualize the ideal process, instead of obsessing over desired results. Picture yourself performing the task brilliantly and with ease. See yourself overcoming obstacles and maneuvering around hurdles. How will you feel when the deal is done? Elated? Excited? Evolved? Use these positive vibes to inspire you, pull you in, and take focused action.
Keep your energy up during breaks.
When you’re in a state of flow, it’s invigorating to stay on task. But forcing yourself to soldier on, when you’re drained, impairs your creativity and productivity. Regular breaks, for as little as 5 to 15 minutes, can do wonders. Take a walk, chat with a friend, grab a healthy snack, or get some fresh air.
Without consistent renewal and rejuvenation, it’s hard to stay alert and maintain focus. Set a regular bedtime routine and get a good night’s rest to avoid zoning out. Step away from the task when your interest in it begins to plummet. Go back to it when you refuel your energy.
Stop multitasking.
Doing multiple things at once or switching rapidly between tasks is the opposite of focus. So pick one important task and fully engage with it. Before you move on to the next thing, pause intentionally, take a deep breath, and bask in gratitude for the thing you just did.
If you tend to get bored doing one task, you could set a timer to perform it in short bursts of 15 to 25 minutes. Or you could batch together similar tasks that require the same resources. For example, run your errands, file paperwork, reply to emails, and return telephone calls in designated time blocks.
Boost your willpower.
Focus requires self-control and the ability to resist short-term temptations for long-term gains. Breath-work, yoga and meditation are among the most effective ways to boost your willpower. These mindful practices help you take deliberate action, regardless of your shifting thoughts and volatile emotions.
You don’t have to follow through on each thought or act on every emotion that arises. You can simply sit with it without getting carried away by it. Come back to your breath. Do a body scan. Return to the present moment. Honing your willpower helps you stay focused rather than get distracted by mental chatter and unwanted feelings.
Make it automatic.
Develop regular habits and simple routines to make a task more automatic. Lay out the tools you will need to complete it. Pick a specific time to perform it. Set up reminders to work on it and reward yourself when you do.
When an action step is part of your routine, you are bound to resist it less. This helps you preserve your energy and attention span for more difficult tasks that aren’t easily automated.
Create a supportive environment.
Constant interruptions and unnecessary distractions dilute your focus. Arrange your work space to discourage unscheduled visits. Plug in your earphones and listen to soothing music or white noise. Move to a quieter place if you can’t block out office banter. Schedule time blocks to focus on the task at hand.
If you want to complete a challenging project, turn off your phone, mobile devices and email and IM notifications. Disconnect from the Internet. Optimize your environment to keep your focus, find flow in your work, and experience real progress.
* * *
Use one, all or a combination of these strategies to overcome internal busyness and reduce external distractions. Review what works for you. Make use of your preferred techniques to stay super focused and get meaningful things done. | https://www.lifehack.org/articles/productivity/7-strategies-stay-super-focused.html |
It's the time of year when even the most seasoned professionals can experience what the World Health Organization calls "occupational burnout." Let's face it, everyone is overwhelmed with the holidays, pandemic lockdowns, and end-of-year tasks. When you add a demanding job with an overabundant workload, this only fuels our negative, and sometimes even cynical, job-related feelings.
If you're experiencing mental exhaustion resulting from work's excessive demands as well as physical symptoms such as headaches and sleeplessness, quickness to anger, and foggy thinking, it's time to take a break. Your brain and body can only handle feeling overworked and overwhelmed for so long.
Here are 3 things you can do to start your road to burnout recovery:
- Understand Why You're Burnt Out - It may feel counter-intuitive to focus on this negativity boiling up inside of you, but if we don't address the true problem, nothing will ever change. Make a list or talk to someone you can trust to identify when and where in particular you are feeling resentment or stress.
- Prioritize & Delegate - Focus on what has to be done first and break down milestones for your big projects. Decide which tasks are less important and set them aside for another time to address and conquer. You can’t do everything yourself either, so if you're taking on more tasks than you can handle, pass them off to someone you trust.
- Take Care of Yourself - The key to burnout recovery is taking charge of your physical and emotional health. Are you eating healthy, drinking plenty of water, and sleeping well? What about regular exercise? Aim to exercise for 30 minutes or more per day, or break that up into short, 10-minute bursts of activity. Just 10 minutes spent walking, dancing, or even playing sports can improve your mood for two hours. Taking a vacation or even just an extra day off is a great way to recharge so you can focus on healing and starting your new healthy routines. If you're unable to take time off, schedule short breaks throughout your workday and take just 10 minutes to step away. If you can spare a bit more time, find a dark, quiet area where you might be able to take a nap or at least relax.
3 Simple Productivity Tips
Once you're ready to get back to work after a nice break, it's time to be productive so that instead of feeling overwhelmed, we're left feeling accomplished. Here are 3 productivity tips to help you stave off that burnt out feeling and be more successful in your professional life:
- Major Management - Time management skills are crucial to getting everything done. Thanks to technology, there are a plethora of applications and software designed to improve one's productivity. Here are just 5 of my favorite productivity tools for a more organized, consistent, and efficient work routine:
- Teams - This chat-based workspace is a digital hub that combines instant messaging, voice, video, calling, and file-sharing, making communication and collaboration a breeze.
- Sticky Notes - Organize and keep track of your daily to-do list, jot down notes and reminders, or even your mantras (but never passwords!!!).
- OneNote - This digital notebook automatically backs up to Microsoft’s Office 365 cloud, can be shared with colleagues for real-time collaboration and allows you to capture just about anything. Type notes, record audio, create a quick sketch, add pictures, videos, and any other document.
- Password Manager - How many logins and passwords do you have? If you have more than 3 online accounts, you'll want a password manager to generate, store, and remember all of your passwords, making them easily accessible only to you. They can even be configured to log into websites with the appropriate information automatically, prompt us for timely password updates/resets, and even generate passwords for us on the fly, should we so desire. Here are a few password manager options to check out:
LastPass
KeePass
SplashID
1Password
- Outlook - Outlook streamlines email, calendar, contacts, tasks, and more. Organize your email and contacts with Outlook rules, and schedule "out-of-office" or “no meeting” time blocks to limit distractions and enforce your routine to keep yourself accountable.
- Rinse & Repeat - Every decision you make during the day uses brainpower. To save energy for important decisions, we routinize small ones, from your daily exercise routine to what you wear. When we create a ritual around our new positive habits, the accomplished repetition feels like a reward to the mind and body.
- Don't Distract - While technology is here to make our lives easier and better, it also distracts us from our ability to work effectively. Turning off notifications and enabling do not disturb features on your personal device is a great start. Although multitasking feels like you're getting more work done, research has shown that multitasking can actually hinder productivity due to the accumulated time people waste switching between tasks. Plan your day so that you have blocked off times to work on particular projects and other times set aside to accomplish the small tasks that pop up and would otherwise interrupt you.
I hope these tips leave you feeling better and more productive than ever before. There is never enough time in the day, so let's make the most of it. If you are wasting time troubleshooting your computer, your phones, or your printer, then technology is compromising your productivity. Let technology work for you, schedule a free IT consultation or give myself and the technology experts at CTTS a call now: (512) 388-5559. | https://www.cttsonline.com/2020/12/08/tech-tip-203-troubleshoot-your-time/ |
As an innovative learning technique, spaced repetition, or spaced learning, has become a popular topic over the last few decades. The method consists of a series of short, intense learning sessions with increased student involvement, separated by short breaks during which students do a completely different activity. That said, when and where can this technique be applied? This post will explore a few key instances of when to use spaced repetition to enhance learning.
Spaced learning can be beneficial to anyone who wishes to learn new information, but it is particularly helpful to students and those doing company training. It is applied in instances where a learner must study a huge volume of new information and keep it indefinitely in memory.
Memorizing all the information presented in a classroom setting can be quite overwhelming. Students usually try to cram all the available material at the last minute. This is inefficient and exhausting, to say the least. The study technique of spaced learning can help students expedite the learning process considerably, and it can complement nearly any other study technique. This is a great example of when to use spaced repetition.
Put simply, spaced learning uses increasing lengths of time between review sessions of previously studied information. When you answer a question incorrectly, the next review session for that question or topic is scheduled much earlier. On the other hand, for questions you answer correctly, the next review is pushed further out. This way, you don't waste time studying information you already know, but rather focus on the things you are just about to forget. A minimal scheduler implementing this rule is sketched below.
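To make that scheduling rule concrete, here is a minimal sketch of a Leitner-style scheduler in Python. It illustrates the general principle described above rather than the algorithm of any particular app, and the interval lengths are arbitrary values chosen for the example:

```python
from datetime import date, timedelta

# Review intervals in days for each Leitner "box": the better you know an
# item, the higher its box and the longer the wait before the next review.
INTERVALS = [1, 2, 4, 8, 16, 32]

class Card:
    def __init__(self, question, answer):
        self.question = question
        self.answer = answer
        self.box = 0               # new cards start at the shortest interval
        self.due = date.today()    # and are due immediately

    def review(self, answered_correctly):
        if answered_correctly:
            # Correct answer: push the next review further out.
            self.box = min(self.box + 1, len(INTERVALS) - 1)
        else:
            # Wrong answer: schedule the next review much earlier.
            self.box = 0
        self.due = date.today() + timedelta(days=INTERVALS[self.box])

def due_today(cards):
    """Return only the cards whose review date has arrived."""
    return [card for card in cards if card.due <= date.today()]

cards = [Card("capital of France?", "Paris"), Card("7 x 8?", "56")]
cards[0].review(answered_correctly=True)   # due again in 2 days
cards[1].review(answered_correctly=False)  # due again tomorrow
```

A correct answer moves a card up one box, lengthening the wait before its next review; a wrong answer drops it back to box zero, so it comes around again the very next day.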
When to use spaced repetition? Whenever possible.
Organizational knowledge is a competitive advantage that should never be overlooked. A good number of employees face a never-ending range of information they must be acquainted with to be successful in their profession, and training is undoubtedly part of the solution.
Spaced repetition can be a remarkable and economical approach that organizations can use to help employees retain knowledge. When organizations, via seminars and workshops, deliver learning in short bursts over time, the loss in productivity is smaller. As opposed to sending their employees back to school, organizations can incorporate spaced learning on the job. This creates greater impact while still keeping costs under control.
Bottom line, spaced learning can be applied to a myriad of subjects and occasions. Most importantly, this technique is particularly helpful in the era of mobile learning, where learners can use mobile devices to access short bursts of learning material, even while on the go. Indeed, most things involving learning or studying can be coupled with spaced learning apps, and your use of them is limited only by your creativity.
If you’d like to learn more about EdApp’s Spaced Repetition feature, called Brain Boost, get in touch at [email protected]. You can also try EdApp’s Mobile LMS for free by signing up on their website. | https://www.edapp.com/blog/when-to-use-spaced-repetition/ |
Did Red Crucifix In The Sky Herald A Cosmic Death And Birth?
A nearby short duration gamma-ray burst may be the cause of an intense blast of high-energy radiation that hit the Earth in the 8th century, according to new research led by astronomers Valeri Hambaryan and Ralph Neuhäuser. The two scientists, based at the Astrophysics Institute of the University of Jena in Germany, publish their results in the journal Monthly Notices of the Royal Astronomical Society.
An artist’s impression of the merger of two neutron stars. Short duration gamma-ray bursts are thought to be caused by the merger of some combination of white dwarfs, neutron stars or black holes. Theory suggests that they are short lived as there is little dust and gas to fuel an ‘afterglow’.
Credit: Part of an image created by NASA / Dana Berry.
In 2012 scientist Fusa Miyake announced the detection of high levels of the isotopes carbon-14 and beryllium-10 in tree rings formed in 775 CE, suggesting that a burst of radiation struck the Earth in the year 774 or 775. Carbon-14 and beryllium-10 form when radiation from space strikes atoms in the upper atmosphere; in particular, neutrons produced in these collisions convert nitrogen into carbon-14. The earlier research ruled out the nearby explosion of a massive star (a supernova), as nothing was recorded in observations at the time and no remnant has been found.
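Written out as a nuclear reaction (standard physics background, stated here from general knowledge rather than from the paper itself), the carbon-14 production channel is:

```latex
n \;+\; {}^{14}\mathrm{N} \;\longrightarrow\; {}^{14}\mathrm{C} \;+\; p
```

Beryllium-10 forms in a related way, through cosmic-ray spallation of nitrogen and oxygen nuclei in the atmosphere.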
Prof. Miyake also considered whether a solar flare could have been responsible, but these are not powerful enough to cause the observed excess of carbon-14. Large flares are likely to be accompanied by ejections of material from the Sun’s corona, leading to vivid displays of the northern and southern lights (aurorae), but again no historical records suggest these took place.
Ground-based observation by W.M. Keck Observatory: The location of the gamma-ray burst as determined by the Swift X-Ray Telescope (XRT) is shown by the blue circle on this infra-red image of the sky from the W.M. Keck telescope. The burst went off in the outskirts of this huge galaxy 2.6 billion light years away. The location outside, but near, the galaxy fits perfectly with the theory that short bursts are due to black hole formation when orbiting neutron stars collide. The gamma-ray source has not yet been securely identified and scientists are most interested in the 4 objects within the Swift X-ray telescope's error circle.
Credit: Cenko, Soifer, Bian, Desai, Kulkarni (Caltech), Berger (Carnegie), Dey and Jannuzi (NOAO)
Following this announcement, researchers pointed to an entry in the Anglo-Saxon Chronicle that describes a ‘red crucifix’ seen after sunset and suggested this might be a supernova. But this dates from 776, too late to account for the carbon-14 data and still does not explain why no remnant has been detected.
Drs. Hambaryan and Neuhäuser have another explanation, consistent with both the carbon-14 measurements and the absence of any recorded events in the sky. They suggest that two compact stellar remnants, i.e. black holes, neutron stars or white dwarfs, collided and merged together. When this happens, some energy is released in the form of gamma rays, the most energetic part of the electromagnetic spectrum that includes visible light.
In these mergers, the burst of gamma rays is intense but short, typically lasting less than two seconds. These events are seen in other galaxies many times each year but, in contrast to long duration bursts, without any corresponding visible light. If this is the explanation for the 774 / 775 radiation burst, then the merging stars could not be closer than about 3000 light years, or it would have led to the extinction of some terrestrial life. Based on the carbon-14 measurements, Hambaryan and Neuhäuser believe the gamma ray burst originated in a system between 3000 and 12000 light years from the Sun.
If they are right, then this would explain why no records exist of a supernova or auroral display. Other work suggests that some visible light is emitted during short gamma-ray bursts that could be seen in a relatively nearby event. This might only be seen for a few days and be easily missed, but nonetheless it may be worthwhile for historians to look again through contemporary texts.
Sequence shows the death of two neutron stars, and a gamma-ray burst, preceding the birth of a black hole
Credit: Goddard Space Flight Center
Astronomers could also look for the merged object, a 1200 year old black hole or neutron star 3000-12000 light years from the Sun but without the characteristic gas and dust of a supernova remnant.
Dr Neuhäuser comments: “If the gamma ray burst had been much closer to the Earth it would have caused significant harm to the biosphere. But even thousands of light years away, a similar event today could cause havoc with the sensitive electronic systems that advanced societies have come to depend on. The challenge now is to establish how rare such carbon-14 spikes are, i.e. how often such radiation bursts hit the Earth. In the last 3000 years, the maximum age of trees alive today, only one such event appears to have taken place.”
Contacts and sources:
Dr Ralph Neuhäuser, Institute of Astrophysics, University of Jena
Dr Robert Massey, Royal Astronomical Society
| https://beforeitsnews.com/space/2013/01/did-red-crucifix-in-the-sky-herald-a-cosmic-death-and-birth-2453412.html |
According to published reports, more companies are using spyware to monitor employee activity while working from home.
By definition, this means that bosses are more focused on the employee’s activity, rather than the outcomes they generate. They’re measuring hours worked, calls answered, cases closed, tickets resolved. And now, they’re measuring the amount of time employees are spending in an application, on a phone call, or on a specific task.
Certainly, these are measures of work done. But do they measure work done well? Do they measure customer satisfaction? Efficient use of time?
More important questions are: Do these measurements encourage the desired behavior from employees? Do they support the company’s objectives?
Punishment or reward?
Let’s imagine a company that claims to provide world-class customer service, provided by a service team responsible for responding to customer queries via phone, email, and chat. There are a few ways one could use measurement to gauge the team’s effectiveness:
- Scenario 1 — Measure output, such as the number of calls each rep handles per hour, the amount of time spent on each call, the number of other contacts (e.g. email and chat queries) the rep handles, and the customer cases closed.
- Scenario 2 — Measure outcomes, such as the response time (i.e. the amount of time it takes for reps to answer the phone or respond to a chat query), customer satisfaction and Net Promoter Scores (a metric sketched below), and customer retention.
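Net Promoter Score, mentioned in the second scenario, has a standard definition that is simple to compute: on a 0-10 "how likely are you to recommend us?" scale, it is the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 to 6). The short Python sketch below shows the calculation; the sample ratings are invented for illustration:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: survey answers from ten customers.
print(net_promoter_score([10, 9, 9, 8, 7, 10, 6, 5, 9, 10]))  # prints 40.0
```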
Imagine you’re a newly hired representative on this team, and you’re meeting with your boss to set your goals.
- Scenario 1 — Your goals include responding to a certain number of customer queries per day. You’re advised that in order to meet this number, you need to ensure that each customer contact is completed within a certain number of minutes; otherwise, you won’t achieve the daily goal.
- Scenario 2 — Your goals are presented in the context of a larger team goal, which is to continually reduce the team’s response time, and increase the customer satisfaction scores. Your boss also mentions team rewards if quarterly customer retention numbers are met, and an annual bonus if the company meets its annual customer satisfaction and retention goals.
Both of these sets of measures require employees to respond swiftly to customer queries in order to succeed. However, whereas one team will be focused on the amount of time spent on each query, the other team will be focused on ensuring customer satisfaction. It comes down to whether individual goals are framed with a punitive mindset or a positive one. Which do you believe would deliver better outcomes?
Nose to the grindstone, or time to think?
Let’s imagine another scenario, in this case, involving a specialist team, such as a group of product managers, software developers, or marketers, whose daily remit is usually a blend of team projects, individual contribution, administrative tasks, and independent research. The team has been working remotely for four months.
- Company A has implemented corporate spyware for its employees and advises everyone that activities on their computers are being monitored.
- Company B does not monitor employee activity.
Let’s imagine both companies have clearly defined goals that are based on outcomes achieved. What behaviors do their respective systems reward? Which company would you bet produced better results? Consider what some recent research has found regarding productivity.
More breaks = better productivity
A software startup called the Draugiem Group conducted a study using their time and productivity tracking software, and came to a startling conclusion about productivity: what matters isn’t the length of the workday. Instead, a more important driver of productivity was how individuals structured their days. Specifically, the workers who routinely took short breaks were far more productive than peers who kept their noses to the grindstone for longer periods of time.
How is this even possible?
Our brains burn a lot of energy, and staying focused requires effort. The study found the ideal split between work and rest was 52 minutes of work, followed by 17 minutes of rest. Those who kept that schedule achieved a much higher degree of focus in their work, staying 100% dedicated to the task at hand. When they felt fatigued, they took short breaks, separating themselves entirely from work for a short period, before starting another highly productive hour.
Their peers, on the other hand, diluted their focus by multitasking, checking social media, or allowing themselves to be distracted by email. On paper, they showed greater “inputs” (time spent at their desk) but in reality, they accomplished less.
There’s a biological basis for this. Our brains are wired to work in bursts, followed by short rests. And herein lies a problem with corporate monitoring: if employees feel they have to keep up constant activity, their actual productivity will degrade.
Time spent focused on a task represents “deep work,” which is defined by productivity pioneer Cal Newport as “the ability to focus without distraction on a cognitively demanding task.” Developing the ability to do “deep work” without distraction will increase one’s productivity significantly. However, employees who know they’re being monitored are less likely to take the sort of breaks that improve thinking and performance, and are less likely to get into “deep work” mode.
We’ve tried it, and it works
The Engagement Multiplier team are enthusiastic experimenters, and I’ve encouraged them to think of their workdays in several different ways.
First, I introduced them to the Entrepreneurial Time System pioneered by Dan Sullivan of Strategic Coach®, comprising Free Days, Focus Days, and Buffer Days. In a nutshell, Free Days are time off. Focus Days are days dedicated to heads-down work, with 80% of your time spent on the important functions that really matter in your role. Buffer Days are for cleaning up loose ends, resolving issues, routine meetings, administrative details, and, most importantly, preparation so you can be really effective on your Focus Days and not get bogged down with too much ‘stuff’.
Our team also works in “sprints” during the day, creating focused periods of work interspersed with multiple short, re-energising breaks (taken with my blessing!) throughout the day. I can personally attest to the productivity gains a team will realise by employing these practices.
Downsides to monitoring
While monitoring can help improve performance, there are downsides to too much monitoring and process optimization. According to an NBC article titled “Is Constant Corporate Monitoring Killing Morale?”, too much monitoring can have the following effects:
- Forcing employees to work in similar styles, irrespective of factors such as time of day or different individual strengths,
- Employees feel devalued, like they’re “just a number,” because their subjective contributions are ignored,
- Adding stress and continual fear of reprimand or firing due to not meeting numbers.
Done poorly, monitoring can prevent employees from doing their best work by encouraging them to focus on the behavior that is being monitored, reducing the energy and effort they could be applying to deliver results. At its worst, it also degrades productivity and morale and increases employee turnover.
Outcomes, not outputs
Doubling down on micromanagement and intensively monitoring employees also reinforces distrust between managers and their teams. The subtext of intensive monitoring is this: Management doesn’t trust you to do your jobs.
I firmly believe that now, more than ever, it’s time to trust your team to do the right thing and deliver to their pre-agreed outcomes. Anything you do that undermines that (such as spyware or micromanagement) is akin to treating them as though they are incorrigibly lazy.
Trust them, celebrate accomplishments and progress, and watch productivity, job satisfaction and happiness rise – it’s an incredible ROI – and cheaper than destructive software!
To succeed in the new normal, in which remote work becomes more the norm than the exception, leaders should be thinking about how to entirely reframe how they define and measure productivity.
To shift their mindsets, leaders should consider:
- Focusing on achieving business outcomes, versus measuring outputs. For example, instead of measuring the number of whitepapers a marketing team publishes, or the number of features a product team develops, measure the business outcomes those activities were intended to generate, such as sales qualified leads or media mentions in target outlets for the whitepaper, and revenue or adoption rate for the product feature.
- Embrace productivity skills, and encourage employees to educate themselves on their workstyles, deep work, and other skills to help them work better and smarter – not more.
- Set clear, objective-based goals, track progress, and communicate openly and transparently about those goals — this will keep employees focused on the objective, and give managers confidence in their teams.
- Assess your leaders’ abilities across the seven dimensions of leadership required to successfully navigate rapid change and the emerging new normal, and take focused action to help them strengthen necessary skills using the free Leadership Perception Gap survey tools from Engagement Multiplier.
Whether or not they can embrace the necessary changes in leadership style and mindset this moment in time demands will determine whether or not a business recovers and finds success in the post-Covid era. To support the business community, I’m putting purpose before profits and am making Engagement Multiplier’s Leadership Perception Gap survey (along with our core Benchmark Assessment) free for leaders who want to ensure their business are adjusting to the demands of the day and are on track for success. | https://www.engagementmultiplier.com/resources/monitoring-employees-productivity-booster-or-buster/ |
6 Ways Leaders Can Improve Team Productivity
A leader is responsible for ensuring that their team remains efficient and productive—even when facing unprecedented challenges. By implementing modern management techniques that are intended to improve time management, reduce employee stress, and enhance the quality of meetings, team leaders can encourage higher levels of productivity from each team member, boosting the overall efficiency of their organization’s operations. To reach and sustain this higher level of performance, leaders should develop a diverse leadership approach which leverages the following techniques, each of which can impact workplace productivity by enhancing the team’s daily routines.
Pomodoro Technique
Francesco Cirillo developed the Pomodoro Technique as a time management method that breaks the work day into short intervals, helping to limit procrastination and boost productivity. To achieve the best results with the Pomodoro Technique, leaders are encouraged to instruct all of their employees to follow its core guidelines: select a task; work on that task for a set interval (Cirillo recommends 25 minutes); and take a short break (one to five minutes) before getting back to work. Once four full Pomodoro intervals have been executed, a longer break, ranging from twenty to thirty minutes, should be taken, giving employees time to restore their energy and realign their focus with new objectives if necessary (a toy timer implementing this cycle is sketched below). Having these scheduled rest breaks built into the planned workday encourages employees to work faster during periods of production and helps to eliminate time-consuming distractions, ensuring that more can be achieved in the day. Furthermore, utilizing the Pomodoro Technique allows leaders to better understand, and to help their employees understand, exactly how time is being spent and how it can be spent more productively. By monitoring how employees work while using the Pomodoro Technique, leaders can also provide suggestions on how their employees can improve.
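To show how mechanically simple the technique is to operationalize, the toy Python timer below encodes the guidelines just described. The durations come straight from the paragraph above; the task name and console messages are invented for illustration:

```python
import time

WORK_MINUTES = 25         # Cirillo's recommended work interval
SHORT_BREAK_MINUTES = 5   # within the one-to-five-minute range above
LONG_BREAK_MINUTES = 25   # within the twenty-to-thirty-minute range above
POMODOROS_PER_CYCLE = 4   # a long break follows four full intervals

def countdown(minutes, label):
    print(f"{label} ({minutes} minutes)")
    time.sleep(minutes * 60)  # a real tool would show a ticking display

def pomodoro_cycle(task):
    for interval in range(1, POMODOROS_PER_CYCLE + 1):
        countdown(WORK_MINUTES, f"Work on '{task}', pomodoro {interval}")
        if interval < POMODOROS_PER_CYCLE:
            countdown(SHORT_BREAK_MINUTES, "Short break")
    countdown(LONG_BREAK_MINUTES, "Long break")

pomodoro_cycle("draft quarterly report")
```

In practice a team would use a dedicated app or a simple kitchen timer; the point is that the entire method fits in a dozen lines of logic.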
Focus Attention
To maximize team productivity, it is important that employees focus their attention directly on completing important tasks first. This concentration requires clearing the mind of distractions and limiting the number of external disturbances that can interrupt a workflow, especially as unfocused work can lead to mistakes, wasted time, and a drop in overall productivity. One means to encourage focus is incorporating time and stress management tactics like the Pomodoro Technique or meditation to employee routines. Through methods like these, leadership personnel can present their staff with the cognitive tools required to keep them focused at work.
Leaders are also responsible for streamlining the workflow of their subordinates as much as possible in order to achieve maximum productivity. Using a straightforward, streamlined production process, leaders can develop a team-wide sense of urgency that keeps productivity focused on the highest priority tasks within each project, program, or initiative. Doing so is best accomplished by deliberately allocating time to specific tasks and putting a detailed plan in place for the day. These plans are most effective if they hinge on realistic short-term goals that promote the most efficient use of time, labor, and attention. By creating such explicit plans of action, leaders can keep employees focused on necessary activities and improve the overall efficiency of their organization.
Encourage Physical Activity and Time in Nature
A Stanford University research study found that the act of walking encourages more creative thoughts and responses. As generating creative ideas is integral to improving team productivity and to solving problems that may be hindering progress toward team goals, leaders should encourage their teams to take time to walk outside to get their creativity flowing in a productive direction. Productivity can also be enhanced by introducing a low-energy form of exercise, which releases endorphins into the bloodstream and ultimately reduces work-induced stress. Even when people only sit or stand in a natural setting, they may still experience some of the scientifically documented benefits of being in nature, like improved concentration and increased short-term memory. To realize these benefits, leaders should incorporate forms of activity into their teams’ daily routines by encouraging outdoor lunches or scheduling team walks.
Stand-Up Meetings
Richard Branson, who founded the multinational conglomerate the Virgin Group, believes that standing and walking meetings inspire greater creativity from attending staff than long-running seated meetings do. During standing meetings, attention is focused on the people in the room rather than on mobile devices, laptops, or other common office distractions. Without these distractions slowing down the meeting, business topics can be discussed more precisely. Leaders can amplify team productivity significantly by incorporating brief standing meetings into their workplace communication routine.
Open Communication
For a leader, constructing open lines of communication between team members and management is critical to maximizing workforce productivity. Building a culture of open communication ensures that leaders are constantly sharing and receiving feedback on their firm’s performance, while also inviting employees to provide valuable constructive feedback about the performance of their leaders. Although it is impossible for management staff to be directly involved at every single level of a business, regular communication with employees allows leaders to gain necessary insight into the operations of each department. The information received through regular and open communication can be used to develop new, more effective practices, helping employees understand that their voices are heard and that their ideas matter to the success of the organization.
To encourage more communication in the workplace, leaders should invite employee input during meetings, implement a system that carries suggestions to the top levels of management, and provide consistent updates on projects that concern the interests of the workforce. When a communicative approach is implemented in a leadership strategy, the entire organization can operate more smoothly to achieve successful progress towards goals.
Organize Team Activities and Outings
Team productivity can be negatively influenced by staff members who resist communicating, cooperating, and building trust with their colleagues. Facilitating opportunities to develop strong interpersonal relationships is important to productive leadership, as these relationships produce trust, one of the most important traits of a productive team. To this end, leaders should promote team activities, staff outings, and general team-building exercises in order to foster strong relationships between team members and members of upper management.
While there are many different types of team-building activities that can be used to improve productivity, an effective activity should be chosen with the intention of improving a specific area in which some team members may be lacking. Competitive games or athletic events may spark a friendly spirit among employees, encouraging them to do their best to earn achievements in the workplace. Office celebrations for holidays or other special events may allow team members to experience the more social aspects of their colleagues’ personalities, letting them connect in a new way. The ultimate goal for leaders when organizing such activities is to help their teams become more reliant on one another, creating more opportunities for higher workforce productivity.
When a leadership strategy attentively focuses on improving productivity through communication, collaboration and time management tactics, an organization can experience tangible progress towards its goals. For current and aspiring leaders, developing the skill set and knowledge for helping an organization optimize productivity through its employees can be learned through a Master of Science in Leadership program.
Learn More
As the nation’s oldest private military college, Norwich University has been a leader in innovative education since 1819. Through its online programs, Norwich delivers relevant and applicable curricula that allow its students to make a positive impact on their places of work and their communities.
Norwich University’s online Master of Science in Leadership program is designed to help you demonstrate the skills and knowledge needed to lead teams and inspire progress, while also growing your career. The program is practical and pertinent, allowing students to apply leadership concepts immediately to their careers.
Recommended Readings:
What is Change Management Consulting? | https://pro.norwich.edu/academic-programs/resources/6-ways-leaders-can-improve-team-productivity |
Until recently, when I needed a break I’d grab my phone. Whether I was bored, mentally fatigued, or just wanting a pick-me-up, I felt relief checking the news, Facebook, or Instagram.
However, new research suggests there are good ways and not-so-good ways to spend our break time. While some breaks can leave us refreshed and reenergized, others tend to leave us depleted and drained.
In their book “The Distracted Mind: Ancient Brains in a High-Tech World” Dr. Gazzaley, a neuroscientist, and Dr. Rosen, a psychologist, explain that good breaks can reduce mental fatigue, boost brain function, and keep us on-task for longer periods. But Gazzaley and Rosen forewarn that taking the wrong sort of breaks might make us more susceptible to boredom and may actually backfire by making us want to take breaks more often.
In other words, repeatedly checking our phones when we get a tad bored can train us to check more often throughout the day, so by reaching for our phones when we want a break, we may be training ourselves to do it again and again. To resist the onset of boredom and self-interruption at work, Gazzaley and Rosen suggest we avoid our smartphones and instead take breaks that restore the part of the brain we use to keep focused on our goals.
Located right behind the forehead, the prefrontal cortex is considered the most evolved portion of the human brain. Although it has many functions, goal management is its main business. The prefrontal cortex orchestrates attention, working memory, and other cognitive resources in order to help us get what we want.
For example, if my goal is to cook dinner tonight, my prefrontal cortex will help coordinate my brain functions to guide me through the actions needed to complete the meal like navigating the grocery store, following a recipe, and cooking the meal, all while making sure I don’t get sidetracked.
When we work, our prefrontal cortex makes every effort to help us execute our goals. But for a challenging task that requires our sustained attention, research shows that briefly taking our minds off the goal can renew and strengthen motivation later on. Doing activities that rely on brain regions other than the prefrontal cortex is the best way to renew focus throughout the work day.
Research shows that nature exposure is restorative for the mind. One study reported better working memory scores after a walk in a natural environment, but not in an urban environment.
Work in a city? You don’t have to go far to benefit from nature. Just noticing the sights and sounds of natural features around you can help you recharge.
If you are stuck indoors, look at some pictures of nature instead, as research shows they work too. Or try tuning into nature videos on your computer for a few minutes, like a tropical beach or a mountain creek.
Having a moment with ‘nothing to do’ is rare these days. We dodge even the briefest moments of potential boredom with just a few swipes.
However according to Gazzaley and Rosen, avoiding occasional periods of ‘nothing to do’ downtime may have some unintended effects.
When we let our minds wander without focusing on a certain goal, the brain’s default mode network takes over. Daydreaming and doodling tap into default mode network activity and may give some prefrontal cortex functions a rest.
Sit alone, set an alarm for 10 minutes, take a deep breath and be patient. If you need a little help, try the website Do Nothing for 2 Minutes for a quick session.
You can also practice mind wandering in your daily life by keeping your phone in your pocket and letting your mind drift while you wait at a crosswalk, a train station, or in an elevator lobby.
Our eyes bear the burden of our tech-charged lives. Fortunately, doing 20–20–20 eye breaks is a straightforward way to alleviate eye strain and fatigue.
Every 20 minutes, stare at something 20 feet away for 20 seconds. Gazzaley and Rosen explain that this is beneficial because it "…requires blood flow to brain areas that are not related to sustained attention." The shift in blood flow across certain brain regions may be why eye exercises are restorative.
Laughter packs a punch. It increases heart rate and respiration, and it gets our blood pumping as our upper-body muscles are recruited into the action. Although evidence for the long-term benefits of laughter is debated, its short-term effects include some improvements on memory tests.
Spontaneous crack-ups, forced giggles from laughter yoga, and cheesy jokes from Google Assistant all have the same perks.
Listen to a comedy podcast or stream a comedy radio station. Read the comics section of the newspaper in the breakroom. Or keep a funny book at the office to help you get through the next afternoon slump.
We all know regular exercise benefits the body and the brain. The good news is that even short bursts of exercise are helpful for cognition. Just 10 minutes of physical activity can boost attention and memory performance.
Find a secluded space to do a 7-minute workout, do some pushups or planks, or just take a brisk walk around the block. A little physical activity is a great way to rev up your brain without breaking a sweat in your work clothes.
The bottom line is breaks should make you feel better by providing a renewed sense of focus and concentration.
As digital detoxes and tech-free zones rise in popularity, we are beginning to value the benefits of removing technology every now and then. Taking better breaks can encourage creativity and increase focus by relieving the prefrontal cortex of some goal management duties for a while.
The next time you need to take a break at work, ignore your smartphone and skip the newsfeeds. Choose an activity that is restorative so you feel refreshed and more prepared to tackle the rest of your day.
Taking good breaks is important for your daily productivity.
Breaks reduce fatigue, alleviate boredom, and can restore attention.
Using tech during our breaks may backfire, making us more susceptible to boredom and leaving us wanting more breaks, more often.
Restorative breaks can improve attention and refresh our focus.
Originally published at www.nirandfar.com on March 30, 2017. | https://thriveglobal.com/stories/research-reveals-how-to-take-a-better-break/ |
Do you think that you are able to manage your schedule well enough to make some time for yourself or for your hobbies? If your answer is “Yes”, then we would like you to share your strategies with us. If your answer is “No”, then you may want to identify the time-stealers that may be preventing you from sticking to your schedule. Have you tried to find out how much time you typically spend in a day on Twitter or Facebook? Have you ever tried to analyze how emails interrupt your regular work flow? Have you noticed that people who are good time managers manage their attention well and maintain their focus?
When you are able to focus your attention on the task at hand, you are able to complete it on time and start off with the next task in your schedule. Here’s how you can practise a different work approach to manage your time in an efficient manner:
- Try Out Work Rotation: Create a to-do list and allocate five minutes of time to each activity. For example, you can begin with checking emails, then shift to report preparation, and then engage in social media postings. When you are done with your social media posts, you can come back to checking emails, now allocating ten minutes to each activity. When you complete a certain task, scratch it off your list. As you go on repeating the cycle, increase the duration of each activity; when you do an activity for longer durations, you start feeling more focussed. (A sketch of this rotation appears after this list.)
- Delegate: You can consider delegating some tasks to your team. However, you have to make sure that the individuals who will work on your task have the skillset to complete it in the desired manner. Otherwise, you will end up correcting their errors, with your schedule going for a toss.
- Take Breaks: While taking breaks too often may seem to be a major concentration-breaker, you may actually stay more focussed with short bursts of work. For example, if you are writing a blog and finding it difficult to focus on your writing for quite some time, you may consider taking a short break. Write for 25 minutes and then take a 5-minute break to refresh your thoughts. Have a cup of coffee or check your Facebook posts and then resume your work. When you know that you will take a break soon, you will be more likely to focus on your ongoing work.
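Below is the sketch referenced in the first item of this list: a minimal, hypothetical round-robin planner for the work-rotation approach. The task names, the five-minute starting slot, and the five-minute increase per pass are illustrative assumptions, not prescriptions.

```python
# A hypothetical round-robin planner for the work-rotation approach:
# each pass through the to-do list allocates a longer slot per activity.

def rotation_plan(tasks, start_minutes=5, step_minutes=5, passes=3):
    """Yield (pass number, task, minutes) tuples for a rotation schedule."""
    for cycle in range(passes):
        slot = start_minutes + cycle * step_minutes  # 5, 10, 15, ... minutes
        for task in tasks:
            yield cycle + 1, task, slot

if __name__ == "__main__":
    todo = ["Check emails", "Prepare report", "Post on social media"]
    for cycle, task, minutes in rotation_plan(todo):
        print(f"Pass {cycle}: {task} for {minutes} minutes")
```

Completed tasks would simply be removed from the list between passes, mirroring the advice to scratch finished items off.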
You can maintain a logbook to keep track of your time utilization. This will allow you to identify the activities that consume most of your time, so you can plan your schedule better and complete diverse tasks within deadlines. If you have tried out any other ways, do write to us. | https://www.sheathunderwear.com/blogs/editorial/not-so-common-ways-to-manage-your-time-well
Bird Marella attorneys Gary Lincenberg and Peter Shakow authored the article, “A Secret No More – The Rise of Economic Espionage Prosecutions and How to Litigate Them,” published by Criminal Justice magazine.
The article provides a historical overview of the Economic Espionage Act of 1996, reviews its provisions, and details the factors that have contributed to the recent growth of trade secret prosecutions under the Act. Mr. Lincenberg and Mr. Shakow go on to describe and analyze proven strategies – and a number of more novel ones – used to defend against economic espionage and trade secret theft charges in recent years.
Click here to read the full article. | https://www.birdmarella.com/news-insights/news/gary-lincenberg-and-peter-shakow-author-article-on-economic-espionage/ |
What about me?
Date: 2012-08
Author: Cluster, Nicholas Anton
Abstract
There are few studies that investigate successful African American males in mathematics. Using phenomenology and narrative inquiry as research approaches, I gave 11 African American males who excelled in mathematics the opportunity to discuss the experiences that contributed to their mathematical success. The purpose of this study was to identify the factors and experiences that contributed to their success in school and mathematics and also identify any challenges these young men faced and how they overcame them. Data was collected using biographical questionnaires, an on-line discussion board, and individual closing interviews. Thematic analysis (Braun & Clarke, 2006) was then used to analyze the data. The participants identified several personal and schooling factors as having an impact on their mathematics achievement. Personal factors such as parents, mentors, peers, and siblings were identified as being major contributors to their success. The schooling factors included high expectations from teachers, academic environment, and academic enrichment programs. | https://athenaeum.libs.uga.edu/handle/10724/28231 |
The scientific landscape surrounding amyotrophic lateral sclerosis (ALS) continues to shift as the number of genes associated with the disease risk and pathogenesis, and the cellular processes involved, continues to grow. Despite decades of intense research and over 50 potentially causative or disease-modifying genes identified, etiology remains unexplained and treatment options remain limited for the majority of ALS patients. Various factors have contributed to the slow progress in understanding and developing therapeutics for this disease. Here, we review the genetic basis of ALS, highlighting factors that have contributed to the elusiveness of genetic heritability. The most commonly mutated ALS-linked genes are reviewed with an emphasis on disease-causing mechanisms. The cellular processes involved in ALS pathogenesis are discussed, with evidence implicating their involvement in ALS summarized. Past and present therapeutic strategies and the benefits and limitations of the model systems available to ALS researchers are discussed with future directions for research that may lead to effective treatment strategies outlined.
Keywords: FUS; TDP-43; amyotrophic lateral sclerosis; cell models; disease mechanisms; missing heritability; therapeutics.
Mejzini R, Flynn LL, Pitout IL, Fletcher S, Wilton SD, Akkari PA. ALS Genetics, Mechanisms, and Therapeutics: Where Are We Now? Front Neurosci. 2019 Dec 6;13:1310. doi: 10.3389/fnins.2019.01310. PMID: 31866818; PMCID: PMC6909825. | https://www.extivita.org/als-genetics-mechanisms-and-therapeutics-where-are-we-now/ |
Describe concepts of profession as it relates to advanced nursing practice.
Analyze the historical factors that contributed to the development of advanced nursing practice roles.
Explain issues related to the regulatory and credentialing requirements of advanced nursing practice roles.
Analyze the various roles of the advanced practice nurse.
Describe the challenges of professional relationships faced by the advanced practice nurse.
Explain professional networking as it relates to advanced practice.
Explain the role of professional nursing organizations.
Analyze novice to expert clinical practice issues of the advanced practice nurse.
Explain the characteristics of direct clinical practice.
Analyze the competencies that define advanced nursing practice.
Distinguish between personal marketing and marketing of the advanced nursing practice role.
Analyze issues related to marketing, contracting, and reimbursement for the advanced nursing practice.
Explain implications of future trends on the role of the advanced practice nurse. | https://www.phoenix.edu/courses/nrp505.html |
Frequently Asked Questions
What is the purpose of the Facility Master Planning Process [FMP]?
- The primary purpose of this planning process is to ensure that Athens City Schools facilities have the ability to meet the district’s educational needs today and in the future. This will involve facilities adaptable enough to accommodate a variety of educational approaches and strategies.
- The facility planning process is a systematic approach for making educational and facility decisions that will have a long-term impact for Athens Schools.
Why is a Facility Master Planning process being done?
- In order to protect our residents’ investment in our schools, we analyze the way we do business to ensure we deliver on the quality and efficiency our community expects, including our school buildings.
- Athens Schools student demographics change over time and these changes in student needs, (such as location, technological demands, etc.) impact the requirements we have of our school facilities.
- The district engaged construction and school facilities experts to analyze the efficiency of our school buildings, both from a financial and an educational standpoint. The research revealed that some of our buildings, which range in age from 47 to 95 years old, need only minor maintenance, while others could use extensive renovation or even replacement.
- Outside of necessary improvements to facilities in the interest of student and staff health, there are various potential added benefits to programming and overall educational quality. The US Department of Education has provided research regarding the benefits of a quality learning environment on students which include improved learning outcomes, increased attendance levels, attraction and retention of teachers, and happier and healthier teachers.
How long will the Facility Master Planning process take?
- The analysis, community engagement and planning portion of the FMP is scheduled with milestone dates beginning April 2018 and ending October 2018. However, the Facility Master Plan is a document that will evolve over time with changes in demographics and other factors that impact facilities.
- In addition, the Athens City School Board has hosted public forums and participated in public joint city council/school board quarterly meetings for the past two years. These meetings have hosted conversations on topics similar to those discussed in this facility master planning process, and have contributed to the current process.
How will facility Options be explored and developed?
- Options will be explored and developed by reviewing and analyzing:
  - Historical and projected district educational and facility information
  - Building condition information
  - Overlaying the Educational Framework on individual schools
  - Soliciting input from the district level and community members
Will attendance boundaries change?
- It is possible that boundaries may change. However, until all information has been vetted through the Options exploration phase, it is uncertain what the best course of action will be for each facility.
How will any new buildings or building improvements be funded? Are my taxes going to increase?
- The funding mechanisms for these types of projects would likely require a general obligation bond. Sources to repay this bond may be:
  - Property Tax Revenue
  - Sales Tax Revenue
  - Revenue realized by operational cost savings
  - General Fund
- The amount and extent of these resources used will be determined by decision makers. The final solution will likely be a combination of sources.
What happens if we do nothing?
- Many of the facilities are in poor physical condition and are in need of repair. If no action is taken, some building systems will fail and require unplanned emergency repairs. This may result in lost instructional time and the immediate raising of funds.
- The schools will continue to operate inefficiently with operational expenditures increasing.
- The following is a list of the most pressing and essential deficiencies of Athens buildings.
When is the soonest any of these options could be implemented?
- Once funding is secured, these types of construction or renovation projects take between 24 and 36 months. It is unlikely that any new or renovated facilities would be ready before the 2020-21 school year.
How will consolidating into fewer facilities reduce operational expenses if we are still going to have the same number of students?
- The operational savings will likely come in the form of administrative reductions. With fewer buildings to administer, there will be a need for fewer principals and other support staff. The number of teachers is anticipated to stay at current levels because the number of students will remain constant.
- The cost savings may not be realized immediately, because the reduction in administration may be phased in over a few years as the workforce retires.
- There will likely also be utility costs savings due to increased efficiency in modern buildings.
If there are schools that are no longer needed at the end of a building program, what will happen to them?
- There are a variety of uses for a retired school facility; examples include reusing the facility for another city program, selling the facility, or demolishing the facility and using the site as park space. A dedicated process is required to determine final site use, but neither the city nor the school system wants to see old facilities boarded up and left vacant.
Regarding Option 1: What would the traffic impacts be if all students are located at the City Park/Athens Middle site?
- A traffic study was conducted by the firm CDM Smith to address this question. It found that while there would be new challenges in terms of vehicle congestion, there are options on how best to mitigate this. Recommended mitigation techniques include:
  - The construction of various turning lanes
  - Re-striping to implement turn lanes where the road is wide enough
  - The removal of on-street parking and curbs to provide three-lane sections of road.
- While the traffic study provides various options on how to execute these actions, the anticipated overall cost is roughly $1 million.
Can the community do a fundraiser for improvements?
- Our community can choose to do fundraisers and designate that money as it chooses. For example, a community fundraiser could be implemented to raise money to repair/replace the lighting/sound board at ACMS which has a cost of over $200,000.
If a school were to have several floors, where would the younger students be located?
- State law requires that grades 1 and lower be on the ground floor of a school building. The Athens City Schools two-story school concept had grades PK, K, and 1st on the ground floor and grades 2-5 on the top floor. | http://www.dejongrichter.com/athensfmp/home-page/frequently-asked-questions/ |
Zailani, Zainal A.
Mohd Fidzwan, Md Amin
Hadi, Hasnul A.
Abstract
Many organizations are struggling to improve customer-focused quality in today's highly competitive domestic and global markets. At the same time, these organizations have failed to implement the PDCA methodology in their daily control and strategic planning processes. The purpose of this thesis is therefore to solve the backhoe bucket waste problem in the cutting process by using the Plan-Do-Check-Act (PDCA) methodology. The current situation was characterized by an increased quantity of waste from this process. Project objectives were set in the Plan phase to identify the waste and the product defects. Baseline data were collected from different processes in the Do phase to identify the possible causes, across machine, man, material, and method factors, that affect the product. In the Check phase, the results justify the relationship of the vital few causes to the backhoe bucket process that led to the waste. In the Act phase, a solution was chosen to reduce the waste and improve the performance of the process, considering factors such as performance, efficiency, and cost. Several design concepts were analyzed to determine the best solution for the oxy-acetylene cutting process, and an implementation checklist was developed to control the process and provide a guideline to assist workers and equipment in minimizing the waste, defects, or any other problems requiring review. Besides that, the improvements were institutionalized with proper training, documentation, and a handbook provided as a reference guideline. The project demonstrates an actual successful deployment of the Plan-Do-Check-Act (PDCA) methodology, with the design of a new apparatus and the application of several statistical tools and techniques, such as the Pareto diagram and fishbone diagram, as a systematic problem-solving framework for solving other process issues.
First-generation college students are students whose parents do not have a college degree, and they face numerous barriers in college. Yet, several first-generation college students (FGCS) are successful and are on-track to graduate with a bachelor’s degree in four years. Their success is important because education is associated with increased income, quality of life, and social mobility, making educational attainment even more significant in Arkansas, which has both low educational attainment and high poverty. Little is known about what can be done to close the achievement gap. It is important to analyze what helped FGCS succeed so that higher education administrators, faculty, and staff can help other FGCS succeed.
The study used an explanatory sequential mixed methodology to analyze the factors first-generation college students identified as contributing toward being on-track to graduate in four years. Data for the study were collected at the University of Arkansas, an Arkansas land-grant institution. Descriptive statistics and Pearson's chi-square test of independence were used to analyze first-generation students. Focus groups of FGCS were conducted to understand the factors that contributed to being on-track and strategies for success. The study's results indicated that ethnicity and changing the major college of degree program are not related to being on-track to graduate, but other demographic factors like age, residency, and ACT score are significant. FGCS faced multiple barriers like unpreparedness, financial obligations, and relating to their family members, but they were motivated to succeed by many factors, primarily the belief that a college degree was necessary for a better life. They used a few strategies to succeed, such as active involvement in planning their course of study to maximize efficiency. Recommendations for both future research and future practice were made to help first-generation college students succeed.
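To make the quantitative step named in the abstract concrete, here is a hedged sketch of a Pearson chi-square test of independence using SciPy. The contingency table (on-track status by residency) is entirely invented for illustration and is not the study's data.

```python
# Illustrative Pearson chi-square test of independence, as named in the
# abstract. The contingency table below is invented, NOT the study's data.
from scipy.stats import chi2_contingency

# Rows: in-state vs. out-of-state residency.
# Columns: on-track vs. not on-track to graduate in four years.
observed = [
    [120, 45],  # in-state (hypothetical counts)
    [60, 40],   # out-of-state (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
# A small p-value would suggest residency and on-track status are related,
# consistent with the kind of association the study reports for residency.
```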
In a world of climate change and headline-grabbing cyclones, El Niño is one of the most unspoken climate risks in East Asia and the Pacific.
The word "convenience" in the single-use plastics context needs urgent redefinition, especially in the East Asia region where several countries top the list in terms of plastics leaking…
An increasing number of countries have begun to use behavioral sciences to shape public policy, and Brazil is among them.
Rosie the robot was built to analyze the public expenses of Brazil’s congress members and empower citizen demands for social accountability. Read to learn what factors contributed to Rosie's…
Rosie the Robot was built to improve social accountability in how members of congress in Brazil spend public funds.
Informed decision making requires timely and relevant evidence. This holds for national decision makers as well as development practitioners. Here at the World Bank...
Major cities across Latin America are taking concrete action to promote cycling and become more bike-friendly. Given the urgent need to reduce emissions from urban transport, this could serve as a…
Even in the most challenging places, investment and growth are possible. And of all the places most in need of development, the Sahel must sit near the top of the list. | http://blogs.worldbank.org/search?f%5B0%5D=countries%3A60&f%5B1%5D=countries%3A140&f%5B2%5D=countries%3A164&f%5B3%5D=countries%3A267&f%5B4%5D=date%3A2014&f%5B5%5D=date%3A2017&f%5B6%5D=date%3A2019&f%5B7%5D=language%3Aen |
Lukwaro, Elia Ahidi Elisante (2017) A Framework for Adoption of ICT By Traffic Police Force in Vehicle Inspection and Monitoring Automation: A Case Study of Tanzania. Masters thesis, The Open University of Tanzania.
Abstract
The number of road accidents caused by vehicles in the city of Dar es Salaam is alarming. Major causes of road accidents are vehicle defects and speeding, which are largely attributable to the lack of digitally enhanced systemic mechanisms for vehicle inspection and monitoring. To curb vehicle crimes and accidents, automated enforcement needs to complement road safety laws through the adoption of ICT innovations. Accordingly, this study aims to identify critical success factors and to develop a framework for the adoption of ICT in vehicle inspection and monitoring. To achieve this goal, the Grounded Theory Approach (GTA) was used to carry out the study; it was selected based on the nature of the phenomenon and the complexity of obtaining relevant data. An exploratory case study strategy, via field interviews, observation, Focus Group Discussions (FGD), and questionnaires, was used for data collection. The GTA was deployed to analyze the collected data via open, axial, and focused coding phases. The key findings of the study reveal that the critical success factors for facilitating ICT adoption for law enforcement in vehicle inspection and monitoring include ICT policy and strategies, skilled human resources, interoperability, top management support, financial support and investment in ICT, infrastructure development, availability of ICT equipment (hardware and software), security and privacy measures, and staff awareness and training (capacity building). Based on these findings, the framework for the adoption of ICT in law enforcement was developed. The study concludes with a working model that illustrates the implementation of ICT by the traffic police force in vehicle inspection and monitoring.
Akemi Solloway will be hosting a special event this Thursday revolving around the History Of The Kimono, as part of a fundraising exercise for Aid For Japan.
The presentation will explore the history of the traditional Japanese attire through to the present day, including the fusion of the kimono in current fashion.
The event takes place at Strawberry Hill House, the Gothic-style villa developed by author Horace Walpole in the 1700s. Noted as an internationally famous example of Georgian Gothic revival architecture, Strawberry Hill underwent a £9 million 2-year restoration project in recent years.
The History of the Kimono takes place on Thursday 29th September at 19:00. Tickets are priced at £20 each and can be ordered directly from the Strawberry Hill website.
All proceeds from this fundraising event will be split 50/50 between Aid for Japan and The Strawberry Hill Trust Charity. | https://www.aidforjapan.co.uk/the-history-of-the-kimono-2/ |
Jessica Craven (Saitama)
Some of Japan’s oldest surviving textiles, dating from the 8th century in the Nara period, are held in the Tokyo National Museum’s Gallery of Horyuji Treasures. The collection primarily contains ban, or Buddhist ritual banners, from this era. Even though these early designs are rather simple, if you look closely you can discern the intricate attention to detail and craftsmanship that is characteristic of other Japanese products, such as its renowned stationery. While the designs consist of only one color, very delicate and complex textural patterns have been skillfully woven into the fabric.
Until now, I had admittedly never really thought much about Japanese textiles. Like most foreigners (and perhaps even some Japanese people), I only ever really thought of kimono when I thought of traditional Japanese textiles. And while many pieces of kimono-inspired contemporary clothing are made today, they are scarcely designed with the same level of craftsmanship and elegance as traditional kimono. For the most part, even clothing inspired by traditional Japanese fashion is today made in a factory. This led me to wonder… aside from kimono, are there any traditional textile techniques being preserved in Japan today, and are they being modernized to suit contemporary taste and practical wear?
Although some unique weaving practices are still thriving in Japan, what really continues to flourish and evolve in the textile industry is a variety of dyeing techniques with long histories. One of the most significant of these is indigo dyeing. Indigo dyeing is literally everywhere — even in blue jeans (although these are practically all chemically and mechanically dyed now) — and yet we practically never think about it. One of the most accomplished traditional indigo dyeing studios still in existence today is Hiroshi Murata’s Indigo Dyeing Studio, Kosoen.
Mr. Murata was kind enough to let me interview him and take a tour of his studio in Ome City, a beautiful place with dedicated craftsmen at work and the sunlight dappling in. I was able to ask him many questions about the history of indigo dyeing, or aozome, its process, and its future.
Jessica Craven: What is Ome City’s history with aozome?
Hiroshi Murata: Textile production has prospered in this area since the 13th century, including the introduction of traditional indigo dyeing during the Edo period (when the process first began in Japan). Demand for textiles, such as bedding, skyrocketed in the post-war period, and Ome City provided close to 50% of the national demand. This studio, which was established in 1919, greatly contributed to that. I inherited the family business in its third generation, and still continue the tradition of indigo dyeing today. Since inexpensive textile imports have increased dramatically, hundreds of companies have abandoned this practice, so Kosoen is one of the only studios of its kind today, and the only one left in Ome that still uses a completely traditional process. Kosoen uses indigo leaves that are grown and fermented traditionally in Kochi prefecture.
JC: So indigo dyeing began in India and spread to many other countries, right? What sets Japanese indigo dyeing apart?
HM: My studio utilizes more thin lines and delicate patterns. Also, the exact fermentation process has evolved differently in Japan, and this results in a unique shade of blue. It involves two different and separate fermentation processes. The first is the process of making indigo leaves into sukumo, which is the raw material that is made into dye. The second fermentation transforms sukumo into indigo dye. Then the actual fabrics are dyed using many techniques similar to the ones in ukiyo-e. The process from start to finish takes several years, but the dyeing itself takes much less time, although careful attention to detail is required.
JC: What inspired you to continue the indigo dyeing tradition?
HM: As I said, I inherited my family business, but actually I changed it dramatically in 1989. I was convinced that only a return to the quality of Edo-period indigo dyeing would allow our business to continue to prosper under fierce competition from cheap imports. So, we went to Tokushima prefecture to learn traditional indigo dyeing techniques. I wanted to revive these refined techniques and make them known to the rest of the world.
JC: What are you doing to modernize indigo dyeing and make it appealing to people today?
HM: One appeal is our use of only natural techniques. Our products are completely devoid of harmful chemicals, so they are good for the environment and the people who wear them. There has also been renewed interest in the revival of traditional craftsmanship in Japan. When we first revitalized our business in 1989, our sales were small, so we only made small things like table centers, coasters, and other interior household items. However, about ten years ago we gained more popularity and began dyeing and designing clothing, which gives us ample opportunity both to continue traditional designs and to modernize them by dyeing more contemporary styles of clothing.
Mr. Murata’s Indigo Dyeing Studio Kosoen has certainly gained commercial success, receiving a significant number of overseas orders and more foreign visitors than ever before. He has even received invitations to exhibit in Germany and Canada, as well as to sell products at the Museum of Modern Art in New York. In spite of this, his studio was featured in “The Wonder 500,” a list of certified products selected to be “local products that are the pride and joy of Japan but not yet known outside of Japan.” Let’s change that! Mr. Murata’s passion for his craft allows him to create astonishingly beautiful works of art, so please check them out online, if only to gain a deeper appreciation for the art of indigo dyeing!
Jessica Craven is an ALT in Saitama prefecture. She has degrees in both visual art and Japanese, so she enjoys exploring the contemporary art scene in Japan. | https://connect.ajet.net/2018/02/05/indigo-dyeing-studio-kosoen-japanese-textiles-just-outside-of-tokyo/ |
International Women’s Day (IWD) recognizes the social, economic, cultural, and political achievements of women. Annually celebrated on March 8, the day also marks a call to action for accelerating gender parity.
According to the IWD website, the theme for 2020 is #EachforEqual: “An equal world is an enabled world. How will you help forge a gender equal world? Celebrate women’s achievement. Raise awareness against bias. Take action for equality.”
JAPAN Forward would like to reintroduce you to women and girls who are champions in their fields, and who have inspired us with their stories.
EMPRESS MASAKO-SAMA
At a young age, Masako-sama lived in Moscow and New York with her parents, then moved back to Japan when she was eight. Masako-sama is fluent in English, German, and French, and graduated from Harvard University with a B.A. magna cum laude in economics. After returning to Japan, Masako-sama studied law at the University of Tokyo and was among three women who passed the Japanese Ministry of Foreign Affairs entrance exam. Masako-sama now reigns as Empress of Japan and is the mother of one daughter, Aiko-sama.
Stories Related to Empress Masako-Sama:
- [Kimono Style] Jūnihitoe: Empress Masako’s Sumptuous Enthronement Dress
- ‘An Experience to Cherish’: Parade-Goers Celebrate Japan’s New Emperor, Empress
- WATCH | Tokyo Streets Turn Festive as Emperor Naruhito Greets Citizens in Post-Enthronement Parade
- WATCH | Thousands at People’s Parade Celebrate Japan’s New Emperor
- Imperial Trivia: The Empress’s Tiara
SHOKO KANAZAWA, Calligraphy Artist and Philanthropist
Shoko Kanazawa is the most famous calligrapher in Japan, and she may even be the most famous calligrapher in the world. Ironically, she may not even be aware of this achievement. Kanazawa has written three Kanji of the Year with JAPAN Forward, and her second piece for 2019, “Prayer,” was donated to Kumamoto University, which suffered earthquakes and landslides in 2016.
Stories Related to Shoko Kanazawa:
- ‘Wa’ — Harmony — is Shoko Kanazawa’s 1st Reiwa New Year Kanji for JAPAN Forward Readers
- Shoko Kanazawa Donates 2019 New Year Calligraphy ‘Prayer’ to Disaster-stricken Kumamoto
- Shoko Kanazawa: A Down Syndrome Child’s Long Road to the ‘Light’
- ‘Prayer’ is Shoko Kanazawa’s 2019 New Year Kanji for JAPAN Forward Readers
- ‘Shine Brightly’: Calligrapher Shoko Kanazawa Writes New Year Kanji for JAPAN Forward Readers
YOSHIKO SAKURAI, Journalist and President of the Japan Institute for National Fundamentals
Yoshiko Sakurai is a pillar of progressive action in Japan, particularly in governmental affairs. A journalist by profession, Sakurai established the think tank Japan Institute for National Fundamentals in 2007 with a view to re-addressing the fundamental issues that Japan faces.
Stories Related to Yoshiko Sakurai:
- Speak up Japan, the World is Listening
- [Speaking Out] Deploring Japan’s Limited Sense of Crisis
- [Speaking Out] Anti-Japan Tribalism Undermines Tokyo-Seoul Relations
- DIALOGUE | Prime Minister Shinzo Abe, Violinist Ryu Goto Find Common Resolve to End Abduction Issue
- Japan’s Nuclear Power Industry Will Collapse If Government Doesn’t Step In
- Japan’s Biggest Challenge: Urgently Amending the Constitution
- Why Do We Let Japanese Textbooks Carry Debunked Propaganda From China, South Korea?
HIYORI KON, Female Sumo Wrestler
The key protagonist of the Netflix short documentary Little Miss Sumo, Hiyori Kon is pioneering a movement for equality in sumo wrestling in Japan, where she hopes to establish a professional level of competition for female sumo wrestlers. Kon, originally from Aomori, graduates from Ritsumeikan University’s faculty of International Relations in the spring of 2020 and will join a company where she will be part of its sumo team. It is the first time a woman will be accepted onto the company’s team.
Stories Related to Hiyori Kon:
- INTERVIEW | Hiyori Kon, Matt Kay on ‘Little Miss Sumo’ and Whether Japan’s National Sport Is Ready for Women
- Hiyori Kon Meets Matt Kay: The Making of ‘Little Miss Sumo’
SHEILA CLIFFE, Kimono Researcher, Author, Stylist
Sheila Cliffe was born in Plymouth, England, in 1961, and relocated to Japan in 1985. She graduated from Suzunoya Kimono Gakuin and received a special award from Minzoku Ishou Bunka Fukyuu Kyoukai for her work in spreading kimono culture. Cliffe wears kimono regularly, and has earned a PhD in the study of kimono trends. She studied kimono fabric dyeing under Sassa Reiko, and teaches kimono culture and dressing.
Cliffe has spoken in Japan and in many other countries on kimono culture, and has published a book and articles in many journals. She has worked tirelessly in events in Japan and abroad to increase cultural understanding of Japan through spreading knowledge of kimono culture around the world.
Stories Related to Sheila Cliffe:
- Kanto Region Waiting to Be Rediscovered As Center of Kimono Production
- [Kimono Style] Jūnihitoe: Empress Masako’s Sumptuous Enthronement Dress
- [Kimono Style] The Secrets in Shinjuku
- [Kimono Style] Silk Weavers of Tango Peninsula Celebrate 300 Years of Their Craft
- Here’s How to Add Christmas Cheer to Your Kimono, According to Sheila Cliffe
MICHIKO YUSA, Professor of Japanese Thought and Intercultural Philosophy
Professor Yusa is highly regarded for her extensive, in-depth studies of philosophy and the Buddhist or Zen Buddhist thoughts of Kitaro Nishida (1870-1945) and Daisetz T. Suzuki (1870-1966). Yusa is also recognized for having introduced the two influential Kanazawa-associated figures, their ponderings and accomplishments, to a broad international audience.
Yusa is the winner of the 2nd Kanazawa University International Award in Commemoration of Daisetz T. Suzuki and Kitaro Nishida (KUI), which recognizes her internationally prominent academic accomplishments in the study of the philosophies of these two thinkers.
Story Related to Michiko Yusa:
HINAKO SHIBUNO, Japanese Professional Golfer, Reigning Women’s British Open Champion
Nicknamed the “Smiling Cinderella,” Hinako Shibuno won her first international tournament by taking the cup at the Women’s British Open at the age of 20. All the more astonishing to golf fans in Japan and overseas was the fact that it had been only a year since Shibuno qualified as a professional. Her infectious smile has also won her fans throughout Japan and all over the world.
Stories Related to Hinako Shibuno:
- ‘Smiling Cinderella’ Hinako Shibuno wins AIG Women’s British Open
- Japan’s Champion Lady Athletes and their ‘Smile Power’
SUZUKO HIRANO, Actress, Model, and Hong Kong Supporter
Suzuko Hirano, 25, is a theater actress and kimono model living in Chiba prefecture. She has trained in Japan’s traditional arts of Urasenke chado (Japanese tea ceremony), Ikenobo ikebana (Japanese flower arranging), gagaku (Japanese imperial court music), and ryuteki (Japanese flute) performance. Hirano says she was just a regular person, but on June 13 she felt moved to do something about the dire situation in Hong Kong and found herself taking part in a protest for Hong Kong. For several weeks, Hirano led more protests and gave encouraging speeches in Japan. “To everyone in the world: stand up and fight, for freedom, for the next generation.”
Story Related to Suzuko Hirano:
Tokiwamatsu Gakuen Dance Team
Inspired by Malala Yousafzai, the youngest recipient of the Nobel Peace Prize in 2014, the Tokiwamatsu Gakuen dance team caught the attention of many in attendance with its performance at the 12th All-Japan High School Super Cup Dance Stadium in 2019. The dance team began with the words, “We are Malala!” Then they danced with all their heart and soul, depicting Malala’s indomitable spirit. Tokiwamatsu’s team took on the challenging theme of the importance of independent thought and the freedom to express and communicate, regardless of age or gender.
Story Related to Tokiwamatsu Gakuen Dance Team: | https://japan-forward.com/international-womens-day-2020-japan-forward-reading-list/ |
Kimono Sleeve
September 9th, 2014
A kimono sleeve is a boxy sleeve that is sewn in one piece with the bodice, from the same piece of fabric. The kimono sleeve originated in the traditional Japanese dress, the kimono. The kimono sleeve reached its height of popularity during World War II, when Asian fashion had become widespread in Europe. Today, kimono sleeves are not as common as they used to be, but they are still used in fashion.
Japanese is the official and primary language of Japan. Japanese has a lexically distinct pitch-accent system. Early Japanese is known largely on the basis of its state in the 8th century, when the three major works of Old Japanese were compiled. The earliest attestation of the Japanese language is in a Chinese document from 252 AD.
Japanese is written with a combination of three scripts: hiragana, derived from the Chinese cursive script, katakana, derived as a shorthand from Chinese characters, and kanji, imported from China. The Latin alphabet, rōmaji, is also often used in modern Japanese, especially for company names and logos, advertising, and when inputting Japanese into a computer. The Hindu-Arabic numerals are generally used for numbers, but traditional Sino-Japanese numerals are also very common.
Religion
Shintoism and Buddhism are the primary religions of Japan, though a secular Christmas is widespread, and minority Christian and Islamic communities exist.
Shintoism
Shintoism is an ethnic religion that focuses on ceremonies and rituals. In Shintoism, followers believe that kami, Shinto deities or spirits, are present throughout nature, including in rocks, trees, and mountains. Humans can also be considered to possess a kami. One of the goals of Shintoism is to maintain the connection between humans, nature, and kami. The religion developed in Japan prior to the sixth century CE, after which point followers built shrines to worship kami.
Buddhism
Buddhism developed in India around the 6th and 4th centuries BCE and eventually spread through China and Korea. It arrived in Japan during the 6th century CE, where it was initially unpopular. Most Japanese people were unable to understand the difficult philosophical messages present in Buddhism, however they did have an appreciation for the religion's art, which is believed to have led to the religion growing more popular. Buddhism is concerned with the soul and life after dying. In the religion a person's status was unimportant, as every person would get sick, age, die, and eventually be reincarnated into a new life, a cycle called saṃsāra. The suffering people experienced during life was one way for people to gain a better future. The ultimate goal was to escape the cycle of death and rebirth by attaining true insight.
National character
The Japanese "national character" has been written about under the term Nihonjinron, literally meaning "theories/discussions about the Japanese people". These texts address matters that are normally the concerns of sociology, psychology, history, linguistics, and philosophy, but emphasize the authors' assumptions or perceptions of Japanese exceptionalism. They are predominantly written in Japan by Japanese people, though noted examples have also been written by foreign residents, journalists, and even scholars.
Literature
Early works of Japanese literature were heavily influenced by cultural contact with China and Chinese literature, often written in Classical Chinese. Eventually, Japanese literature developed into a separate style in its own right as Japanese writers began writing their own works about Japan. Since Japan reopened its ports to Western trading and diplomacy in the 19th century, Western and Eastern literature have strongly affected each other and continue to do so.
Visual arts
Japanese calligraphy
The flowing, brush-drawn Japanese rendering of text is itself seen as a traditional art form as well as a means of conveying written information. The written work can consist of phrases, poems, stories, or even single characters. The style and format of the writing can mimic the subject matter, even to the point of texture and stroke speed. In some cases, it can take over one hundred attempts to produce the desired effect of a single character, but the process of creating the work is considered as much an art as the end product itself. This calligraphy form is known as 'shodō' (書道), which literally means 'the way of writing or calligraphy', or more commonly as 'shūji' (習字), 'learning how to write characters'. Commonly confused with calligraphy is the art form known as 'sumi-e' (墨絵), literally meaning 'ink painting', which is the art of painting a scene or object.
Japanese painting
Painting has been an art in Japan for a very long time: the brush is a traditional writing and painting tool, and the extension of that to its use as an artist's tool was probably natural. Japanese painters are often categorized by what they painted, as most of them constrained themselves solely to subjects such as animals, landscapes, or figures. Chinese papermaking was introduced to Japan around the 7th century. Later, washi was developed from it. Native Japanese painting techniques are still in use today, as well as techniques adopted from continental Asia and from the West. Schools of painting such as the Kano school of the 16th century became known for their bold brush strokes and contrast between light and dark, especially after Oda Nobunaga and Tokugawa Ieyasu began to use this style. Famous Japanese painters include Kanō Sanraku, Maruyama Ōkyo, and Tani Bunchō.
Ukiyo-e
Ukiyo-e, literally "pictures of the floating world", is a genre of woodblock prints that exemplifies the characteristics of pre-Meiji Japanese art. Because these prints could be mass-produced, they were available to a wide cross-section of the Japanese populace—those not wealthy enough to afford original paintings—during their heyday, from the 17th to 20th century.
Ikebana
Ikebana (生け花, 活花, or 挿花) is the Japanese art of flower arrangement. It has gained widespread international fame for its focus on harmony, color use, rhythm, and elegantly simple design. It is an art centered greatly on expressing the seasons, and is meant to act as a symbol to something greater than the flower itself.
Traditional clothing
Traditional Japanese clothing distinguishes Japan from all other countries around the world. The Japanese word kimono means "something one wears", and kimonos are the traditional garments of Japan. Originally, the word kimono was used for all types of clothing, but eventually it came to refer specifically to the full-length garment also known as the naga-gi, meaning "long-wear", that is still worn today on special occasions by women, men, and children. The earliest kimonos were heavily influenced by traditional Han Chinese clothing, known today as hanfu (漢服, kanfuku in Japanese), through Japanese embassies to China, which resulted in the extensive adoption of Chinese culture by Japan as early as the 5th century AD. It was during the 8th century, however, that Chinese fashions came into style among the Japanese, and the overlapping collar became particularly popular in women's fashion. Kimono in this meaning, plus all other items of traditional Japanese clothing, is known collectively as wafuku, meaning "Japanese clothes", as opposed to yofuku (Western-style clothing). Kimonos come in a variety of colors, styles, and sizes. Men mainly wear darker or more muted colors, while women tend to wear brighter colors and pastels and, especially among younger women, complicated abstract or floral patterns.
The kimono of a married woman (tomesode) differs from the kimono of an unmarried woman (furisode). The tomesode sets itself apart because its patterns do not go above the waistline. The furisode can be recognized by its extremely long sleeves, spanning anywhere from 39 to 42 inches; it is also the most formal kimono an unwed woman wears. The furisode advertises that a woman is not only of age but also single. The style of kimono also changes with the season: in spring, kimonos are vibrantly colored with springtime flowers embroidered on them. In autumn, kimono colors are not as bright, and autumn patterns are used. Flannel kimonos are most commonly worn in winter; they are made of a heavier material and are worn mainly to stay warm. One of the more elegant kimonos is the uchikake, a long silk overgarment worn by the bride in a wedding ceremony. The uchikake is commonly embellished with birds or flowers using silver and gold thread. Kimonos do not come in specific sizes as most Western dresses do. The sizes are only approximate, and a special technique is used to fit the dress appropriately.
The obi is a very important part of the kimono. The obi is a decorative sash worn by Japanese men and women; although it can be worn with many different traditional outfits, it is most commonly worn with the kimono. Most women wear a very large, elaborate obi, while men typically don a thinner, more conservative one. Most Japanese men only wear the kimono at home or in a very relaxed environment; however, it is acceptable for a man to wear the kimono when he is entertaining guests in his home. For a more formal event, a Japanese man might wear the haori and hakama, a half coat and divided skirt. The hakama is tied at the waist, over the kimono, and ends near the ankle. Hakama were initially intended for men only, but today it is acceptable for women to wear them as well. Hakama can be worn with most types of kimono, excluding the summer version, the yukata—the lighter and simpler casual version of the kimono often worn at Japanese summer festivals. Formal kimonos are typically worn in several layers, with the number of layers, visibility of layers, sleeve length, and choice of pattern dictated by social status, season, and the occasion for which the kimono is worn. Because of the mass availability of Western clothing, most Japanese people wear Western-style clothing in their everyday life, and kimonos are mostly worn for festivals and special events. As a result, most young women in Japan are not able to put on a kimono by themselves, and many older women offer classes to teach them how to don the traditional clothing.
Happi is another type of traditional clothing, though not famous worldwide like the kimono. A happi (or happy coat) is a straight-sleeved coat typically imprinted with the family crest, and was a common coat for firefighters to wear. Japan also has very distinct footwear. Tabi, an ankle-high sock, is often worn with the kimono. Tabi are designed to be worn with geta, a type of thonged footwear. Geta are sandals mounted on wooden blocks, held to the foot by a piece of fabric that slides between the toes. Geta are worn by both men and women with the kimono or yukata.
Installation arts
Architecture
Japanese architecture has as long a history as any other aspect of Japanese culture. Originally heavily influenced by Chinese architecture, it has developed many differences and aspects that are indigenous to Japan. Examples of traditional architecture are seen at temples, Shinto shrines, and castles in Kyoto and Nara. Some of these buildings are constructed with traditional gardens, which are influenced by Zen ideas. Some modern architects, such as Yoshio Taniguchi and Tadao Ando, are known for their amalgamation of Japanese traditional and Western architectural influences.
Gardens
Garden architecture is as important as building architecture and is very much influenced by the same historical and religious background. A primary design principle of a garden is the creation of the landscape based on, or at least greatly influenced by, three-dimensional monochrome ink (sumi) landscape painting, sumi-e or suibokuga. In Japan, the garden has the status of artwork.
Sculpture
Traditional Japanese sculptures mainly focused on Buddhist images, such as Tathagata, Bodhisattva, and Myō-ō. The oldest sculpture in Japan is a wooden statue of Amitābha at the Zenkō-ji temple. In the Nara period, Buddhist statues were made by the national government to boost its prestige. These examples are seen in present-day Nara and Kyoto, most notably a colossal bronze statue of the Buddha Vairocana in the Tōdai-ji temple.
Wood has traditionally been used as the chief material in Japan, as in traditional Japanese architecture. Statues are often lacquered, gilded, or brightly painted, although little of this decoration now remains on the surfaces. Bronze and other metals were not commonly used. Other materials, such as stone and pottery, have had extremely important roles in folk beliefs.
Music
The music of Japan includes a wide array of performers in distinct styles both traditional and modern. The word for music in Japanese is 音楽 (ongaku), combining the kanji 音 "on" (sound) with the kanji 楽 "gaku" (enjoyment). Japan is the second largest music market in the world, behind the United States, and the largest in Asia, and most of the market is dominated by Japanese artists.
Local music often appears at karaoke venues, where tracks are licensed from record labels. Traditional Japanese music is quite different from Western music, being based on the intervals of human breathing rather than mathematical timing. In 1873, a British traveler claimed that Japanese music "exasperate[s] beyond all endurance the European breast."
Performing arts
The four traditional theatres of Japan are noh (or nō), kyōgen, kabuki, and bunraku. Noh had its origins in the union of sarugaku with music and dance, shaped by Kan'ami and Zeami Motokiyo. Among its characteristic aspects are the masks, costumes, and stylized gestures, sometimes accompanied by a fan that can represent other objects. Noh programs are presented in alternation with kyōgen pieces—traditionally five noh plays to a program, though currently groups of three are common.
Kyōgen, of humorous character, has older origins, in 8th-century entertainment brought from China, which developed into sarugaku. In kyōgen, masks are rarely used, and even though the plays can be associated with those of noh, currently many are not.
Kabuki appeared at the beginning of the Edo period, out of the performances and dances of Izumo no Okuni in Kyoto. Because of prostitution among kabuki actresses, the participation of women in the plays was forbidden by the government in 1629, and female characters came to be played only by men (onnagata). Recent attempts to reintroduce actresses into kabuki have not been well accepted. Another characteristic of kabuki is the use of makeup for the actors in historical plays (kumadori).
The Japanese puppet theater bunraku developed in the same period as kabuki, in a relationship of competition and mutual contribution involving actors and authors. The origins of bunraku, however, are older, reaching back to the Heian period. In 1914 the Takarazuka Revue appeared, a company composed solely of women, which introduced the revue to Japan.
Sports and leisure
In the long feudal period governed by the samurai class, some of the methods used to train warriors were developed into well-ordered martial arts, referred to collectively in modern times as koryū. Examples include kenjutsu, kendo, kyūdō, sōjutsu, jujutsu, and sumo, all of which were established in the Edo period. After the rapid social change of the Meiji Restoration, some martial arts changed into modern sports, called gendai budō. Judo was developed by Kanō Jigorō, who studied several schools of jujutsu. These sports are still widely practiced in present-day Japan and other countries. Baseball, association football, and other popular Western sports were imported to Japan in the Meiji period and are commonly practiced in schools alongside traditional martial arts. Baseball, soccer, and table tennis are among the most popular sports in Japan. Association football gained prominence after the J League (Japan Professional Football League) was established in 1991, and Japan co-hosted the 2002 FIFA World Cup. In addition, there are many semi-professional organizations sponsored by private companies, for example in volleyball, basketball, rugby union, and table tennis.
Cuisine
Through a long culinary past, the Japanese have developed sophisticated and refined cuisine. In more recent years, Japanese food has become fashionable and popular in the United States, Europe, and many other areas. Dishes such as sushi, tempura, noodles, and teriyaki are some of the foods that are commonly known. The Japanese diet consists principally of rice; fresh, lean seafood; and pickled or boiled vegetables. The healthy Japanese diet is often believed to be related to the longevity of Japanese people.
Popular culture
Japanese popular culture not only reflects the attitudes and concerns of the present day, but also provides a link to the past. Popular films, television programs, manga, music, anime and video games all developed from older artistic and literary traditions, and many of their themes and styles of presentation can be traced to traditional art forms. Contemporary forms of popular culture, much like the traditional forms, provide not only entertainment but also an escape for the contemporary Japanese from the problems of an industrial world.
When asked how they spent their leisure time, 80 percent of a sample of men and women surveyed by the government in 1986 said they averaged about two and a half hours per weekday watching television, listening to the radio, and reading newspapers or magazines. Some 16 percent spent an average of two and a quarter hours a day engaged in hobbies or amusements. Others spent leisure time participating in sports, socializing, and personal study. Teenagers and retired people reported more time spent on all of these activities than did other groups.
Many anime and manga series are very popular around the world and continue to gain popularity, as are Japanese video games, fashion, and game shows.
In the late 1980s, the family was the focus of leisure activities, such as excursions to parks or shopping districts. Although Japan is often thought of as a hard-working society with little time for leisure, the Japanese seek entertainment wherever they can. It is common to see Japanese commuters riding the train to work, enjoying their favorite manga, or listening through earphones to the latest in popular music on portable music players. A wide variety of types of popular entertainment are available. There is a large selection of music, films, and the products of a huge comic book industry, among other forms of entertainment, from which to choose. Game centers, bowling alleys, and karaoke are popular hangout places for teens while older people may play shogi or go in specialized parlors. Together, the publishing, film/video, music/audio, and game industries in Japan make up the growing Japanese content industry.
See also
- Cool Japan
- History of Japan
- Imperial House of Japan
- Tourism in Japan
- Japanese language
- Etiquette in Japan
- Japanese cuisine
- Japanese aesthetics
- Japanese music
- Science and technology in Japan
- Japanese martial arts
- Yamato damashii
- Religion in Japan
Copyright
- This page is based on the Wikipedia article Culture of Japan; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA. | https://thereaderwiki.com/en/Culture_of_Japan |
In Japan, paintings that capture the beautiful appearance, or internal beauty, of women are known by a specific term, bijinga. As an artist whose works are mainly figurative, with women a particularly common motif, Ayana Otake has built substantial popularity among fans of the bijinga genre. Each stroke of her paintings is given the utmost care, weaving together lines and pigments into images of beautiful women with alluring charm. The kimono worn by the women in her paintings are an expression of Japanese notions of beauty and spirituality, granting her subjects an air that is dignified yet elevated above the everyday. | https://www.seizan-gallery.com/artist-ayana-otake
Japan experienced the popularity of these early rock 'n' roll styles as did much of the world at that time, but it was the revival in the late 70s that brought the fashions still associated with the Roller-zoku. Japanese bands like the Cools and Carol were at the forefront of this musical revival and began associating themselves with leather jackets, greased-back hair and motorcycles.

Unlike many other fashion tribes, these greasers are often of all ages, from the young to the old. An interesting aspect of this tribe is some members' predilection for dancing, which can be seen being practiced in Tokyo parks on weekends. Much like early hip hop was associated with breakdancing, Roller-zoku have their own brand of dancing, incorporating classic rock 'n' roll dancing as well as intricate footwork, acrobatics, and theatricality.

This portrait series was created in Tokyo over 5 weeks in 2013 and 2015, shooting at parks, parties, bars and music venues. | https://www.lensculture.com/projects/153185-rockers-tokyo-rollerzoku-gangs
Writer: Hiro Ariga
Hi I am Hiro!
Born in Kagoshima but brought up in Sydney, Australia.
I am currently back living in Japan with my husband and my little baby.
I am an internationally licensed kimono teacher.
I love to travel around Japan and have been fortunate enough to encounter many of the country's beautiful aspects along the way.
Although I am still in the midst of my adventure, I would like to share some of my findings with you here! I hope the beauty of Japan comes through in my articles, even if only to a small extent!
Thank you very much for reading! | https://taiken.co/writer/hiro-ariga/ |
If there is one single piece of clothing that is associated with Japan, it is the kimono (literally translated as "the thing worn"). The kimono and its variations have a very long history in Japan and are still worn today, although generally only for celebrations and wedding ceremonies. The history of the kimono's development is closely tied to the development of Japanese textiles and techniques for making clothing.
The kimono emphasizes the material from which it is made. Assembled from simple rectangular cut pieces of cloth, the shape of the body is both concealed and virtually ignored by the structure of the garment. Western clothing is constructed of material that is cut precisely to conform to and accentuate the shape of the body, thus eliminating the space between body and garment.
The kimono is made by sewing together simple rectangles cut in straight lines from relatively narrow widths of hand-woven silk or hemp; tailoring, such as that found in Western-style garments, is not used. This makes it easy to take the kimono apart, replace worn areas, and put it back together again.
The Jomon, Yayoi and Tumulus Periods
The Asuka era
The Nara era
The Heian era
The Kamakura era
The Muromachi era
The Momoyama era
The Edo period
The Meiji era
Later eras
Making the kimono
There were different ways of decorating the kimono. Among these are:
Immersion dyeing: This is used when you want the entire cloth to be of the same color. Once the dye is ready the cloth is immersed in it over and over until you get the exact color that you want. Once that happens the cloth is removed and dried. Sometimes certain materials are used to make sure the dye penetrates the cloth properly.
Painting: You outline a pattern, then use dyes and mordants to brush in the pattern, and then use freehand painting to finish it.
Paste resist: A drawing of the design is made on the fabric, then paste is applied to the areas to be reserved, and then the dyeing is done.
Shibori: Sometimes there are areas of the cloth that you don't want dyed at first, so it's necessary to somehow block off those areas from the dye when you initially immerse it in the dye bath.
Yuzen dyeing: The silk is stretched taut, then a drawing is painted onto the white silk with a water-soluble blue liquid. The drawing is traced over with narrow lines of resist paste. The name comes from Miyazaki Yūzensai, a fan painter of the seventeenth century.
Kimono colors
It is obvious that on many of the kimonos color is a very important aspect. Through the Heian period colors were considered to have a spiritual force. Many of the plants used for dyes for the kimonos also had medicinal uses, adding even more importance to the colors used.
Color names include murasaki (purple); akane (red); ai (indigo); benibana (scarlet) and cha, or brown.
Black, which was associated with wisdom, was believed to be protection against evil. Purple represented elegance and was connected with the highest ranks of people. Browns and greys were the traditional colors for commoners.
Storing the Kimono
Since kimonos are constructed so that they can be taken apart and put back together, a kimono that has faded has sometimes been taken apart, re-dyed, and reassembled, although this is rarely done anymore.
Kimonos are also never dry cleaned. You also never hang the kimono on a hanger or seal it in plastic.
To clean a kimono, first shake off any dust, then vacuum the surfaces to remove any that remains. Then refold the kimono and put it away.
Properly storing a kimono requires wrapping it in rice paper called tatoushi then laying the kimono flat in a drawer. Ideally this would be something called a kiri tansu which is a chest made of lightweight wood that repels moisture.
| http://www.bookmice.net/darkchilde/japan/jkimono.html
Bangkok Study in Japan Fair 2015
On 13 December Osaka City University participated in the Study in Japan Fair 2015, organized by the Japan Student Services Organization (JASSO) in Bangkok, Thailand. A total of 66 institutes participated, including 21 national universities, 3 public universities and 28 private universities. The fair attracted 1,961 visitors.
OCU participated with a booth, where interested students could obtain information about OCU faculties, courses, facilities, admission process as well as get individual advice, with OCU staff able to talk with students in Thai, English and Japanese. There was a small but steady flow of students visiting our booth, with an especially high number interested in the Faculty and Graduate School of Business.
Simultaneously, JASSO offered seminars and lectures throughout the day explaining to students how to prepare for study in Japan. There were also various demonstrations, such as kendo, kimono wearing and origami.
Japan and Japanese culture enjoy a high degree of popularity in Thailand. We hope we have sparked new interest among Thai students in coming to study at OCU. | https://www.osaka-cu.ac.jp/en/news/2015/bangkok-study-in-japan-fair-2015
Weddings in Japan often have many twists that may come as a surprise to foreigners because of differing cultural beliefs. When attending or having a wedding in Japan, here are a few things that you should know about the uniqueness of Japanese weddings.
Table of Contents
Marriage is considered one of a person's biggest milestones in life. It is a celebration and union of two halves of one whole - two people who wish to spend the rest of their lives together and become a family. Wedding ceremonies are not something to be taken lightly, and marriage in Japan is no exception. The diverse religious backgrounds of the Japanese bring color and fanfare to wedding ceremonies. Because Japan is a melting pot of different religious and cultural backgrounds, one can expect a variety of wedding practices. The majority of traditional wedding ceremonies have Buddhist, Shinto, and Christian influences.
Weddings in Japan may also be traditional or modern, depending on the couple's upbringing and influences. Traditional wedding ceremonies in Japan are typically Shinto-style weddings, where the couple is married at a shrine. On the other hand, the modern concept of the wedding ceremony in Japan reflects the Western concept of marriage (think frilly white bridal dresses in a beautiful garden, with flower decorations everywhere).
Another interesting point is that weddings in Japan are actually optional. There have been cases where the signing of the marriage documents was the ceremony itself, because signing the marriage contract is what legally binds the couple together—and for some Japanese couples, that is enough.
No matter the way of being wed not just in Japan but practically anywhere else in the world, one thing is for sure, that marriage is a special occasion for any couple as it binds the two together into a lifelong commitment. A wedding, on the other hand, is a ceremony that would celebrate or commemorate the couple's binding into the lifelong commitment that is marriage.
Since Japan offers diverse ways of wedding traditions and ceremonies because of religious and cultural influences, let’s take a look at some of the more common traditional Japanese wedding ceremonies.
The Shinto-style wedding ceremony is one of the most common traditional weddings in Japan. The venue is usually the main floor of a shrine, and a priest officiates the ceremony. These traditional Japanese ceremonies are usually very private, attended only by the bride and groom's most intimate family members and friends. The ceremony starts with the priest performing a purification ritual for the couple and asking the gods and goddesses of the shrine for their blessing, including watching over the couple so that they have a fruitful marriage. The ceremony ends when the couple takes three sips of sake from the cup; this portion of the ritual is called san-san-kudo.
The traditional wedding outfit for the bride is either a pure white kimono symbolizing the bride's purity, a colorful ensemble, or a black-and-white patterned kimono, which was the usual wedding outfit for Japanese nobility in the past. The groom typically wears a traditional outfit called a hakama, worn over a black kimono and paired with a black jacket called a haori.
The Buddhist-style wedding ceremony in Japan is a mix between the Shinto- and Christian-style ceremonies. The main difference is that the ceremony is held inside a temple instead of a church. The couple's entrance is accompanied by beautiful string instruments reflective of the Buddhist faith. The activities of a Buddhist-style wedding are similar to the Christian style in that there is an exchange of rings and praying together; however, the prayers consist of mantras and sutras.
Additionally, the couple's parents 'bless' their children by lighting incense. The wedding is then commemorated by the couple sipping sake from a cup. The bride can wear either a kimono or a modern Western-style wedding dress, but most wear a kimono to the Buddhist-style ceremony and may change for the reception afterward.
The Christian-style wedding has been the most common and preferred choice among Japanese wedding ceremonies since the 1990s. According to the annual survey by Zexy, a large bridal company in Japan, 51% of weddings in the country in 2019 were Christian-style weddings, a share reaching 56% in the Tokyo metropolitan area.
※Zexy, Zexy Wedding Trend Survey 2019, pg 112
The flow of the wedding itself is strikingly similar to Western wedding ceremonies, just held in Japan. It follows Protestant liturgy, with prayers, hymns, Bible readings, and the exchange of wedding vows; however, these are often included regardless of religious belief, whereas overseas, non-Christian couples may skip the religious aspects of a church wedding. Common elements also include the bride's father escorting her to her husband as a sign of his blessing, and the bride's mother lowering her daughter's veil as a sign of her blessing.
Although there are many similarities between traditional Japanese weddings and other western types of weddings, some of the differences would be reflected in the wedding reception. It is very common in western types of weddings to have a lot of dancing and singing during the wedding reception. Japanese weddings would usually consist of big speeches from close relatives, friends, and colleagues wishing the couple good luck on their journey as husband and wife, and there’s usually a strictly planned agenda.
Lastly, at Japanese weddings, you can only be invited to the actual wedding ceremony if you are a close family member or an intimate friend. In contrast, western types of wedding traditions would usually involve a larger group of people. Otherwise, you will just be invited to the wedding reception afterward, called a hirouen. However, don’t be mistaken that this is a mere party; it can be quite formal. You may not be able to approach the wedded couple freely - usually there is a photo opportunity for each table or group to take photos together and chat briefly with the happy couple.
It is important to note the following etiquette when attending wedding ceremonies in Japan:
The invitation is just for the person to whom it was addressed. This means that only that person is invited, and bringing a plus one or a date is not expected unless specifically noted on the invitation.
In Japan, gifts of cash are much appreciated and expected, rather than gift items. This also makes it easier for the guests, as they do not have to think about what the couple would like as a wedding present. However, be aware that there are rules about the amount: usually 30,000 yen for a single person and 50,000 yen for couples, presented in a decorative envelope called a shugi-bukuro. The amount may seem like a lot, but weddings are very expensive affairs that guests essentially help pay for.
Guests are informed ahead of time of the dress code, which is usually formal attire: women are encouraged to wear either a kimono (to Shinto or Buddhist weddings) or modest dresses (no bare shoulders and not too short), and men are encouraged to wear a suit and tie, unless otherwise specified.
Like anywhere else, punctuality is expected for all guests as a sign of respect for the couple. There is also assigned seating.
Guests receive nice gifts at the end, which can range from food to household items.
To be married is a divine process as it symbolizes the union of two individuals who love and care for each other deeply and truly. Weddings in Japan are a truly unique experience, whether a traditional or modern setup. So if you have the opportunity to attend a Japanese wedding or even get married in Japan, it will surely be a romantic experience to remember as Japanese wedding ceremonies are truly reflective of the diverse cultural and traditional beliefs that make Japan unique.
To read about how to get married in Japan, please read our article here!
Guide to Securing Marriage Documents in Japan for Foreigners
| https://we-xpats.com/en/guide/as/jp/detail/4331/
Japan has a fascinating and multifaceted culture. It is a culture that is steeped in traditions dating back thousands of years and at the same time in a constant state of rapid flux, with continually shifting fads, fashions, and technological developments. Yet, at times it seems that the more Japan evolves, the more it remains the same. Many of Japan’s traditional arts embody these characteristics as well and reflect the mutable society we have so often observed.
Telling a story through the ages
Rakugo, Japan’s traditional art of storytelling is no exception to this phenomenon. Rakugo as we know it today dates back to the Edo period (1603-1867), when the art form gained a foothold in Edo (Tokyo), Osaka, and Kyoto, with each city developing its own distinctive style. It became a popular form of entertainment after the establishment of the first vaudeville-type urban theaters known as yose in 1798.
These theatres provided entertainment for ordinary people and at the height of their popularity there were 175 yose operating in Edo alone. Today there are only four yose still in existence in Tokyo (the Suzumoto Engeijo in Ueno, the Shinjuku Suehirotei in Shinjuku, the Asakusa Engei Hall in Asakusa, and the Ikebukuro Engeijo in Ikebukuro), one in Osaka (the Temma Tenjin Hanjo Tei), and one each in Kobe (the Kobe Shinkaichi Kirakukan), Nagoya (the Osu Engeijo), and Sendai (Hanaza).
Rakugo's origins can be traced back to the 17th century, to the humorous anecdotes that were used during long Buddhist sermons as an effective way to keep people awake and alert. In 1623, at the urging of Kyoto governor Itakura Shigemune, a Buddhist monk named Anrakuan Sakuden (1554-1642) compiled over 1,000 of these anecdotes in a work called "Seisuisho" ("Laughs to Wake You Up").
The components of a rakugo performance
Today, Sakuden is considered to be the father of rakugo. The presentation and style of rakugo performances have remained unchanged since the late 18th century. A rakugo performance is rather minimalistic and features a lone storyteller (rakugoka) dressed in a kimono, kneeling in the seiza position on a floor cushion (zabuton) that is placed on an elevated stage or platform (koza).
The performer relies solely on a paper fan (sensu) and a small hand towel (tenugui) as props to help him convey the story to the audience. The stories are based on a wide range of topics, from comical to sentimental, and involve conversations between multiple characters. The storyteller switches fluidly and seamlessly from one character to another, changing his voice, facial expression, mannerisms, and accent to fit the character who is speaking. A slight turn of the head and a change in pitch are used to indicate a switch from one character to another.
Although rakugo performances follow the stylised conventions established long ago, the storyteller's freedom to improvise and incorporate modern vernacular and references to recent events has contributed to the popularity of the art form. New stories are constantly being created and added to the traditional repertoire of over 300 classic stories. When rakugo was first performed in Japan, it was intended to be performed by a Japanese performer for a Japanese audience.
However, in the early 1890s, Australian-born Henry Black, who is better known by his stage name, Kairakutei Black, began performing rakugo in Japan to a Japanese audience. He was Japan’s first foreign-born rakugo performer. His audience was surprised to see a foreigner who not only had learned Japanese but had become fluent enough to make them laugh.
In the late 20th century, there was another notable “first” in rakugo history; a Japanese-born rakugo performer who performed in English. His stage name was Katsura Shijaku II. He made his debut in 1962 and went on to become the first rakugo storyteller to perform in English and to take the art overseas to foreign audiences. Shijaku started studying English as a hobby in the early 1980s and initially began translating rakugo stories to improve his English skills. He gave his first English-language rakugo performance in 1983.
Throughout his career, he toured various English-speaking nations including Australia, New Zealand, Canada, the United Kingdom, and the United States. His repertoire included approximately 60 Japanese stories and 15 English stories, some of which he wrote himself. In recent times, Japan has seen a rise in rakugo performed in English.
Performing rakugo outside of Japan
Although rakugo is no longer limited to Japanese-speaking audiences, it is less known outside of Japan compared to other forms of traditional Japanese theater. However, rakugo performers like Katsura Sunshine and Kanariya Eiraku, the founder of the Canary English Rakugo Company, are working hard to change that. Sunshine is a Canadian-born rakugo storyteller who gives performances both in Japan and abroad.
He brought his rakugo act to New York's SoHo Playhouse in November 2017 and continues to be a popular player in the world of rakugo. The Canary English Rakugo Company was established in 2007 and currently has more than 50 members. They hold regular recitals in Tokyo and began touring overseas in 2015.
Over the years, they have given performances to enthusiastic audiences in the United States, the United Kingdom, Georgia, Kazakhstan, Denmark, Laos, New Zealand, and Australia. Eiraku, born in Aichi prefecture, has been studying and performing rakugo for over thirty years.
After earning his bachelor’s degree in English at Sophia University, Faculty of Foreign Studies, in 1981, he studied rakugo for a few years at Tatekawa-ryu, founded by Tatekawa Danshi (1936-2011). He established his Japanese rakugo classes in 1991 and his English rakugo classes in 2007. In 2020, he founded the English Rakugo Association in Tokyo. One of the association’s goals is to spread English rakugo all over the world and have it become as popular as sushi, manga, and sumo in Western culture.
Bio: Kristine Ohkubo is a Los Angeles-based author who believes that writing from other cultural perspectives encourages empathy and understanding, and at the same time broadens our knowledge of the events that have unfolded over the years. She developed a deep love and appreciation of Japanese culture, people, and history early in life. Her frequent travels to Japan have enabled her to gain deep insight into this fascinating culture, which she shares with you through her work. | https://yamatomagazine.home.blog/2021/02/02/guest-post-rakugo-ever-evolving-yet-never-changing/ |
I’ve had little time to keep up with my blogging, but at least I have finally been able to begin posting my lovely little collection of haori jackets for sale in my kimono shop. The first five were just listed, so do click the “Buy Vintage Kimono” on the menu above to drop by to take a look. They would make wonderful holiday gifts for yourself or anyone else. I’ll try to add a few more every couple of days until I’ve gotten my entire stock listed.
Gagaku by moonlight
Under the gorgeous full moon a few nights ago, my friend Judith Clancy and I attended a gagaku concert in the garden surrounding Shimogamo shrine. Areas of the garden were lit with flood-lamps, which guided our way as we walked through the large gardens toward the stage outside one of the main buildings of the shrine. Though the autumn nights are definitely getting cooler, it was still lovely to join the audience clustered before the open air stage. Many in the audience had also attended Tea Ceremony before the concert and were still dressed in kimono.
Gagaku is the oldest form of classical music in Japan, having been brought to Japan from China in the 7th century A.D., a time when Japan was busily assimilating extensive amounts of Chinese cultural and political practices. The word gagaku translates as “elegant music” and is played by an ensemble of 3 percussion instruments and 3 wind instruments. As the tradition developed in Japan, gagaku was performed by hereditary guilds of musicians and even today many members of the Imperial Palace Music Department are descendants of the old guilds.
Gagaku is often accompanied by a classical dance called bugaku. And of course, the spectacular costumes worn by both the muscians and dancers, as well as the draperies gracing the stage, added a visual treat to the evening’s entertainment. | http://www.wabei-mono.com/blog/2008/10/ |
During my time in Japan so far I have experienced two tea ceremonies.
The first was with a few of my students and, from what I could tell from the laughing and teasing between the students, it was very informal.
The second was a class in Hiroshima for just me and my parents during their visit from the UK. This was more formal as we were dressed in kimono and guided through the etiquette by our wonderful host, Yuki, an experienced Chado instructor. We were taught the fascinating history of the tea ceremonies and the reasons for the traditions.
My family and I are obsessed with tea. Back in London we have several drawers filled with different types (I like spicy teas, but my sisters prefer sweeter or more bitter flavours). Throughout my year in Japan I have amassed my own collection, which I will keep growing to get me through the cold winter months.
My love of tea has led to me truly enjoying Japanese tea ceremonies. It is a very calming experience where I can happily taste delicious tea and snacks. I hope I can continue to learn more about tea in Japan, and my next goal is to visit a tea plantation (as well as to buy even more tea). | https://yosano-kankou.net/alteyes/tea-ceremony/ |
# Serge Mouangue
Serge Mouangue is a Cameroonian-born designer and artist based in Paris, France. He has worked in various design fields and is known for his series of Japanese/Cameroonian art pieces, which have gained him recognition in the art world. His art style is based upon ideas of shared belonging, identity, and the similarities between cultures.
## Early life and education
Mouangue was born in Yaoundé, Cameroon. When Mouangue was young, his parents moved the family to Paris, into a disadvantaged suburb inhabited mostly by immigrants. Mouangue's father wanted him to become a judge or engineer, but eventually accepted Mouangue's choice to develop his creative talent. Mouangue later attended schools for interior and industrial design. As part of his course at the ENSCI in Paris, he travelled to Australia on a placement year. There he met and worked with Pritzker Prize-winning architect Glenn Murcutt. Whilst in Australia, he also met his wife and had his first child. He later travelled to China to work on footwear design.
## Career
Following his schooling, whilst in Australia, Mouangue was approached by Renault and asked to join their team as a designer. He returned to France to work with the company and was later sent to work in Japan as part of the Renault-Nissan Alliance, joining Creative Box Inc., a think tank run by Nissan. While working in Japan, Mouangue worked on concept cars such as the 'Kwid', a car tailored to the Indian market. Mouangue worked in Japan for five years, but has since stopped working for Renault and now operates as a small business start-up and design correspondent, currently working closely with sustainable materials developer "WooDoo".
Mouangue has continuously drawn and designed in his own time, alongside his career. In Japan, he chose to embark on a side project aimed at portraying his cultural experiences as a Cameroonian in Japan. His first major artistic project was a series of kimono, developed in 2007 with a couple of traditional kimono makers: first the Tokyo-based kimono designer Kururi, and then the Kyoto-based kimono and fabric designer and producer Odasho. His creative platform "Wafrica" was developed as a way to keep creative ownership of his work. Later art pieces such as "Blood Brothers" gained critical acclaim and were exhibited internationally. Mouangue became a fellow of TED in 2011. Subsequent works include a range of scents, live performances, and collaborative installations with Toyota.
In November 2020, Mouangue participated in a Chatham House UK-Africa investment summit to discuss methods of sustainable investment and international cooperation.
## Artistry
Mouangue has developed a term he calls the 'third aesthetic', which he describes as an in-between, collaborative space that is created when two cultures interact. The idea derives from Mouangue's first experiences in Japan. He noticed that features of Japanese and Cameroonian culture, such as animism, spirituality, and the codification of symbols, were shared, and although there were also large differences, there were enough similarities to make Mouangue question ideas of identity and cultural belonging. The term 'third aesthetic' illustrates the main goal and ideological structure of his work. Mouangue dislikes attachment to rigid identities and, through his art, seeks to demonstrate a universal sense of belonging.
'Wafrica' is composed of the term wa (和, "peace" or "harmony", later coming to refer to Japan during the reign of Empress Genmei (707–715 CE)), and the word Africa. The term for Mouangue captures both his Cameroonian, Africa heritage, and his experience in Japan.
### Kimono
Mouangue developed a set of kimono first in collaboration with Japanese kimono maker Kururi, and later with kimono maker Odasho. They were made from West African fabrics manufactured in the Netherlands, combined with classic features of Japanese kimono design. The garments focus on the mixing of two vivid cultures and aim to make the viewer question the rules of identity, opening them up to the idea of a global culture.
The set of kimono have since been shown at the Fesman festival in Dakar, 2010; the Museum of Art and Design in New York; the Museum der Kulturen, Basel; the Etnografiska museet, Stockholm; Tokyo; the Nishi Hongan-ji temple in Kyoto; TICAD VII, Nairobi, 2017; the Charles H. Wright Museum in Detroit, 2018; the Maison de la Culture du Japon à Paris; and the Victoria and Albert Museum exhibition on kimono in 2020. They were also worn by actress Victoria Abril at Cannes in 2017. On 20 July 2021, Mouangue's work was featured in the BBC series "Secrets of the Museum", which showed the preparation and styling choices of the kimono, ready for its tour as part of the "Black Thread and Kimono – Kyoto to Catwalk" event, run by the V&A.
### Blood Brothers
Blood Brothers is a series of wooden human-like sculptures finished in traditional Japanese lacquer work. The sculptures were sourced from pygmy craftsmen in Cameroon and taken to Japan, where they were finished with a lacquer coating by Okawara Masaru, a traditional lacquer worker. Okawara usually only takes commissions from the Japanese Emperor but agreed to help Mouangue with this one-off piece. Mouangue later presented the sculptures as a tribute to Japan following the tsunami in 2011. Blood Brothers took two years to complete.
The Blood Brothers were exhibited at the Museum of Art and Design in New York. Mouangue was offered $420,000 by the museum for the artwork but turned down the offer. Blood Brothers have also been shown in La Galerie Paris 1839, Hong Kong, in June 2018.
### Other projects
#### Tea ceremony
Mouangue has also composed live performances to illustrate his ideas of the third aesthetic and the Japanese/Cameroonian relationship. In 2009, at the French Institute in Japan, Mouangue put together a performance of a tea ceremony accompanied by Senegalese musicians. Those acting as hosts wore Mouangue's kimono, and the ceremony also featured a naked woman posing as a participating spirit.
#### Cosmos
Cosmos is a set of Nigerian fertility masks, finished by Japanese lacquer worker Nagatoshi Onishi.
#### Hanekaze
Hanekaze, translating to "feather wind" in English, is a collaborative piece between Mouangue, the Toyota Europe Design Development Centre and Eric Charles-Donatien. The piece is meant to represent movement and shared heritage. It was exhibited at the Maison du Japon in Paris from February 18 to March 31, 2020.
#### Golgoth
For another transformation of an African mask—this time from the Bambara people of Mali—Mouangue again enlisted Masaru Okawara to finish the mask with a lacquer coating.
#### Seven Sisters
"Seven Sisters" is an installation displaying a group of 14 women wearing masks and robes. The masks are Punu masks from Gabon. The robes are Cameroonian, made from a Ndop textile by the Bamileke people. The installation also aims at stimulating a feeling of shared heritage.
#### Fragrance
Mouangue is in the process of developing a fragrance which mixes botanical fragrances from African rainforests and Japanese wildflowers. | https://en.wikipedia.org/wiki/Serge_Mouangue |
Get a behind-the-scenes insight into Europe’s first major exhibition dedicated to kimono in this special online discussion exploring ‘Kimono: Kyoto to Catwalk’ at the Victoria and Albert Museum.
Exhibition curator Anna Jackson is joined in conversation by Japan House London’s Programming Director Simon Wright to discuss the development of this exhibition which reveals the sartorial, aesthetic and social significance of the kimono from the 1660s to the present day.
Discover how the garment has been subject to both local and global reinvention, earning it a unique and fascinating place within the history of fashion.
Guests will also have the opportunity to ask questions during the live event.
Learn more about the exhibition Kimono: Kyoto to Catwalk on the website of the V&A.
(Exhibition postponed due to temporary closure of the V&A)
About Anna Jackson
Anna Jackson is Keeper of the Asian Department at the Victoria and Albert Museum and curator of ‘Kimono: Kyoto to Catwalk’ and editor of the accompanying book. She has written widely on Japanese textiles and dress, her publications including ‘Japanese Country Textiles’ (1997), ‘Japanese Textiles in the Victoria and Albert Museum’ (2000), and ‘Kimono: The Art and Evolution of Japanese Fashion – the Khalili Collection’ (2015). Her other major research interest is the cultural relationship between Asia and the West, and she has contributed her knowledge to several V&A exhibitions and their related publications including ‘Art Nouveau 1890-1914’ (2000) and ‘Art Deco 1914-1939’ (2003). In 2004 she was co-curator of the V&A exhibition ‘Encounters: The Meeting of Asia and Europe 1500-1800’. | https://www.japanhouselondon.uk/whats-on/2020/kimono-kyoto-to-catwalk-in-conversation-with-the-v-and-as-anna-jackson/ |
Hi, I’m Sheila Cliffe. This is my first entry for a new column on Yosano’s Tourism website. I’ll be sharing my love of kimono with readers, and I am hoping that some of my stories or observations about kimono will inspire an interest in Japanese clothing.
It will be on this website because for several hundred years Tango has been one of the locations at the heart of the Japanese textile and kimono industry. The history of the Tango area is a history bound up with threads, weaving, dyeing, silk and kimono.
I was born and brought up in the UK, but I have lived my adult life in Japan. From my childhood, I have had an interest in art and fashion. When I came to Japan and saw kimono I instantly fell in love, because the designs on them are so colourful and beautiful, and the fabric is so soft and smooth to the touch. I learned to wear kimono and how to teach kimono dressing, I did some stencil dyeing as a hobby, and I began to study the history of kimono as well. Once I started, it was so interesting that I could not stop. I found that kimono was a lot more than just pretty designs on soft silk.
I went on to write a Ph.D on kimono and publish some books too. “The Social Life of Kimono” Bloomsbury 2017, is based on my thesis and it shows that Japan has had a fashion system long before western style clothing arrived in the 1870s. “Sheila Kimono Style” Tokai Kyoiku Kenkyuu Jo 2018, is a photographic book, which shows kimono fashion throughout one year, and how kimono can be connected with the seasons.
I continue to enjoy teaching, studying and sharing about kimono as my life's work, producing small fashion shows and other projects. I hope you will enjoy this column. | https://yosano-kankou.net/en/sheila/1_welcome/
Born and raised in Tokyo, Japan, Misako Kambe inherited a great artistic mind from her bloodline. Her great-grandfather was a renowned calligrapher, well known in both China and Japan, who was also a philosopher and an educator.
Strongly influenced by the Mingei movement, Misako so adored the world of the artisan that in her twenties she joined a traditional kimono studio in Kyoto as an apprentice, specializing in stenciling (paper cutting) for two years. There she was exposed to the magnificent beauty of the historical kimono collection, in contrast to those worn in everyday use. Alongside her brutal apprenticeship, she enthusiastically studied kimono design and the diverse ways a kimono is made.
In 1997, she immigrated to the United States with her husband, who founded a nanotech venture company in Silicon Valley. Here, her passion for art remained intact. Since 1999, she has attended several art courses at Foothill College in California, and finally found her way to live as a ceramic artist. Under the strict instruction of Bruce George, a great ceramic artist, mentor, and wonderful teacher at Foothill College, she quickly developed her skills and refined the technique behind each work. She has received a couple of awards.
For the past ten years she has fired her pieces mostly in wood-fired kilns. She works mostly in porcelain and carves in three different styles: line carving, deep relief carving, and water erosion. Most of her pieces are fired for about 30 hours in a noborigama kiln; others are fired for three to four days in an anagama kiln. Both firings reach very high temperatures, around 2400°F. She has been experimenting with the various effects of ash and soda deposition on her carved, uneven surfaces.
She belongs to three different wood-fired kiln sites: "Hikarigama" in Elkton, Oregon; "Spring Valley Anagama" in Milpitas, California; and "Richard Carter Studio" in Pope Valley, California. Wood firing is such a labor-intensive process that it takes 6 to 10 people cutting and stacking wood in preparation, then loading the pieces and stoking the fire for many hours—then waiting until the kiln cools before unloading and cleaning it. Even though the process requires long hours of hard labor, the rustic and natural results are worth it. | https://abramsclaghornshop.com/products/misako-kambe-cup-w-carvings-1
By Norbury J.W.
Read or Download Solutions manual for elementary mechanics and thermodynamics PDF
Similar fluid dynamics books
Smart material systems: model development
The text can be used as the foundation for a graduate course in any of several disciplines concerned with smart material modeling, including physics, materials science, electromechanical design, control systems, and applied mathematics... [T]his well-written and rigorous text will be valuable for anyone interested in specific smart materials as well as general modeling and control of smart-material behavior.
Fluid Mechanics for Chemical Engineers
Aimed at the standard junior-level introductory course on fluid mechanics taken by all chemical engineers, the book takes a broad-scale approach to chemical engineering applications, including examples in safety, materials and bioengineering. A new chapter has been added on mixing, as well as flow in open channels and unsteady flow.
The second edition (1997) of this text was a completely rewritten version of the original text Basic Coastal Engineering published in 1978. This third edition makes several corrections, improvements and additions to the second edition. Basic Coastal Engineering is an introductory text on wave mechanics and coastal processes, along with the fundamentals that underlie the practice of coastal engineering.
- Geomorphological fluid mechanics
- Constructive modeling of structural turbulence and hydrodynamic instabilities
- Numerical Methods in Fluid Dynamics
- Methods for Constructing Exact Solutions of Partial Differential Equations
- shock-capturing methods for free-surface shallow flows
- Dielectric Properties of Ionic Liquids
Additional resources for Solutions manual for elementary mechanics and thermodynamics
Example text
A cannon ball is fired horizontally at a speed v0 from the edge of the top of a cliff of height H. Derive a formula for the horizontal distance (i.e. the range) that the cannon ball travels. Check that your answer has the correct units.

SOLUTION (diagram: initial speed v0, cliff height H, range R): In the x (horizontal) direction, x − x0 = v0x·t + ½ax·t². Now R = x − x0, ax = 0 and v0x = v0, giving R = v0·t. We obtain t from the y direction: y − y0 = v0y·t + ½ay·t². Now y0 = 0, y = −H, v0y = 0 and ay = −g, giving −H = −½g·t², or t = √(2H/g). Substituting, we get R = v0·t = v0·√(2H/g). Check units: the units of v0·√(2H/g) are (m·s⁻¹)·√(m / (m·s⁻²)) = (m·s⁻¹)·s = m, which are the correct units for distance.
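To make the result and its unit check concrete, here is a minimal numerical sketch; the sample values for v0 and H are hypothetical, chosen only for illustration:

```python
import math

def projectile_range(v0, H, g=9.8):
    """Horizontal range of a projectile fired horizontally at speed v0 (m/s)
    from a cliff of height H (m): R = v0 * sqrt(2*H/g)."""
    t = math.sqrt(2 * H / g)  # fall time, from -H = -(1/2) g t^2
    return v0 * t             # horizontal distance, R = v0 * t

# hypothetical sample values, for illustration only
print(projectile_range(v0=20.0, H=45.0))  # ~60.6 m
```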
FORCE & MOTION — II

Substituting for T and N into the first equation: F cos θ − m2·a − m2·g − μ(m1·g − F sin θ) = m1·a, so F(cos θ + μ sin θ) − g(m2 + μm1) = (m1 + m2)·a, giving

a = [F(cos θ + μ sin θ) − g(m2 + μm1)] / (m1 + m2)

5. If you whirl an object of mass m at the end of a string in a vertical circle of radius R at constant speed v, derive a formula for the tension in the string at the top and bottom of the circle.

SOLUTION (diagram: tension T, weight W, radius R): At the bottom, ΣFy = m·ay gives T − W = mv²/R, so T = W + mv²/R = mg + mv²/R. At the top, T + W = mv²/R, so T = mv²/R − W = mv²/R − mg.
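A quick numerical check of the two tension formulas derived above; the mass, speed, and radius used here are made-up values for illustration:

```python
def circle_tension(m, v, R, g=9.8):
    """Tension at the bottom and top of a vertical circle traversed
    at constant speed: T_bottom = mv^2/R + mg, T_top = mv^2/R - mg."""
    centripetal = m * v**2 / R
    return centripetal + m * g, centripetal - m * g

bottom, top = circle_tension(m=1.0, v=5.0, R=2.0)  # hypothetical values
print(bottom, top)  # 22.3 N and 2.7 N; the string pulls hardest at the bottom
```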
FORCE & MOTION — II

1. A mass m1 hangs vertically from a string connected to a ceiling. A second mass m2 hangs below m1, with m1 and m2 also connected by another string. Calculate the tension in each string.

SOLUTION (diagrams A and B: upper tension T, lower tension T′, weights W1 and W2): Obviously T = W1 + W2 = (m1 + m2)g. The forces on m2 are indicated in Figure B. Thus ΣFy = m2·a2y gives T′ − W2 = 0, so T′ = W2 = m2·g.

2. What is the acceleration of a snow skier sliding down a frictionless ski slope of angle θ? Check that your answer makes sense for θ = 0° and for θ = 90°. | http://www.noscience.se/index.php/epub/solutions-manual-for-elementary-mechanics-and-thermodynamics
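The excerpt breaks off before solving the skier problem; the standard result for a frictionless incline is a = g·sin θ (my completion, not the book's text), which, together with the two-string tensions above, is easy to sanity-check:

```python
import math

def skier_acceleration(theta_deg, g=9.8):
    """Acceleration down a frictionless slope of angle theta: a = g*sin(theta)."""
    return g * math.sin(math.radians(theta_deg))

def string_tensions(m1, m2, g=9.8):
    """Tensions when m1 hangs from the ceiling with m2 hanging below it."""
    return (m1 + m2) * g, m2 * g  # (upper string, lower string)

print(skier_acceleration(0.0))    # 0.0 -> flat ground, no acceleration
print(skier_acceleration(90.0))   # 9.8 -> vertical cliff, free fall
print(string_tensions(2.0, 3.0))  # (49.0, 29.4) N with hypothetical masses
```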
Posted by Lindsey on Monday, July 9, 2012 at 10:34pm.
You can consider the surface area of the sphere (4·π·r²) as the area of the steel plate. Multiply it by the thickness, i.e. 0.5 in, to get the volume of the steel material.
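This hint is the thin-shell approximation V ≈ 4πR²·t; the solution below instead subtracts two sphere volumes exactly. A short sketch with the same figures used below shows the two agree to well under 1%:

```python
import math

R = 3.2      # outer radius in m (from the 21 ft diameter)
t = 0.0127   # plate thickness in m (0.5 in)

shell_approx = 4 * math.pi * R**2 * t                # thin-shell estimate
shell_exact = (4/3) * math.pi * (R**3 - (R - t)**3)  # difference of spheres

print(shell_approx, shell_exact)  # ~1.634 vs ~1.627 m^3
```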
I’ve calculated in SI units:
D=21 ft =6.4 m => R= 3.2 m (outer radius of the tank)
h= 0.5 in= 0.0127 m,
ρ1 =150 lb/ft³=2403 kg/m³ (this is not steel. It may be duralumin)
ρ2 =1000 kg/m³ (water)
V=0.85V2
V1=4πR³/3=4π•3.2³/3=137.26 m³
V2=4π(R-h)³/3=4π•(3.2-0.0127)³/3=135.63 m³
ΔV=V1-V2=137.26-135.63=1.63 m³,
Mass of the tank is
m1= ρ1• ΔV=2403•1.63=3916.9 kg.
Mass of the water is
m2 = ρ2·0.85·V2 = 1000·0.85·135.63 = 115285.5 kg.
The weight of the tank with the water is
(m1+m2) •g.
The weight on each of the four support legs is
(m1 + m2)·g/4 = 119202.4·9.8/4 = 292045.9 N. | http://www.jiskha.com/display.cgi?id=1341887662
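Putting the whole calculation into a short Python sketch reproduces the thread's numbers end to end (the densities and the 85% fill factor are taken from the post above):

```python
import math

R, h = 3.2, 0.0127                       # outer radius, wall thickness (m)
rho_shell, rho_water = 2403.0, 1000.0    # densities (kg/m^3)
g = 9.8

V1 = (4/3) * math.pi * R**3              # outer volume ~137.26 m^3
V2 = (4/3) * math.pi * (R - h)**3        # inner volume ~135.63 m^3
m_shell = rho_shell * (V1 - V2)          # ~3917 kg of shell material
m_water = rho_water * 0.85 * V2          # 85% full of water: ~115286 kg
print((m_shell + m_water) * g / 4)       # ~2.92e5 N per support leg
```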
By Ash R.B.
Read Online or Download A course in commutative algebra PDF
Best counting & numeration books
Meshfree Particle Methods is a comprehensive and systematic exposition of particle methods, meshfree Galerkin and partition-of-unity methods, molecular dynamics methods, and multiscale methods. Most of the theories, computational formulations, and simulation results presented are recent developments in meshfree methods.
Regularization of Inverse Problems (Mathematics and Its Applications)
Regularization of inverse problems is my favourite area of research. .. In civil engineering that is uncommon, so I would recommend this book for civil engineers in my country. .. Good book, thanks.
100 Volumes of ‘Notes on Numerical Fluid Mechanics’: 40 Years of Numerical Fluid Mechanics and Aerodynamics in Retrospect
This volume contains 37 invited contributions, collected to celebrate 100 volumes of the NNFM series. After a general introduction, overviews are given in five parts of the developments in numerical fluid mechanics and related fields. In the first part, information about the series is given and its origins are discussed, as well as its environment and the German and European high-performance computing scene.
Extra info for A course in commutative algebra
Example text
2), sⁿ, hence s, satisfies an equation of integral dependence with coefficients in I. Lemma: Let R be an integral domain with fraction field K, and assume that R is integrally closed. Let f and g be monic polynomials in K[x]. If fg ∈ R[x], then both f and g are in R[x]. Proof. In a splitting field containing K, we have f(x) = ∏ᵢ(x − aᵢ) and g(x) = ∏ⱼ(x − bⱼ). Since the aᵢ and bⱼ are roots of the monic polynomial fg ∈ R[x], they are integral over R. The coefficients of f and g are in K and are symmetric polynomials in the roots, hence are integral over R as well.
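A tiny concrete illustration of the lemma, my own example rather than the book's, with R = ℤ and K = ℚ:

```latex
% Illustration (mine, not the book's): take R = \mathbb{Z}, K = \mathbb{Q}.
% f(x) = x - 1/2 and g(x) = x + 1/2 are monic in \mathbb{Q}[x] but not in
% \mathbb{Z}[x], and indeed
f(x)\,g(x) = \left(x - \tfrac12\right)\left(x + \tfrac12\right)
           = x^2 - \tfrac14 \notin \mathbb{Z}[x].
% By contrast, x^2 - 1 = (x - 1)(x + 1) \in \mathbb{Z}[x], and both monic
% factors lie in \mathbb{Z}[x], exactly as the lemma requires.
```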
8). (6) ⇒ (3): By hypothesis, M ≠ M², so we may choose t ∈ M \ M². But then t + M² is a generator of the vector space M/M² over the field R/M. Thus (Rt + M²)/M² = M/M². By the correspondence theorem, Rt + M² = M. Now M(M/(t)) = (M² + (t))/(t) = M/(t), so by NAK, M/(t) = 0, that is, M = (t). ♣ [The definition] excludes the trivial valuation v(a) = 0 for every a ≠ 0. Corollary: The ring R is a discrete valuation ring if and only if R is a local PID that is not a field. In particular, since R is a PID, it is Noetherian.
Then there is an equation of the form α^n + c_{n−1}α^{n−1} + ··· + c₁α + c₀ = 0 with the cᵢ in V. We must show that α ∈ V. If not, then α⁻¹ ∈ V, and if we multiply the above equation of integral dependence by α^{−(n−1)}, we get α = −c_{n−1} − c_{n−2}α⁻¹ − ··· − c₁α^{−(n−2)} − c₀α^{−(n−1)} ∈ V. 5. If I and J are ideals of V, then either I ⊆ J or J ⊆ I. Thus the ideals of V are totally ordered by inclusion. Suppose that I is not contained in J, and pick a ∈ I \ J (hence a ≠ 0). If b ∈ J, we must show that b ∈ I. | http://reecotech.com/epub/a-course-in-commutative-algebra
Gregory's Classical Mechanics is a major new textbook for undergraduates in mathematics and physics. It is a thorough, self-contained and highly readable account of a subject many students find difficult. The author's clear and systematic style promotes a good understanding of the subject; each concept is motivated and illustrated by worked examples, while problem sets provide plenty of practice for understanding and technique. Computer-assisted problems, some suitable for projects, are also included. The book is structured to make learning the subject easy; there is a natural progression from core topics to more advanced ones, and hard topics are treated with particular care. A theme of the book is the importance of conservation principles. These appear first in vectorial mechanics, where they are proved and applied to problem solving. They reappear in analytical mechanics, where they are shown to be related to symmetries of the Lagrangian, culminating in Noether's theorem.
Read Online or Download Classical Mechanics - An undergraduate text PDF
Best fluid dynamics books
Foundations of Fluid Dynamics - download pdf or read online
The imagination is struck by the striking conceptual identity between the problems met in the theoretical study of physical phenomena. It is totally unexpected and surprising, whether one studies equilibrium statistical mechanics, or quantum field theory, or solid state physics, or celestial mechanics, harmonic analysis, elasticity, general relativity, or fluid mechanics and chaos in turbulence.
New PDF release: An introduction to computational fluid dynamics
This established, leading textbook is suitable for courses in CFD. The new edition covers new techniques and methods, as well as considerable expansion of the advanced topics and applications (from one to four chapters). This book presents the fundamentals of computational fluid mechanics for the novice user.
Read e-book online Reduced kinetic mechanisms for applications in combustion PDF
In general, combustion is a spatially three-dimensional, highly complex physico-chemical process of a transient nature. Models are therefore needed that simplify a given combustion problem to such a degree that it becomes amenable to theoretical or numerical analysis, but that are not so restrictive as to distort the underlying physics or chemistry.
Download PDF by Alexander Ya. Malkin, Avraam I. Isayev: Rheology - Concepts, Methods, and Applications
Rheology is a tool for chemists and chemical engineers to solve many practical problems. They have to learn what to measure, how to measure it, and what to do with the data. The first four chapters of this book discuss various aspects of theoretical rheology and, by means of examples from many studies, show how a particular theory, model, or equation can be used in solving different problems.
Extra resources for Classical Mechanics - An undergraduate text
Example text
Harder problems carry a star (∗). Rectilinear particle motion. 2.1 A particle P moves along the x-axis with its displacement at time t given by x = 6t² − t³ + 1, where x is measured in metres and t in seconds. Find the velocity and acceleration of P at time t. Find the times at which P is at rest and find its position at these times. 2.2 A particle P moves along the x-axis with its acceleration a at time t given by a = 6t − 4 m s⁻². Initially P is at the point x = 20 m and is moving with speed 15 m s⁻¹ in the negative x-direction.
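A quick SymPy sketch of problem 2.1 (my own check, not part of the book):

```python
import sympy as sp

t = sp.symbols('t')
x = 6*t**2 - t**3 + 1           # displacement in metres (problem 2.1)
v = sp.diff(x, t)               # velocity: 12*t - 3*t**2
a = sp.diff(v, t)               # acceleration: 12 - 6*t
rest_times = sp.solve(v, t)     # P is at rest when v = 0: t = 0 and t = 4 s
positions = [x.subs(t, r) for r in rest_times]  # x = 1 m and x = 33 m
print(v, a, rest_times, positions)
```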
Relative to an origin O. The centre of mass G of the particles is defined to be the point of space with position vector R = (m1·r1 + m2·r2 + m3·r3 + ···)/(m1 + m2 + m3 + ···). Show that if a different origin O′ were used, this definition would still place G at the same point of space. 1.8 Prove that the three perpendiculars of a triangle are concurrent. [Construct the two perpendiculars from A and B and take their intersection point as O, the origin of position vectors.]
3-D velocity and acceleration. The velocity v and acceleration a of P are defined by v = dr/dt and a = dv/dt. The scalar quantities defined earlier for the case of straight-line motion are simply related to the corresponding vector quantities defined above. It would be possible to use the vector formalism in all cases but, for the case of straight-line motion along the x-axis, r, v, and a would have the form r = x·i, v = v·i, a = a·i, where v = dx/dt and a = dv/dt. It is therefore sufficient to work with the scalar quantities x, v and a; use of the vector formalism would be clumsy and unnecessary. | http://dainandinbartagroup.in/index.php/read/classical-mechanics-an-undergraduate-text
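As a quick numerical illustration of v = dr/dt and a = dv/dt (the helical trajectory r(t) = (cos t, sin t, t) is my own example, not the book's):

```python
import numpy as np

# Hypothetical helical trajectory r(t) = (cos t, sin t, t), sampled finely.
t = np.linspace(0.0, 10.0, 10001)
r = np.stack([np.cos(t), np.sin(t), t], axis=1)

v = np.gradient(r, t, axis=0)   # v = dr/dt, componentwise
a = np.gradient(v, t, axis=0)   # a = dv/dt

# Mid-sample magnitudes: |v| = sqrt(2) and |a| = 1 for this helix.
print(np.linalg.norm(v[5000]), np.linalg.norm(a[5000]))
```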
In the figure, block 1 has mass m1 = 460 g, block 2 has mass m2 = 520 g, and the pulley is on a frictionless horizontal axle and has radius R = 5.0 cm. When released from rest, block 2 falls 74 cm in 5.4 s without the cord slipping on the pulley. (a) What is the magnitude of the acceleration of the blocks? What are (b) tension T2 (the tension force on block 2) and (c) tension T1 (the tension force on block 1)? (d) What is the magnitude of the pulley's angular acceleration? (e) What is its rotational inertia? Caution: Try to avoid rounding off answers along the way to the solution. Use g = 9.81 m/s².
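A short Python sketch of one standard way to solve this, assuming the usual Atwood arrangement with block 1 rising as block 2 falls (my reading of the figure, which is not reproduced here):

```python
g = 9.81
m1, m2, R = 0.460, 0.520, 0.050   # masses in kg, pulley radius in m
d, t = 0.74, 5.4                  # block 2 falls 0.74 m in 5.4 s from rest

a = 2 * d / t**2                  # (a) from d = (1/2) a t^2   -> ~0.0507 m/s^2
T2 = m2 * (g - a)                 # (b) falling block 2        -> ~5.07 N
T1 = m1 * (g + a)                 # (c) rising block 1         -> ~4.54 N
alpha = a / R                     # (d) no slipping: a = alpha*R -> ~1.02 rad/s^2
I = (T2 - T1) * R / alpha         # (e) net torque / alpha     -> ~0.0265 kg m^2
print(a, T2, T1, alpha, I)
```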
Q: If a cup of coffee has temperature 95°C in a room where the ambient air temperature is 22°C, then, a...
A: Given data The ambient air temperature is given as T=22°C. The equation of temperature is given as ...
Q: If a piece of ice is dropped into a glass of juice heat flows from _
A: Heat always flows from a higher-temperature body to a lower-temperature body, until an equilibrium temperature is reached.
Q: (a) What is the speed of a supersonic aircraft with a 17.0-m wingspan, if it experiences a 1.60-V Ha...
A: The width of the wingspan is L = 17 m. The Hall voltage between the wing tips is ε = 1.60 V. The Ear...
Q: 2.5 kg of water (c = 4190 J/(kg⋅K)) is heated from T1 = 14° C to T2 = 24° C. Input an expression fo...
A: Q = mcΔT = 2.5 kg × 4190 J/(kg·K) × (24 − 14) K ≈ 1.05 × 10⁵ J.
Q: What types of physicists are there?
A: There are two types of physicists. Experimental physicists: these physicists analyse experiments a...
Q: Suppose a woman does 350 J of work and 9700 J of heat is transferred from her into the environment i...
A: Work done = 350 Joules Heat transferred into environment = 9700 Joules
Q: a) For the circuit shown, use the node-voltage method to find v1, v2, and i1. 4.1 b) How much power ...
Q: The coils of a solenoid of length 0.130 m carry a current of 95.0 A, producing a magnetic field of 7.00 ...
Q: T3 3.) If an object suddenly explodes into two pieces, which of the following is true? (Choose all t...
Q: 5. An aviation fuel used by a piston engine aircraft transforms 100,042 J of energy through the foll...
A: Input energy by aviation fuel = 100042 JoulesHeat produced in the engine = 46721 JHeat used by trans...
Q: A contestant in a winter games event pushes a 32.0 kg block of ice across a frozen lake as shown in ...
Q: A falling package with a parachute is greatly affected by air resistance. Suppose a package (m = 25 ...
A: Given data: Mass of package is, m=25 kg. The altitude is of, y=1500 m. The speed is, v=45 m/s.
Q: Three objects of equal mass m1 = m2 = m3 = 2.50 kg are located on the vertices of an equilateral t...
Q: A 0.15-kg toy car is initially moving at 1.3 m/s. Since the battery is losing power, the toy car has...
Q: A cup of coffee has a mass of 0.377 kg. If you place this cup of coffee on a desk and let is sit the...
A: Mass of the cup of coffee m = 0.377 kg; acceleration due to gravity g = 9.8 m/s². To find: the force exerted by the desk on the cup, F = mg = 0.377 × 9.8 ≈ 3.7 N.
Q: How do fluorescent soap residues make clothing look “brighter and whiter” in outdoor light? Would th...
A: Fluorescent soap residues absorb ultraviolet light (which is not visible to the eye) and re-emit it as visible light, which is why the clothing looks brighter and whiter in outdoor light.
Q: A large power plant generates electricity at 12.0 kV. Its old transformer once converted the voltage...
A: GIVEN: Vold = 335 kV, Vnew = 750 kV. (a) Ratio of no. of turns in the secondary coil: Ns,new/Ns,old = Vnew/Vold ≈ 2.24. P...
Q: If two different forces can accelerate the same block from rest to the same final speed, then the po...
A: Given, Two forces accelerated a block from rest to a final speed
Q: A circular curve of highway is designed for traffic moving at60 km/h. Assume the traffic consists of...
A: The speed of a car is, v=60 km/h=16.67 m/s The radius of a curve is, r=150 m
Q: A motor operating on 240 V electricity has a 180 V back emf at operating speed and draws a 12.0 A cu...
Q: A deep sea diver should breathe a gas mixture that has the same oxygen partial pressure as at sea le...
A: The partial pressure of oxygen is (20.9/100) × 1.01 × 10⁵ N/m² ≈ 2.11 × 10⁴ N/m².
Q: Mass A is 40 kg and mass B is 10 kg. The angle that the inclined plane makes with the horizontal is ...
Q: (a) What is the efficiency of an out-of-condition professor who does 2.30 ✕ 105 J of useful work whi...
Q: )The current through the 50.0 resistor in the circuit below is 0.14 A. Determine the & of the batter...
A: From Ohm’s law, Thus, the voltage across the resistor R3 be:
Q: a) Use voltage division to determine the voltage v, across the 40 N resistor in the circuit shown. 3...
A: To determine: Voltage v0 Current through 40 Ω and 30 Ω resistor. Power absorbed by 50 Ω resistor.
Q: Monochromatic radiation at 400 nm, produced by a laser, is completely absorbed by areaction mixture ...
Q: A flat, circular loop has 25 turns. The radius of the loop is 14.0 cm and the current through the wire i...
Q: (a) What is the momentum (in kg · m/s) of a proton moving at 0.662c? kg · m/s (b) At what spee...
A: (a) The momentum of the proton is p = m₀v/√(1 − v²/c²) = (1.67 × 10⁻²⁷ kg)(0.662 × 3.0 × 10⁸ m/s)/√(1 − 0.662²) ≈ 4.4 × 10⁻¹⁹ kg·m/s.
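For reference, a small Python check of part (a) using the question's value v = 0.662c (the pasted working appears to have substituted 0.622):

```python
import math

c = 3.0e8            # speed of light as used in the posted working, m/s
m0 = 1.67e-27        # proton rest mass, kg
beta = 0.662         # v/c as stated in the question

gamma = 1.0 / math.sqrt(1.0 - beta**2)
p = gamma * m0 * beta * c
print(p)             # ~4.4e-19 kg*m/s
```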
Q: Two bodies having masses m1=250g (left side) and m2=450g (right side) attached to spring negligible ...
A: The free-body diagram of the pulley-mass system is shown below. Here, T denotes the tension and a d...
Q: Suppose the resistance of a wire is 2 Ω. If the wire is stretched to three times its length, what wi...
A: Let us consider a wire having a resistance,(R) in Ohm. Also (ρ) be the resistivity of the wire in Oh...
Q: A certain light truck can go around a flat curve having a radius of 150 m with a maximum speed of 26...
A: Radius of the flat curve r1 = 150 m, r2 = 80.5 m; speed of the truck when the radius is 150 m: v1 = 26.5 m/s; at radius 80...
Q: A "low-Earth orbit" refers to a satellite whose circular orbit is only slightly above the surface of...
Q: Blood is flowing through an artery of radius 1.9 mm at a speed of 43 cm/s. Determine the flow rate ...
A: The radius of the artery is r=1.9 mm=0.19 cm Hence the area is A=πr2 If the velocity of the flow is ...
Q: what is the power of a crane that lifts it uniformly in a hurry to a height of 5 meters during 5 sec...
A: The work done is W = mgh = 200 × 9.8 × 5 = 9800 J, so the power is P = W/t = 9800 J / 5 s = 1960 W.
Q: A moonshiner makes the error of filling a glass jar to the brim and capping it tightly. The moonshin...
A: The bulk modulus is B = ΔP/(ΔV/V). Rearranging in terms of ΔP: ΔP = B(ΔV/V) = 1.8 × 10⁹ N/m² × 5 × 10⁻³ = 9 × 10⁶ N/m².
Q: How many volts are supplied to operate an indicator light on a DVD player that has a resistance of 1...
A: Resistance R = 140 Ω; current I = 25 mA; so V = IR = 0.025 A × 140 Ω = 3.5 V.
Q: An iceboat is at rest on a frictionless frozen lake when a suddenwind exerts a constant force of 200...
A: The work done on the boat by the wind force is given by W = F·d·cos θ.
Q: A fairgrounds ride spins its occupants inside a flying-saucer-shaped container. If the horizontal ci...
A: Radius of the ride r = 6 m; centripetal acceleration a = 1.1g = 1.1 × 9.8 = 10.78 m/s², so the speed is v = √(ar) ≈ 8.0 m/s.
Q: A mass m = 4.25 kg is at the end of a horizontal spring on a frictionless horizontal surface. The ma...
A: The mechanical energy is E = ½kA² (I), where k is the spring ...
Q: A hand pushes two blocks across a frictionless pond of ice. Block A has a mass of 1 kg and block B h...
A: Force, F = 15 N mass of A, mA = 1 kg mass of B, mB = 4 kg
Q: Two uncharged spheres are separated by 2.30 m. If 2.50 ✕ 1012 electrons are removed from one sphere ... | https://www.bartleby.com/questions-and-answers/in-the-figure-block-1-has-mass-m1-460-g-block-2-has-mass-m2-520-g-and-the-pulley-is-on-a-frictionles/d5d83c2f-3d94-4324-80df-c79ccd2af4c2 |
Visions michio kaku pdf free download
The various distinct universes within the multiverse are called "parallel universes", "other universes" or "alternative universes". Erwin Schrödinger said that, when his equations seemed to describe several different histories, these were "not alternatives, but all really happen simultaneously". That is the earliest known reference to the multiverse. The term "multiverse" itself was coined by William James in 1895, but in a different context.
The structure of the multiverse, the nature of each universe within it, and the relationships among these universes differ from one multiverse hypothesis to another. Prominent physicists are divided in opinion about whether any other universes exist. Some physicists say the multiverse is not a legitimate topic of scientific inquiry. Concerns have been raised about whether attempts to exempt the multiverse from experimental verification could erode public confidence in science and ultimately damage the study of fundamental physics. Around 2010, scientists such as Stephen M.
In addition, there was no evidence of any gravitational pull of other universes on ours. For a start, how is the existence of the other universes to be tested? To be sure, all cosmologists accept that there are some regions of the universe that lie beyond the reach of our telescopes, but somewhere on the slippery slope between that and the idea that there are an infinite number of universes, credibility reaches a limit. As one slips down that slope, more and more must be accepted on faith, and less and less is open to scientific verification. Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator. | http://studiesoflife.eu/visions-michio-kaku-pdf-free-download/ |
Many people believe that parallel universes exist, but do they actually exist? Does science have any proof?
At present, there is no experimental proof, but there are some arguments that point to the existence of parallel universes.
We know that balance is important for this universe, or multiverse, to exist. There should be an equal amount of positive and negative energy. There should be an equal amount of good and evil. There can't be any partiality. Similarly, for any event to occur, the multiverse must cover all the possible outcomes.
For example, if we throw a die in the universe we live in and get a six, then the same die must be thrown in the parallel universes to cover all the possible outcomes. Hence, on throwing a die, we will get a one in one universe, a two in another universe, a three in yet another, and so on. Hence, six parallel universes will cover all the possible outcomes of throwing a die.
This sounds fantastical, but on this view it is true: parallel universes cover all the possible outcomes of an event.
Suppose Hitler won the Second World War in some other parallel universe, or America and Russia are very good friends in another.
Concept of Deja Vu
Another hint is "déjà vu". Sometimes we get the sensation that an event which is occurring has already happened before. This may be because the same event has already occurred to us in another parallel universe. On this view, our existence is not restricted to a single universe; we have multiple copies coexisting in several parallel universes. | http://logicalhindu.com/how-parallel-universes-exists/
The fifth and final season of 'Fringe' premiered at the end of September 2012. Almost a decade has passed and, despite the whirlwind of premieres offered by streaming video platforms, fans of quality science fiction are, in a way, still orphans. Of one thing we can be sure: the series starring Olivia Dunham, Peter Bishop and Walter Bishop is unrepeatable.
And it is for several reasons. One of the most obvious is how irresistible this trio is. Olivia, played by Anna Torv, and Peter, played by Joshua Jackson, are impossible not to grow fond of, but in my opinion the real heart of this series is Walter, an eccentric and endearing character played with absolute believability by the one and only John Noble (he blew many of us away in 2003 with his take on Denethor in 'The Return of the King', from 'The Lord of the Rings' trilogy).
In any case, this article is not a typical film review. What I propose is that we investigate the real reason why this series has made such a deep impression on many science fiction enthusiasts: its scientific basis. In 'Fringe' there is a lot of fantasy, obviously, but some of the ideas it proposes have a solid germ with which current science is, in some way, flirting.
Before we go any further, here is a little warning for readers who haven't seen this series yet: this article contains spoilers. I will try not to give away the plot of any particular episode, but it is inevitable that we go through the plot line that forms the backbone of the central story it tells. If you haven't enjoyed it yet, you may prefer to set this text aside and come back to it later. Whatever you decide, welcome.
The backbone of ‘Fringe’: the theory of the multiverse
Current science flirts not with one but with several theories that propose the existence of a multiverse made up of a large number of coexisting universes. The bubble-universe theory speculates on the possibility that each of them is expanding and may contain others inside. The theory of infinite universes defends the possibility that each of them is expanding in an unlimited way through the space-time continuum and resides on a different plane, which allows an infinity of them to coexist.
The main source of inspiration for the writers of ‘Fringe’ is the theory of parallel universes
However, the theory we are interested in dwelling on, because it seems to have served as the main source of inspiration for the 'Fringe' writers, is that of parallel universes. Very broadly, it postulates the existence of multiple universes that coexist in the same space-time fabric but are housed in different dimensions, which allows them to remain cohesive without coming into conflict. These are not the only theories that propose the existence of a multiverse, but the three we have just briefly noted are enough to illustrate the germ that inspired Walter Bishop. In the series, of course.
This all sounds very fanciful, yes. And, of course, we cannot ignore the fact that all of them are theories that have not yet been reliably confirmed by the scientific community. But, and here comes the most surprising twist, most of these hypotheses are supported by evidence and measurements collected in experiments involving instruments of indisputable technical value, such as the Planck telescope or the LHC at CERN.
We cannot rule out that the interpretation of the data that has led some scientists to develop these theories is wrong, but we must not overlook the fact that many internationally renowned physicists are involved in their formulation, such as John Archibald Wheeler, Richard Holman and Max Tegmark, among many others. In any case, there is no doubt that the writers of 'Fringe' have managed to get a lot out of this idea and have not hesitated to draw on their imagination when proposing answers to some of the innumerable questions that science cannot yet resolve.
Other science fiction proposals have traveled the same paths that ‘Fringe’ travels before, but this latest one goes into them with an ingenuity and an ambition that have managed to captivate many enthusiasts of this genre
After all, and this is the main leitmotiv of this series, Walter Bishop demonstrates the existence of a parallel universe and, to make matters worse, finds a way to travel to it. This possibility gives rise to an infinity of bizarre and unprecedented, but tremendously entertaining, situations. And all this takes place, and here comes another spoiler, under the intimidating gaze of beings whose humanoid appearance fails to hide capabilities that are beyond the comprehension, at least initially, of the trio that stars in this series.
A brushstroke of science here and another more voluminous one of fiction there
'Fringe' has very clear cinematographic references. In fact, J.J. Abrams, who is one of its creators, has openly acknowledged having been inspired by David Cronenberg's films, and also having taken some ideas from other mythical series, such as 'The X-Files' or 'The Twilight Zone'. And in reality, what this series proposes to us is not new. Other science fiction works have travelled the same paths before, but this one explores them with an ingenuity and an ambition that have managed to captivate many enthusiasts of the genre.
Anyone who has enjoyed this series knows that, as we have seen, its main plot line is based on the existence of parallel universes. However, this is not the only idea that Abrams and his collaborators have taken from current science, with the clear purpose of lending shreds of credibility to the mountain of science fiction, and often outright fantasy, on which 'Fringe' stands.
The sensory deprivation chamber that Olivia is so often subjected to is unmistakably borrowed from the sensory isolation tanks used since the 1950s by some therapists for their presumed beneficial impact on the nervous system. And the device implanted in the Observers' brains that gives them superhuman abilities is nothing but a kind of implant clearly inspired by some of the devices that supporters of transhumanism are already using today.
'Fringe' has many other ingredients that make it very attractive to science fiction enthusiasts. In this article we have looked into some of them, but those who have not yet seen it can rest assured that, above all, this is an extremely entertaining series. Some episodes fall short of the quality bar set by the most inspired ones, but all of them brim with ingenuity. Any time is a good time to discover this series, or, for those of us who have already enjoyed it, to revisit it. It is currently available in its entirety on HBO Max. | https://medswhite.com/it-is-impossible-to-resist-the-physics-that-this-science-fiction-series-offers-us-it-will-take-a-long-time-to-be-overcome/
Stephen Hawking submitted the final version of his last scientific paper entitled “A Smooth Exit from Eternal Inflation” just two weeks before he died, and it lays the theoretical groundwork for discovering a parallel universe.
Hawking was co-author to a mathematical paper which seeks proof of the “multiverse” theory, which posits the existence of many universes other than our own. The paper, called “A Smooth Exit from Eternal Inflation“, had its latest revisions approved on March 4, ten days before Hawking’s death.
The contents of the paper sets out the mathematics necessary for a deep-space probe to collect evidence which might prove that other universes exist.
The highly theoretical work posits that evidence of the multiverse should be measurable in background radiation dating to the beginning of time. This in turn could be measured by a deep-space probe with the right sensors on-board.
Well, Hawking, sorry about that, but you can't receive the Nobel Prize posthumously. | https://strangesounds.org/2018/03/stephen-hawking-discovering-a-parallel-universe.html
It is important to keep in mind that the multiverse view is not actually a theory, it is rather a consequence of our current understanding of theoretical physics. This distinction is crucial. We have not waved our hands and said: “Let there be a multiverse.” Instead the idea that the universe is perhaps one of infinitely many is derived from current theories like quantum mechanics and string theory.
But how do we interpret this to make any practical sense at all? One popular way is to think of all these possibilities as bookkeeping devices so that the only “objectively true” cat state is the one we observe. However, one can just as well choose to accept that all these possibilities are true, and that they exist in different universes of a multiverse.
String theory is one of our most, if not the most, promising avenue to be able to unify quantum mechanics and gravity. This is notoriously hard because gravitational force is so difficult to describe on small scales like those of atoms and subatomic particles – which is the science of quantum mechanics. But string theory, which states that all fundamental particles are made up of one-dimensional strings, can describe all known forces of nature at once: gravity, electromagnetism and the nuclear forces.
During the very early universe, there was a period of accelerated expansion called inflation. Inflation was originally invoked to explain why the currently observable universe is almost uniform in temperature. However, the theory also predicted a spectrum of temperature fluctuations around this equilibrium, which was later confirmed by several spacecraft, such as the Cosmic Background Explorer, the Wilkinson Microwave Anisotropy Probe and the Planck spacecraft.
While the exact details of the theory are still being hotly debated, inflation is widely accepted by physicists. A consequence of this theory is that there must be other parts of the universe that are still accelerating. Due to the quantum fluctuations of space-time, some parts of the universe never actually reach the end state of inflation. This means that the universe is, at least according to our current understanding, eternally inflating. Some parts can therefore end up becoming other universes, which could spawn other universes, and so on. This mechanism generates an infinite number of universes.
The universes predicted by string theory and inflation live in the same physical space (unlike the many universes of quantum mechanics which live in a mathematical space), meaning they can overlap or collide. Indeed, they inevitably must collide, leaving possible signatures in the cosmic sky which we can try to search for.
The problem I (and many physicists and cosmologists) have with the multiverse theory is that it seems to be an all-too-convenient explanation for the extraordinarily unlikely physical values of our universe that have been discovered. Not to mention the fact that other universes are by definition forever closed off from our own, and thus beyond the means of proving their existence scientifically. That makes the multiverse a philosophical, not a scientific, concept.
It would be much more scientific to stick with what we can prove, which is that there is just one universe, and then try to explain exactly how the very odd physical values we observe arose.
But to prove that there is only one universe, you have to disprove the existence of others, so same problem.
I agree that at the moment it may be more philosophical, but there is good historical precedent. At one point, we thought there was only one sun- we didn’t have the technology to prove otherwise. Then we figured out our sun is one of billions in our galaxy, but there was only one galaxy. And then in turn we figured out there are billions of galaxies, etc.
So in that light, maybe some day we’ll actually detect the existence of other universes (or not).
It then raises another fun question- what’s beyond the multiverse? Heh.
“But to prove that there is only one universe, you have to disprove the existence of others” No problem, hence my comment, below.
Infinite universes leave no room for anything else beyond.
An infinite space XOR time realizes all possibilities in all variations. We are unique in only this infinitesimal.
One, in and of itself, is evidence of two or more.
Look around you… where else in nature do you see only one of a type of anything?
They are only extraordinary and weird if you perceive them that way. How is something that ought to exist extraordinary?
Does this mean we should believe the imaginary is real?
Opening a box to find a cat that is both dead and alive is just a paradox, one of many resulting from our limited mathematical logic applied to what we know of QM. The set of all sets (Russell) is another; an infinity of infinities, some greater, some lesser, is yet another.
Mathematics can model every possible event, if all the input is valid.
Where it fails is when you must insert assumptions rather than empirically observed facts. This applies to all but the most simple and contained chaotic systems.
A computer that could successfully model a chaotic state would need an infinity of input, as well as complete quantification of the initial state. That lies in the realm of fortune telling.
It is as yet beyond our capability to produce such a machine.
So string theory, multiverses, cats both dead and alive, memes and other interesting speculations are, as the poster below says, ventures into the philosophical. Whatever gets the physicists through the night!
3) Interaction adds degrees of freedom to thermodynamics.
1) 45 years of quantum gravitation and SUSY are empirically sterile.
2) Boson photon vacuum rules fail toward fermionic quark matter. Parity violations, symmetry breakings, chiral anomalies, baryogenesis, Chern-Simons repair of Einstein-Hilbert action are vacuum trace chiral anisotropy toward hadrons.
3) Opposite shoes within a vacuum left foot have different energies.
4) Blow a cryogenic molecular beam of racemic D_3-4-oxatrishomocubane (rigid cage molecule, 8 chiral centers of 11 skeletal atoms, big dipole moment, facile gram synthesis) through a chirped-pulse FT microwave spectrometer. If rotational spectra (temperatures) of the racemate’s enantiomers are not degenerate, vacuum chirality toward matter is measured.
5) Physics is quantitatively repaired.
Cosmos is eternal entanglement (equator) of self-contradiction, absolute unity of relative infinite units, oneness of pairness.-Aiya-Oba (Natural philosopher and discoverer of Nature’s absolute logic).
Clap one hand, record it video and audio, play it back.
(3,472,073)^7 + (4,627,011)^7 = (4,710,868)^7, falsifying Fermat’s Last Theorem. Go ahead, multiply it out…or demonstrate 7 + 1 does not equal 2.
Actually it does equal 2 PAIRS of 4.
Congratulations Uncle Al.- Aiya-Oba (Discoverer of Nature’s absolute logic and state).
What’s more likely: that there are in fact multiple universes in existence, or that the theories upon which that likelihood is based are wrong?
“it is rather a consequence of our current understanding of theoretical physics.” This is just string theory propaganda, plain and simple. The whole idea that there is experimental evidence for all of this is just a big fat lie; it just isn’t true. Now, someday, yes, we might discover something, this is always true. Chances are it will be a new particle, a new theory, a new adventure, and we can look forward to this. In this universe.
Fascinating thought experiment at present, but perhaps one of the most profound discoveries in science if it can be proven. One of the gravitational-wave detectors referenced in the article, LIGO, is here in Louisiana near the town of Livingston. It has recently undergone an upgrade to increase its sensitivity and give it a better chance of detecting gravitational waves. This should give a better understanding of black hole formation, neutron star mergers, and other massive distortions of space-time, in addition to the detection of possible signatures from parallel universes. Though some may question the pursuit of such knowledge, apart from the intangible joy of discovery and learning more about how our universe (and yes, possibly others) works, there are bound to be many currently unforeseen benefits from this expanding pool of cosmic knowledge. After all, one hundred years ago, who knew how useful Einstein's theories of relativity and atomic theory would be?
Existence of our universe is rock evidence of existence of multiverse.
Unity of infinity, equator (entanglement ) of self-contradiction, eternal oneness of pairness relativity, is Nature’s absolute logic and state, the self-creator of All in all (Cosmos). | http://blogs.discovermagazine.com/crux/2015/09/03/universe-many/ |
How can you find a parallel universe if you don't even know it's there?
To put it bluntly: If there is a parallel universe, we will never know about it. The existence of any universe other than the one we can see and make measurements in is pure conjecture and speculation.
Parallel universes have never been observed. They have been speculated in a great many ways, based on symmetry arguments and other mathematical calculations, intuition, or even religion (Heaven and Hell are essentially parallel universes, if you believe in them).
If you cannot measure that they exist, and cannot measure that they do not exist, then they are not science. This does not mean that they do not exist, but merely that their existence is not a question that science can answer (not, that is, until we could measure their existence or lack thereof). | http://scienceline.ucsb.edu/getkey.php?key=425
The universe is incredibly finely tuned, not only for its own existence, but for the existence of complex, intelligent life. This fact does not sit well with naturalists and atheists. It is enormously difficult to explain the unfathomable specificity and precision of the cosmos on the basis of chance alone. Indeed, the values of some physical constants were initial conditions present at the universe's origin, and thus cannot possibly be explained by random chance processes. So how do non-theists explain how our universe got so lucky?
While there are a few different approaches floating out there, the one garnering the most attention and support recently is the multiverse hypothesis (a.k.a the Landscape). Multiverse theory proposes the existence of a near-infinite number of universes. Given the multitude of universes–it is reasoned–there is bound to be at least one that is life-permitting. As David Berlinski writes, “[B]y multiplying universes, the Landscape dissolves improbabilities. To the question What are the odds? the Landscape provides the invigorating answer that it hardly matters.”
Scientists who subscribe to the multiverse view it as the only viable naturalistic alternative to a divine creator. As Tim Folger wrote:
Physicists don’t like coincidences. They like even less the notion that life is somehow central to the universe, and yet recent discoveries are forcing them to confront that very idea. Life, it seems, is not an incidental component of the universe, burped up out of a random chemical brew on a lonely planet to endure for a few fleeting ticks of the cosmic clock. In some strange sense, it appears that we are not adapted to the universe; the universe is adapted to us.
Call it a fluke, a mystery, a miracle. Or call it the biggest problem in physics. Short of invoking a benevolent creator, many physicists see only one possible explanation: Our universe may be but one of perhaps infinitely many universes in an inconceivably vast multiverse. Most of those universes are barren, but some, like ours, have conditions suitable for life.
The idea is controversial. Critics say it doesn’t even qualify as a scientific theory because the existence of other universes cannot be proved or disproved. Advocates argue that, like it or not, the multiverse may well be the only viable nonreligious explanation for what is often called the “fine-tuning problem”-the baffling observation that the laws of the universe seem custom-tailored to favor the emergence of life.
What I find particularly interesting is how fine-tuning is viewed as a problem in the first place. No theist would view it as a problem. It is only problematic to atheists and naturalists because it implies a designing intelligence, and such a being is anathema to them. In order to avoid the obvious conclusion that an intelligent being was responsible for fine-tuning the universe for existence and life, they propose a naturalistic theory that is, admittedly, not even scientific (because it is neither provable nor falsifiable). Proponents of the multiverse are honest about this fact. Consider Andre Linde. When asked if physicists will ever be able to prove the multiverse hypothesis, he responded:
“Nothing else fits the data. We don’t have any alternative explanation for the dark energy; we don’t have any alternative explanation for the smallness of the mass of the electron; we don’t have any alternative explanation for many properties of particles. What I am saying is, look at it with open eyes. These are experimental facts, and these facts fit one theory: the multiverse theory. They do not fit any other theory so far. I’m not saying these properties necessarily imply the multiverse theory is right, but you asked me if there is any experimental evidence, and the answer is yes. It was Arthur Conan Doyle who said, ‘When you have eliminated the impossible, whatever remains, however improbable, must be the truth.’?”
In other words, it doesn’t need to be proven by evidence. It doesn’t even need to be probable. It only needs to be the last man standing. I’ll agree with Linde that no other naturalistic hypothesis has more explanatory power than the multiverse (even though it has no empirical support), but when the list of live options is expanded beyond naturalistic hypotheses, there is a better explanation of the data: theism. But Linde excludes theism a priori from the list of live options. Why do that? Theism has more explanatory plausibility and rational evidence in its favor than the multiverse, and thus should be preferred.
The reason those like Linde take the multiverse hypothesis seriously is not that they are following the evidence where it leads, but that the evidence points to a designer of the universe, and they wish to avoid such a being at all costs, even if it means believing in an improbable, unprovable theory. As Bernard Carr, a cosmologist at Queen Mary University of London, said, "If there is only one universe you might have to have a fine-tuner. If you don't want God, you'd better have a multiverse." Apparently "it is better to have many worlds than one God." If ridding themselves of one supposed fairy tale (theism) requires belief in another, so be it.
The father of multiverse theory, Leonard Susskind, is very clear about the anti-theistic motivations of theories such as the multiverse. When asked if we are stuck with an intelligent designer if his Landscape theory doesn’t pan out, he responded:
I doubt that physicists will see it that way. If, for some unforeseen reason, the landscape turns out to be inconsistent – maybe for mathematical reasons, or because it disagrees with observation – I am pretty sure that physicists will go on searching for natural explanations of the world. But I have to say that if that happens, as things stand now we will be in a very awkward position. Without any explanation of nature’s fine-tunings we will be hard pressed to answer the ID critics. One might argue that the hope that a mathematically unique solution will emerge is as faith-based as ID.
His point could not be clearer: the desire of naturalists to find a plausible naturalistic explanation on par with the design hypothesis is their driving motivation. Any theory will do, even if, according to Susskind, it is as faith-based as Intelligent Design. It appears that blind faith is acceptable in science, so long as its object is not God. They'll blindly believe in the existence of universes they cannot see, but not in the existence of a God who has made Himself known in the very cosmos they study.
David Berlinski, The Devil's Delusion: Atheism and Its Scientific Pretensions (New York: Crown Forum, 2008), 124, 135.
Tim Folger, "Science's Alternative to an Intelligent Creator: the Multiverse Theory," Discover magazine; available from http://discovermagazine.com/2008/dec/10-sciences-alternative-to-an-intelligent-creator; Internet; accessed 11 November 2008.
Leonard Susskind, in an interview with Amanda Gefter of New Scientist, "Is String Theory in Trouble?", 17 December 2005 edition, p. 48; available from http://www.newscientist.com/channel/fundamentals/mg18825305.800.html; Internet; accessed 5 January 2006. | https://thinkingtobelieve.com/2008/12/02/the-multiverse-god-and-reason/
Are Parallel Universes Real? Here Are Physicists' Leading Multiverse Theories
Right now there might be a whole other universe where instead of brown hair you have red hair, or a universe where you're a classical pianist, not an engineer. In fact, an infinite number of versions of you may exist in an infinite number of other universes.
The idea sounds like science fiction, but multiverse theories — especially those that are actually testable — are gaining traction among physicists. Here are three of the most compelling theories:
If the universe is infinite, multiple universes probably exist.
If the universe is infinite, like many believe it is, then there must be huge patches of the universe that are simply too distant for us to see.
Our own universe is defined by the sphere-shaped region from which light has had time to reach us. The universe is 13.8 billion years old, so any patches more than 13.8 billion light-years away aren't visible to us. In that sense, multiple universes exist outside our own visible universe simply because the light from them hasn't had enough time to reach us.
The implications are mind-bending.
"If the universe is truly infinite, if you travel outwards from Earth, eventually you will reach a place where there's a duplicate cubic meter of space," Fraser Cain explained to Universe Today. "The further you go, the more duplicates you'll find."
That means there could be another you out there in the universe — or an infinite number of yous.
Scientists are trying to figure out if the universe is finite or infinite by studying signatures in the cosmic microwave background, or the radiation left over from the Big Bang. But the bottom line, according to physicist Joseph Silk, is, "we may never know."
The Big Bang and inflation suggest the existence of a vast multiverse.
The Big Bang theory suggests that when the universe was just a fraction of a second old, it underwent a period of rapid inflation where it "expanded faster than the speed of light," according to Space.com. Expansion then slowed down, but there's lots of evidence that it kept happening and is still happening.
Some physicists think parts of space-time may have expanded faster than others after the Big Bang, creating "bubble" universes.
So if inflation is real, our universe might just be a bubble floating in a whole bubble bath of other sphere-shaped universes.
Inflation is carrying them farther and farther away from us, so we'd have to invent faster-than-light travel if we ever wanted to visit one.
"It's hard to build models of inflation that don't lead to a multiverse," Alan Guth, a theoretical physicist from Massachusetts Institute of Technology, said in 2014. "It's not impossible, so I think there's still certainly research that needs to be done. But most models of inflation do lead to a multiverse, and evidence for inflation will be pushing us in the direction of taking [the idea of a] multiverse seriously."
Some physicists think it's possible to prove the bubble idea. When our own bubble universe was first forming, it may have collided with another bubble universe before inflation separated us.
Physicists like Matthew Johnson are searching through the cosmic microwave background (the radiation left over after the Big Bang) for signs of collisions. Johnson thinks the collisions might have left behind visible bruises in that radiation.
Gravity is hiding in other universes
Physicists have no idea why gravity is so much weaker than the other fundamental forces. Some theories imply the existence of parallel universes.
"A small fridge magnet is enough to create an electromagnetic force greater than the gravitational pull exerted by planet Earth," the European Organization for Nuclear Research, better known as CERN, explains. "One possibility is that we don't feel the full effect of gravity because part of it spreads to extra dimensions. Though it may sound like science fiction, if extra dimensions exist, they could explain why the universe is expanding faster than expected, and why gravity is weaker than the other forces of nature."
Physicists are actually searching for evidence of other dimensions inside the Large Hadron Collider at CERN. Entire universes full of red-headed, piano-playing yous could be hiding inside those extra dimensions. | https://www.mic.com/articles/138595/are-parallel-universes-real-here-are-physicists-leading-multiverse-theories |
What is a parallel universe, in fiction or in reality? It is basically a science-fiction-flavoured theory that entails the presence of multiple universes. The parallel-universe theory finds its base in the Big Bang theory. The reason for its relation to the Big Bang is that theory's account of the universe's creation as the result of a blast of matter packed together into a space smaller than an atom. Yes, you read it right: the Big Bang describes the universe as something that came into existence as the result of a blast that expanded a body smaller than an atom. So, what is a parallel universe?
It is unknown how, and by what cause, everything came to be packed so closely together; but it was all compressed into a tiny space, according to scientists. And the Big Bang theory is endorsed by a wide section of scientists around the globe. It is unknown what caused the blast we know as the Big Bang, and it is also unknown where the energy came from that compressed, and finally decompressed, such a closely packed body. But, in science, a hypothesis becomes a theory if no one comes forward to oppose it.
For now, our concern is the existence of parallel universes. Parallel universes are multiple hypothetical universes, also known collectively as the multiverse, so to say. The idea comes from the possibility of multiple big bangs taking place in different places all over an infinitely large universe. The infinitely large universe comes from the inflation theory, which says that the universe is inflating, or expanding, at an enormous pace. This is also termed the cosmological inflation of the universe.
Why Would Parallel Universes Exist?
It says that the universe came out of a body compressed into an area smaller than an atom. Then, due to unknown factors, it exploded, resulting in the formation of the observable universe. Additionally, the resulting universe keeps inflating all the time. Moreover, there could be many such small compressed elements present in places unknown to humans, out in space. How vast is space, or the universe? No one knows, and there are no means available to find out.
The idea of the universe resulting from a blast comes from the scattered appearance of planets, moons, meteoroids, asteroids, and other bodies wavering or wandering in space. Have a look at the blast in the GIF below and ride the horse of imagination: the central part represents an atom, and the blast scatters the things that were previously compressed, as per the Big Bang theory. The only difference between the explosion in the image and the one in the universe is that the particles in the image settle due to the pull of gravity, whereas in space, in the absence of such gravity, they do not.
How could there be many other such smaller-than-an-atom bodies in far, unknown places? The answer to this complex yet interesting question lies in the outcomes of quantum physics experiments. In the double-slit experiment, when an atom is fired, its path can only be guessed at without any certainty, and its place of final appearance cannot be predicted. You are thus left to deal with possibilities. A part of this might be some natural phenomenon that we cannot yet predict or know.
How does this tell us what a parallel universe is?
Anyway, the closely packed substance that was supposed to be smaller than an atom could exist in many of the various places of the vast universe. And thus there could be many universes that are identical to our universe. Another question arises: how many universes are there? It is a debatable question, and the discussion gets interesting. It will be uploaded soon.
There is a possibility of those universes being different from our universe, but even then the differences must be calculated or predictable. Parallel universes and the multiverse also involve a concept that arises from discussions of alternate realities.
Opposition to Parallel Universe Theory
The parallel-universe theory has its critics among educated sections of society, and the criticism arises from the very basis of the existence of the universe: the Big Bang theory. The Big Bang theory is famously known as the foundational block of the scientific explanation of the existence of life. However, it is unknown whether it truly explains the existence of the observable universe.
Therefore, the parallel-universe theory, which is basically an offshoot of the Big Bang theory, inherits that theory's problems of credibility. And thus it remains unknown whether, and how many, parallel universes exist in space and beyond. | https://www.hardhour.com/what-is-a-parallel-universe/
- Stephen Hawking is named as coauthor on a paper submitted March 4 – 10 days before he died.
- It sets out a way of testing whether other universes are real.
- Its mathematical theories could be tested with a deep-space probe.
Stephen Hawking submitted his final scientific paper just a week and a half before he died, and it lays the theoretical groundwork for discovering a parallel universe.
Hawking, who died Wednesday at 76, was coauthor to a mathematical paper that seeks proof of the “multiverse” theory, which posits the existence of many universes other than our own.
The paper, called “A Smooth Exit from Eternal Inflation,” had its latest revisions approved March 4, 10 days before Hawking’s death.
According to the Sunday Times newspaper, the paper is due to be published by an unnamed “leading journal” after a review is complete.
ArXiv.org, the Cornell University website that tracks scientific papers before they are published, has a record of the paper including the March 2018 update.
According to The Sunday Times, the paper sets out the mathematics necessary for a deep-space probe to collect evidence that might prove that other universes exist.
The highly theoretical work posits that evidence of the multiverse should be measurable in background radiation dating to the beginning of time. This in turn could be measured by a deep-space probe with the right sensors.
Thomas Hertog, a physics professor who coauthored the paper with Hawking, said the paper aimed “to transform the idea of a multiverse into a testable scientific framework.”
Hertog, who works at KU Leuven University in Belgium, told The Sunday Times he met with Hawking in person to get final approval before submitting the paper.
The newspaper said that if such proof were ever found, it would make the scientists behind it likely candidates for a Nobel Prize.
However, since Nobel Prizes cannot be awarded posthumously, Hawking would be ineligible to receive it.
| https://www.businessinsider.com.au/stephen-hawking-paper-from-just-before-he-died-could-find-new-universe-2018-3
The multiverse and parallel universes: a theme that has captivated our intellectual and aesthetic curiosity for as long as we can remember. Philosophers, mathematicians, physicists, thinkers, artists, astronomers, and other experts have tackled the theme with numerous approaches and methods, but it remains an ever-captivating topic. BLOOP 2015 invites us to reflect on multiverse (parallel universe) theories. Do parallel universes exist? What are they like? How could we ever interact with them? Where are we? Is déjà vu a sign of a parallel universe? Is there really an infinite set of "present"s? Artists from around the world will build aesthetics questioning and exploring the existence of other simultaneous realities. | http://www.biokip.com/bloop-festival-2015/bloop/bloop-archived/bloop-international-art-festival-ibiza
Today I heard on the radio that the Super Collider has discovered mini black holes. They said that this discovery is proof of the existence of parallel universes and that the "Big Bang" never happened. I don't understand this, or why this discovery would mean that parallel universes exist. Does anyone know?
- "Super Collider" - Which particle collider/accelerator/detector are you referring to? Also, how could mini black holes prove the existence of parallel universes or disprove the Big Bang theory? – HDE 226868, Mar 28 '15
- I believe he was referring to an experiment conducted over the weekend at CERN. Though it may be better to limit interpretations of the results to scientists rather than media reports. This may also be more fitting for physics. – Mitch Goshorn, Mar 29 '15
- I don't have much of an answer for you, but here is a source of information that most people seem to be referencing. In the end it seems that the media is trying to make money, as usual, by making a bigger deal out of something than it should be. sciencedirect.com/science/article/pii/S0370269315001562 – GingerBeard, Mar 29 '15
- @HDE 226868 I just reported what I heard on the radio, so it would take a scientist to answer your questions. – Peter U, Mar 29 '15
- The Superconducting Super Collider was the big US accelerator project for the new millennium, until it was cancelled in 1993. en.wikipedia.org/wiki/Superconducting_Super_Collider Originally designed to be 5 times more powerful than Europe's current LHC, the project is now just a large, expensive hole in the ground in Texas. Journalists like the term "supercollider", so you still hear it used. – Wayfaring Stranger, Mar 29 '15
Your radio information source is wrong. The Large Hadron Collider has not discovered mini black holes. When it comes back online this spring, the LHC will begin looking for mini black holes: Large Hadron Collider Could Prove the Existence of Star Trek's Parallel Universe
Journalists do like their headlines, but the gist is this:
When the Large Hadron Collider is brought back online in the spring, researchers will be looking for the existence of mini black holes. These mini black holes would lend support to string theory, which posits that different dimensions and parallel universes are possible.
No miniholes yet, maybe never.
- Yes, but why would the discovery of mini black holes mean that parallel universes exist and bring the Big Bang theory into doubt? Actually, this theory makes more sense to me, because with the Big Bang theory I could never understand how all the matter in the Universe could start out in something the size of a grapefruit. Also, how would they determine that a mini black hole has formed? It can't be seen, so what gives it away? – Peter U, Mar 30 '15
In 2007, Nobel laureate Steven Weinberg suggested that if the multiverse existed, "the hope of finding a rational explanation for the precise values of quark masses and other constants of the standard model that we observe in our Big Bang is doomed, for their values would be an accident of the particular part of the multiverse in which we live." | https://astronomy.stackexchange.com/questions/10297/proof-that-parallel-universes-exist
In January 2021, IDO Incorporated engaged with this long-term corporate client on an administrative area design project. This portion of the project was part of a larger lab consolidation project where IDO used a holistic approach to provide an all-inclusive design package for the client.
The holistic approach taken throughout the project included: specialized move management to move the occupants out of the space(s), coordinating existing furniture teardown, specifying finishes and furniture, coordinating the FF&E installation, and moving the occupants back into the space(s).
The project posed a design challenge for the team. The new administrative footprint shrank to less than half of its previous size to allow for new laboratory space. The IDO team worked diligently to make the best use of the available space by providing a collection of workstations, collaboration areas, meeting rooms, and individual focus booths for the end users.
As for the interior design, this project established the use of a new standard palette for this client. Cooler accent colors with dark and light neutrals were developed to play into the overall mood of this area. Wood was a highlighted finish, used on a ceiling architectural design feature, on accent walls, and in various finishes throughout to bring a balance of natural warmth into the spirit of this new palette. The inspiration for this project was a monochromatic scheme with green, blue, and purple accent fabrics and other accent pieces.
The project was completed in December 2022. | https://idoincorporated.com/projects/corporate-client-7/ |
As we move into a new phase of supporting a safe return to work, employers face the challenge of adopting preventive measures to achieve a safe and healthy workplace and to mitigate the transmission risk of COVID-19. So, if you are a business owner who needs to reopen your office in safe and healthy conditions, or a technician commissioned by your employer to prepare the office premises for reopening, then you’ve landed on the right page!
In this focus article I’ll provide some useful, non-binding guidelines to help employers and workers stay safe and healthy in a working environment, along with a 3D project of a workplace equipped with anti-COVID-19 objects. The model has been created with Edificius and is ready to use as a reference for your business.
Download the free trial version of Edificius in order to open the included 3D project and follow this step-by-step guide.
Let’s take a look at the results of this project and this render created using Edificius. As you can see, it shows how office spaces have been rearranged while taking into account the measures for COVID-19 contagion control.
In the first part of this article, we’ll focus on the measures that employers should consider when planning to resume work and bring back employees to the workplace premises. In the second part, we have some technical information useful for architects, engineers and surveyors in charge of taking care of the safety measures to be implemented in an office.
Employer obligations
The main employer’s obligations are to:
- adequately inform, train and educate employees with regard to proper behavior in the workplace;
- identify possible hazards in the office;
- adopt suitable anti-contagion measures.
Covid-19 office risk assessment
The assessment of the infection transmission risk is a responsibility of the employer.
In order to proceed with the prevention and containment of the risk of transmission of the SARS-CoV-2 virus, you need to analyze the ways in which infection can spread in your work environment, in particular considering:
- the office spaces arrangements and regular work practices;
- a qualitative analysis of the possibility of transmission as a result of contact between workers, in relation to workplace parameters (environments, organization, tasks, working methods, etc.).
The analysis is an essential step before adopting the right risk mitigation measures and must include:
- work organization analysis;
- identification of activities that can be performed with remote working;
- workers division into groups;
- identification of walking paths for groups of workers;
- classification of places;
- analysis of the layout of the classified places;
- identification of staff operating externally;
- review of transportation used by workers;
- identification of contracted work activities;
- secondary risk analysis;
- review of emergency plans and procedures.
Prevention and protection measures
Based on the specific results of the analyzes carried out, you can choose the most appropriate organizational, procedural and technical measures, which include:
- measurement of body temperature;
- supply of protection devices;
- use of information signs and brochures;
- installation of hand sanitization points;
- personal hygiene precautions;
- training, information and education courses for workers;
- collection of potentially infected waste;
- ventilation/air exchange;
- organization of the corporate layout in order to encourage interpersonal distancing;
- common areas management;
- reorganization of working hours;
- premises access and exit control;
- access control for external suppliers;
- travel and meeting management;
- cleaning and sanitizing;
- business organization;
- health surveillance and protection of vulnerable workers;
- measures to prevent epidemic outbreaks;
- management of a symptomatic person;
- regulatory protocol update.
How to rearrange office spaces
To review the functional distribution of an office and ensure contagion risk prevention, you need to:
- manage entry and exit routes: if possible, provide for a differentiation of the routes in order to avoid the intersection of movements and flows, the identification of multiple access points to the building, an automatic door closing and opening system, removal of turnstiles and any other measures you believe may be useful to ensure interpersonal distancing of at least 1 meter and to avoid queues and gatherings;
- set up an area for measuring body temperature: depending on the possibilities and space available, provide a special room or set aside a corner, possibly near the entrance, for measuring temperature. You will need to arrange a table with a measuring device, a pack of disposable gloves, a hand-sanitizing dispenser and a pedal-operated waste bin;
- arrange points for hand sanitization: near the entrance and in common areas, provide areas where workers can sanitize their hands. Remember that the areas identified must not obstruct paths or generate queues and gatherings;
- review workstation organization: in open-plan offices, or where there are multiple desks in a single environment, you should make sure that workers are at least one meter apart even when they are sitting at their workstations. You can also guarantee proper spacing with chessboard-style seating planning and by reducing the number of people in the same area;
- review the common areas layout: face-to-face meetings are to be avoided and should be replaced with teleconferences, etc. When this cannot be avoided, you must organize the space to ensure proper distancing between people and adequate air exchange in the rooms. Whenever possible, it is best to avoid contact with door handles and common objects. Meeting rooms, if not used for their traditional purposes, can be set up as isolation rooms for potentially infected employees or used to create additional safe workstations;
- manage internal routes: you need to identify the most frequently used internal routes to common areas, workstations, bathrooms, canteen, etc. You can establish dedicated paths to ensure interpersonal distancing and avoid any interference between flows. You can also provide a map of the routes to be displayed in places visible to workers;
- organize pertinent spaces: remember not to neglect the reorganization of the outdoor spaces and parking areas pertaining to the office;
- use protective barriers: if it is not possible to space the workstations farther apart, or as a further preventive measure, provide plexiglass (or similar material) protective barriers that can be easily sanitized;
- ensure the functioning of air exchange systems: make sure that all environments have adequate air exchange through natural or mechanical ventilation. If technically possible, avoid recirculating air in the systems and using air-jet hand dryers, which favor the dispersion of droplets and the circulation of air jets, and provide for filter maintenance;
- provide some space for the isolation of potentially infected people: define, where possible, a place dedicated to the isolation and temporary accommodation of people who experience COVID-19 symptoms at work. You can set up this space in rooms not used in this phase, such as meeting rooms, auditoriums, waiting rooms, etc.;
- arrange a space for storing personal items next to the office entrance. Remember that the space must be large enough to avoid close gatherings of people and contact between objects belonging to different people, and the furnishings must be easy to wash and sanitize.
Finally, it is always advisable to ensure that all measures of prevention of contagion are also accessible by the disabled (height of sanitizing devices, visibility of information signs, safe access to horizontal and vertical connections, etc.).
In addition to these general provisions, remember that any design solution, even the most creative one, should comply with the measures outlined above.
Case study: office reopening project
Let’s now take a look at a practical case study that you could use as a reference if you need to reopen your office, professional studio or small/medium-sized business.
Back to the office: the surveyed stage
This office project example has a rectangular shape with access from the main road.
The entrance consists of a small lobby that works as a reception area and also protects the working space from the outside. In the single open-plan space, 16 workstations are mounted in blocks of 4, arranged on the long side orthogonally to the large windows, which are shielded by a brise-soleil system. Workstations are arranged to allow employees sufficient space to move around.
Three desks with printers separate the workstations from the toilets and the coffee break room. On the opposite wall there are two copiers and shelves. Finally, on one of the short sides of the department, there are a glass-walled meeting room and the director’s office.
Office reopening plan
In compliance with the guidelines for mitigating the COVID-19 contagion risk, described in detail in the previous paragraphs, we created a safe work environment by following the precautions described below.
Entrance area
From the front door, the path for employees is highlighted on the ground by means of a 10 cm wide green strip, useful for avoiding crossing movements/flows and for ensuring interpersonal distances of at least 1 m.
In the same rather large space, there are lockers for the storage of personal items, the point for measuring body temperature and a table on which to place a pack of disposable gloves, a hand-sanitizing dispenser and a pedal-operated waste bin. Signs showing the main rules to be followed are displayed near the entrance door.
Open-plan office
Past the entrance there is a small room, made with plasterboard walls, for the isolation of potentially infected people. The paths on the ground lead to the workstations grouped in blocks of 4 units and with the possibility of being spaced further apart.
The minimum distance of 1 m between the employees was verified by tracing, in plan view, a 75 cm radius circle that defines the protected area.
As a reference, we considered a radius of 75 cm, which also accounts for the physical size of the person, estimated at around 50 cm. By adding the distances between two people, we obtain a net interpersonal distance of 1 meter.
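To double-check this arithmetic, here is a minimal sketch (the values are simply the assumptions stated above, not parameters of Edificius):

```python
# Quick check of the spacing rule described above (values are the stated assumptions).
PROTECTED_RADIUS_CM = 75  # radius of the protected circle traced around each seat
BODY_WIDTH_CM = 50        # estimated physical size (width) of a person

# Two adjacent protected circles touch when the seat centers are 150 cm apart.
center_distance_cm = 2 * PROTECTED_RADIUS_CM

# Subtract half a body width on each side to get the clear gap between two people.
net_gap_cm = center_distance_cm - BODY_WIDTH_CM

print(net_gap_cm)  # 100 cm, i.e. the required 1 m net interpersonal distance
```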
The small dividing panels between the desks have been replaced and supplemented with 80 cm-high plexiglass panels.
From each workstation you can reach the bathrooms and the coffee break room by following the paths on the ground, avoiding the intersection of flows.
Toilets
Two dispensers with hand sanitizer have been installed next to the entrance to the bathroom and the coffee break room.
The sliding door to access the bathrooms has been automated, to avoid contact with the handle. Electric hand dryers have been replaced with paper towels.
Coffee break room
In this room, the small sofa has been removed to increase the distancing space for employees, whose number is limited to a maximum of 3 people at a time.
Illustrated signs with the measures to be taken by workers have been placed at the entrance.
Director’s office
In order to respect safety distances, a desk chair and the two armchairs next to the sofa have been removed.
Meeting room
The number of seats in the meeting room has been reduced from the original 6, and a protective panel has been placed on the central table. A large screen has been installed for videoconference meetings among employees and with external contacts.
Small signs with the same color and graphics, indicating the risk mitigation measures to be respected, have been placed in all environments.
Supporting material ready for free download
Download all the material from here. You’ll find everything you need to adapt your office to the COVID-19 contagion-risk mitigation measures. | https://biblus.accasoftware.com/en/resuming-work-after-the-lockdown-workplace-safety-guidelines/
A Guide to Returning to the Workplace
We can all agree that we won’t be returning to the same office situation we left in March. With a host of conflicting advice around personal hygiene, deep cleaning of spaces and keeping a bit more physical distance between us, returning to the office can seem a bit daunting.
Given that there won’t be a one-size-fits-all solution for everyone, it’s time to start considering what options are available to ensure we are able to start bringing people back into the office (if that is indeed the best strategy for your business). In this article, we’ve teamed up with Work.Life to explore some of these options to help you return to the workplace safely and efficiently.
Before you read on …
With the help of Work.Life, we’ve also created a handy downloadable document for you to refer back to. Please download the pdf here!
Rotating employees
Rotation plans and a shift system are obvious first considerations. Whether it be creating morning and afternoon shifts or splitting the organisation into teams to allow for a set group to work certain days, this will help to prevent cross-contamination between the groups.
However, it would be a mistake to just apply this as a blanket approach across the team. Keep in mind a digital-first policy (i.e. prioritising home working when possible) and consider who actually needs to be in the office.
To work out who needs to be in the office, start by rolling out individual assessments. The assessment criteria may include job function, transportation risks, health and age, wellbeing and facilities at home. Additionally, ask whether your employees want to return to the office. Some might be more cautious than others, so it’s always good to check on how they are feeling about the return.
Once you have a better understanding of your employees’ needs and how they are feeling, you should also consider implementing new office technology and in-home working solutions to ensure connectivity between team members.
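Returning to the rotation plans above, here is a minimal sketch of one possible scheme (week-parity alternation; the group assignments and names are made up for illustration, not a prescribed policy):

```python
# Hypothetical A/B rotation: each group attends the office on alternate ISO weeks.
from datetime import date

def office_group(day: date) -> str:
    """Return which group is on-site: 'A' on even ISO weeks, 'B' on odd weeks."""
    return "A" if day.isocalendar()[1] % 2 == 0 else "B"

employees = {"Ana": "A", "Ben": "B", "Chloe": "A", "Dev": "B"}

today = date(2020, 7, 6)  # ISO week 28, an even week, so group A is on-site
on_site = [name for name, group in employees.items() if group == office_group(today)]
print(on_site)  # ['Ana', 'Chloe']
```

A real rota would also need to reflect the individual assessments described above, but the core idea is the same: a deterministic rule that keeps the two groups from ever overlapping.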
Reconfiguring your space
Once you have reviewed the needs of your team and reflected on what is required for your business to operate, it’s time to consider the role your space plays in how you move forward. We know the guidance is to keep a ‘1m plus’ distance wherever possible, wear face masks on public transport, and ensure we keep up the high levels of personal hygiene. But what impact does that have on an office?
As teams start to return to the office, you must encourage them to keep their distance. Where possible, you should consider some level of reconfiguration of furniture. One option is to rearrange certain desks in an alternating pattern so that team members aren’t sitting directly opposite or directly adjacent to each other.
As you move forward, it will also be worth reviewing the overarching desk density strategy of the business. There is some argument for reducing the number of desks in favour of larger collaboration settings. With social distancing now reduced to 1m plus, employees needn’t shout across at each other, so this is certainly a viable option.
One-way system
It’s also important to consider how team members move about the space. For example, you may need a one-way system or checkpoints to avoid congestion or unnecessary crossover of people as they move about. Even when in the office, it is worth considering joining larger online meetings virtually to avoid overcrowding of collaboration spaces and meeting rooms.
Protective screens
In addition to the reconfiguration, think about what other furniture solutions might help prepare your space for the return of team members. One great idea is to introduce protective screen solutions to stop the infection from spreading. You have already seen these on petrol station counters, so think the same kind of thing but on office workstations and reception desks.
Introducing a robust hygiene and cleaning process
Though restrictions are easing, we’re still vulnerable at this time, so it’s crucial that hygiene remains a priority. To avoid the spread of infection, one critical step is to limit the sharing of equipment between team members. For example, headsets for online chats, desk phones and even pens should not be shared. Sharing crockery and utensils should also be kept to a minimum and cleaned regularly. Posters and visual aids around the office will help to remind employees about best practice hygiene behaviours.
The second critical step is to introduce a robust hygiene and cleaning regime. Here are some key considerations:
- Ramping up the cleaning schedule – Your employees’ safety is a priority, and fear will quickly creep in if employers are not seen to have a clear, thorough and professional cleaning policy.
- Improving sanitation – Introduce hand sanitising stations around the office. These are inexpensive and will be essential in combating the spread of infection.
- Deep cleaning – Ensure regular and professional deep cleans of the entire office. Additionally, regularly disinfect shared areas and high traffic zones throughout the day.
- Ensuring individuals are cleaning their desks – Before and after use, make sure employees are being diligent by keeping their workstations clean. A clear-desk policy will help employees to keep on top of this.
Setting out clear policies for hygiene and cleaning may even help you to improve hygiene beyond the re-entry stage and reduce the number of employees taking sick leave in the future.
For more important health and safety advice, take a look at the latest government guidelines here.
Decommissioning high-risk areas
In order to adhere to social distancing guidelines and reduce the number of hot spots (areas where team members will naturally congregate or become congested), you may need to reconfigure your space or de-commission high-risk areas in the office. Hot spots can include tea points, print rooms, bathrooms, meeting rooms and collaboration spaces – but we aren’t suggesting you de-commission your bathrooms!
Potential high-risk zones include hot desks and agile areas, so these should be at the top of your decommissioning list. Consider replacing ‘hot desks’ with ‘dedicated desks’, where each team member books or is allocated their own desk for the day. After they have left for the day, the desk can be cleaned thoroughly.
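As a minimal sketch of how such a dedicated-desk booking rule might look (a toy in-memory model with made-up names, not any particular booking product):

```python
# Toy desk-booking model: at most one employee per desk per day.
from collections import defaultdict
from datetime import date

bookings = defaultdict(dict)  # desk_id -> {day: employee}

def book_desk(desk_id: str, day: date, employee: str) -> bool:
    """Reserve a desk for one day; refuse the booking if the desk is taken."""
    if day in bookings[desk_id]:
        return False  # already allocated - desk is cleaned and freed the next day
    bookings[desk_id][day] = employee
    return True

print(book_desk("D-01", date(2020, 9, 7), "Asha"))  # True
print(book_desk("D-01", date(2020, 9, 7), "Ben"))   # False: same desk, same day
```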
Additionally, when it comes to kitchen areas, consider whether you can set up smaller tea and coffee points that will help reduce the congestion of people in one area. You may also want to implement a one-way traffic system to help with the movement around the office.
What does the future hold?
There is no shortage of discussion around what the future of the office will be. Like many others, we have also been talking about how the office has changed over the last few decades and what it will look like in years to come.
The biggest change we are expecting to see is a reduction in physical presence in the de-densified office. One reason for this is that businesses are likely to be met with more requests from employees to work from home, so don’t be surprised if agile working becomes more commonplace.
If you are unfamiliar with agile working, it essentially gives employees the freedom and flexibility to work when and how they choose, whether that be in different areas of an office or remotely. With a greater demand for a better work-life balance looking likely after months of working from home, agile working is an ideal solution.
Given the pandemic, we may also see businesses adopt a robust cleaning schedule. With this in place, booking systems could be required for workstations, booths or meeting rooms. Again, this will likely reduce the number of people in the office at one time and encourage businesses to transition into a more agile way of working. | https://wearetwo.com/a-guide-to-returning-to-the-workplace/ |
Sound masking technology coupled with thoughtful design mitigates noise at GSA headquarters at 1800 F Street NW, Washington, D.C. Image © Gensler
How do you create a work environment that is quiet for individual focus work yet promotes collaboration?
The General Services Administration (GSA) is the agency responsible for real estate for the federal government. In a recent renovation of its headquarters at 1800 F Street in Washington, D.C., GSA saw an opportunity to create a more dynamic work environment. They knew the renovation could also be a powerful pilot project to test new ideas and demonstrate new strategies to the federal agencies they serve.
GSA administrators desired a more open and energetic workspace reflective of GSA’s sense of transparency and shared organizational culture. So they began exploring shifting towards an open-plan office to support their already highly mobile employees and encourage more collaboration between functions. The existing layout employed a series of cubicles within enclosed suites, whose tall partitions separated employees from one another and impeded serendipitous collaboration between coworkers. They weighed the ramifications of a more open and transparent office layout, hoping such a transformation would encourage workers to become even more mobile and to collaborate with colleagues on a more regular basis.
What worried GSA employees was fear of excess noise and distraction. They didn’t want an open office plan that would detract from their ability to think and concentrate when they worked independently. This fear of distracting background noise is a common dread associated with open office plans. There’s no shortage of articles and reports stereotyping open offices as noise-filled environments which inhibit productivity. Some of these reports make working in an open office seem akin to setting up a desk in the middle of a crowded airport terminal! The truth about noise and open offices is not as dramatic. Creating a workplace that is open, fluid, and relatively distraction free is well within the realm of possibility, as GSA employees would soon learn.
After considering various planning options, GSA boldly decided to move forward and break the long established mold of government workplaces. They created an environment that is open, transparent, and flexible, but also efficient. This meant fewer walls, lower workstation panels, glass walls replacing hard walls, and most employees not having a dedicated desk. But from the onset, GSA administrators made clear the priority to mitigate excess noise in the new headquarters. For GSA, it was a matter of enabling employees to effectively perform tasks deemed critical to their everyday work.
Prior to design, Gensler had profiled the GSA headquarters’ functions using the firm’s proprietary Workplace Performance Index (WPI) as well as GSA’s workstyle survey findings. According to the WPI data we collected, GSA employees spend 52 percent of their time in focus work, which requires concentration. GSA employees also rated focus work as the most critical work activity to their individual job performance. Employees performed the vast majority of this focus work (89 percent) at their workstations or in their offices. Complaints of noise distractions in the old layout were common; employees reported it hard to concentrate when others were having conversations around them. Most employees kept their doors open to remain accessible to one another, but noisy hallways led to even more distractions. GSA employees told us they wanted a workspace which could augment concentration rather than detract from it. They told us fewer everyday distractions might enhance the quality and quantity of their work by as much as 38 percent.
Mitigating noise distraction requires the combination of smart design and planning strategies and the targeted use of sound masking technology. In the GSA headquarters, noisy public functions and the shared conference center were zoned separately from the quieter work areas for each sub-agency. Each floor was acoustically zoned to cluster noisy activities such as coffee and conference rooms together and separate them from zones where people are working in small teams or as individuals. Great care was taken to locate functions dealing with confidential or sensitive information in the enclosed historic zones and away from major traffic patterns. Enclosed focus rooms and small huddle rooms for two to three people were located in close proximity to open plan areas so as to provide alternate places to take a phone call, work without distraction, or meet with others. These enclosed rooms were then located strategically to block and/or absorb sound in the open office areas.
To revamp GSA’s headquarters into a high performing acoustical workplace, Gensler partnered with ADI Workplace Acoustics. According to the company’s founder Steve Johnson, ADI specializes in “speech privacy in the workplace.” In other words, ADI uses sound masking to cover unwanted and distracting noises. This approach creates workplaces where the acoustics enable focus. As Steve explains, sound masking systems fill a workspace with a soft background sound that covers the voices of neighboring conversations. The “creepy quiet” of most offices actually distracts workers and allows conversations to be heard at great distances. The masking sound reduces individual worker’s radius of distraction: the distance at which sounds made by others cause a disturbance. Keeping the radius of distraction to a minimum allows conversations conducted within one collaborative group to have less impact on nearby workstations. This enables workers to focus and collaborate within the same space.
Design performance can be measured. To achieve the perfect level of ambient noise within the GSA headquarters, Steve measured the radius of distraction from various workstations before and after the addition of sound masking technology. Prior to the renovation, federal workers could hear distracting sounds at distances of 50 feet and greater. Post renovation, the intelligibility of other people’s speech fades at a distance of only 15 feet. This dramatic improvement has allowed GSA employees to not only focus better, but feel that they can talk without distracting their neighbors. Striking this balance to enable both focus and collaboration is a crucial and tangible measure of workplace effectiveness.
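To put those before-and-after figures in perspective, here is a back-of-the-envelope comparison (treating the radius of distraction as a simple circle, which is an idealization of how sound actually propagates):

```python
# Idealized comparison of the quoted distraction radii using a circle-area model.
import math

def distraction_area_sqft(radius_ft: float) -> float:
    """Floor area within which neighboring speech remains intelligible."""
    return math.pi * radius_ft ** 2

before = distraction_area_sqft(50)  # roughly 7,854 sq ft before the renovation
after = distraction_area_sqft(15)   # roughly 707 sq ft after sound masking
print(f"Distraction area reduced by {1 - after / before:.0%}")  # about 91%
```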
But the end-users of the space are the most important gauges of design performance. I recently ran into one of the GSA federal workers who told me a terrific story. He said, “Prior to moving back into 1800F, I was absolutely convinced that I wasn’t going to like the renovated space. It seemed too open and promoted too much mobility. But by the end of the first week, I was hooked on a new way of working. I love being able to get up, and move to a zone when I want to focus quietly and not be disturbed, and other times, sit in a more collaborative zone when I want the energy and buzz of my team around me. I was convinced that I wasn’t going to like the new space, but now I wouldn’t want to work any other way!”
GSA now has an open office plan that nudges employees to collaborate and work with one another while simultaneously giving each person the necessary privacy to work on their individual tasks when necessary. In GSA’s quest for more openness, employees actually gained more acoustical privacy in the process. That is the best of both worlds.
Note: This is the second in a series of blogs about GSA’s headquarters at 1800 F Street. The first blog can be read here: “GSA Breaks the Mold for Government Workplace”. This project will be presented at the CoreNet Global Summit in DC in October, 2014.
Janet Pogue is a Principal in Gensler’s Washington, D.C. office. She led the firm’s recent 2013 Workplace Survey research and is a frequent writer and speaker on the critical issues affecting the design of high performing work environments. Contact her at [email protected].
Steve Johnson is the founder of ADI Workplace Acoustics. He uses his unique background in acoustics, audio and construction to provide high performing acoustical workplaces. Steve regularly presents a CEU accredited educational program to teach design professionals the key steps to building a successful acoustical workplace. Contact him at [email protected]. | http://www.gensleron.com/work/2014/4/17/best-of-both-worlds-quiet-and-collaboration-at-gsas-headquar.html?printerFriendly=true |
- As monotonous as this pandemic has felt from day to day, it’s been a major disruptor that has already spurred big shifts in all facets of life. Particularly in our workplaces.
- The post-pandemic world invites designers to push the envelope on Agile Design principles – creating positive change for employers and employees alike.
- In the future, spatial requirements will change throughout the day and an effective office must accommodate the full gamut.
This article was written by Inger Bartlett and was originally published in Work Design Magazine.
Change – it’s the one thing you can count on in life and I’m sure we will agree that the last year has brought more than its share. Even as the situation finally improves there is no end in sight to the adjusting, transitioning, transforming and yes, that dreaded “pivoting.” As monotonous as this pandemic has felt from day to day, it’s been a major disruptor that has already spurred big shifts in all facets of life. Particularly in our workplaces.
We can still only guess at how COVID-19 will impact office design in the long term, but the most important lesson to take from the past 15 months is that adaptability is even more crucial than ever. Our workspaces should be ready for anything and everything – from fluctuating needs and preferences for remote work, to future pandemics.
The good news is that we’ve actually been building more and more flexibility into our workspaces for years. All signs are indicating that we can expect future workspaces to look a lot like the Agile Offices most of us were already working in – they’ll just be even more adaptable, ready to transform on a dime.
Previously in this series of articles we talked about how the balance of space at the office will shift. More workers will have flexibility in where they work, sticking to the home office for heads-down tasks and heading to the office when they need to collaborate with colleagues and clients. Collaboration space will take over where we no longer need as many desks.
When employees do make purpose-driven visits to the office, they will have a number of tasks to take care of in those hours. Meeting with a client, brainstorming with colleagues, reviewing work, checking in with superiors. And if spending a whole day on site, they will likely need some desk time to work alone. Their spatial requirements will change throughout the day and an effective office must accommodate the full gamut.
With so much flexibility required, walls become a liability. Open floorplans will be more appealing than ever. Designers will frequently specify temporary and modular solutions – such as sliding or retractable wall systems, moveable partitions and even furniture such as sofas – to define space. We likely won’t use as many fixed elements or built-in furnishings. Even anchor pieces such as a reception desk might need to slide or roll out of the way to make more room for any number of activities.
You might think that with more workers staying home to deal with focused work, we’ve solved the acoustics issue – the most common complaint against our modern open offices. But with all that collaboration happening, we need to give even more consideration to buffering noise and minimizing auditory distractions. Multiple meetings will need to happen in the same open space. And there will still be people doing individual work in the office – whether because they are there to spend part of the day in meetings, or to get away from distractions they may have at home.
Adding acoustic properties to those partitions we mentioned above – as well as to fixed walls, ceilings, and even furniture – is a simple solution to increase adaptability. Greenwall features also offer built-in acoustic properties, alongside inherent wellness benefits that will be invaluable in creating comfortable, inviting environments that workers want to return to, if only part of the time.
We’ll be moving around in our offices more often and using furniture that can move with us is a simple way to bake in flexibility. For instance: a high-backed sofa light enough for an employee to swing around, transforming a corner into a huddle space. Or a rolling whiteboard that helps get the creative juices flowing and then spins to share ideas with another group. How about an upholstered chair that you can drag out of a breakout zone at the end of a meeting to work on your laptop in a quiet corner? Let’s face it, we’ve all grown used to spending part of our work day curled up somewhere more inviting than a task chair and these types of furnishings also take some of the rigidity out of the office to create a more relaxed environment that fosters innovation and collaboration.
Even workstations should be more moveable in the future and I predict we will see some interesting innovation on the product front. With hybrid work, desktop computers will fall out of favour, as will those bulky wired phones – calls are already more likely to come in on your cell or via an app like Microsoft Teams. With more employees carrying laptops into the office, cable management is simple – a spot to plug in and recharge is all that’s needed. More mobile desking is a particularly useful proposition as companies may find they need to add, subtract, or otherwise reconfigure desks as they establish a WFH profile that best suits their individual needs.
Speaking of calls, those Zoom meetings aren’t going anywhere just because it’s safe to be in the same space. Virtual meetings will be a regular thing in the office, with any number of people on site for the call. Larger meeting spaces already typically feature AV capabilities, but some of those may need to be split into smaller spaces. Booths with room for one or two people and a laptop will not only support video conferencing, but can also double as a distraction-free space when a high level of concentration or privacy is required.
With staff members no longer working in close proximity each day, the ability to bring people together for all-staff meetings and events becomes essential to building a sense of community and corporate culture. When those larger groups need to convene there are three main spaces to consider: the office café, the main lobby and the Town Hall. These super-sized gatherings aren’t likely to be a frequent occurrence, however, so the space should be designed to serve other purposes on a daily basis – i.e. a reception lounge, a spot for employees to take a break or eat lunch. A room that expands by opening to an outdoor amenity such as a terrace would also be an appealing design feature. Or a central space adjoined by smaller meeting rooms with retractable walls that open to form a single volume is a simple approach that can be employed in most floorplates.
More flexible furnishings come into play in these adaptable gathering areas. Think modular boardroom tables, reconfigurable sofas, and all kinds of elements that roll away when not needed – things that allow space for interesting things to happen, for new possibilities. And that’s what we’re all hoping to find in post-pandemic world, isn’t it? We’re all looking forward to a fresh start, to rebuilding in a better way.
The biggest change the pandemic has brought is in our thinking. We have a new understanding that being productive doesn’t mean being present. That flexibility truly does allow us to maximize potential and work more effectively. It’s a new attitude that is sure to result in more inclusive, diverse and accessible workspaces. And that’s a change we can all embrace. | https://allwork.space/2021/06/the-future-of-work-is-flexible/ |
German internet agency netzkern moved into an excellent office space at the end of 2012. Located in Wuppertal, the company has received the honor of being named the city’s “Company of the Year”.
Netzkern’s new office space is housed in a refurbished loft, which gives them right around 700 sqm. All “Kernkräfte” (as netzkern employees call themselves) are located on a shared floor, where plenty of conference, training and working rooms offer ample space for focused individual work. Meeting rooms and casual seating areas are located near staff workstations for easy access.
I like to see that the office also seems to get a good amount of natural light, and uses glass walls to allow it to pass uninterrupted. But perhaps my favorite area is the cafe – with its foosball table and spectator seating. | https://officesnapshots.com/2013/09/06/netzkern/ |
At Billing Brook we specialise in educating children with autism, both in our autism-specific provision, which caters for pupils from 4-18 years of age, and in the main body of the school. Class sizes in our autism-specific provision are between 6-8; all classes have a teacher and a number of teaching assistants depending on the needs of the pupils. Pupils will remain with their class and teacher throughout the school day.
All staff within the autism-specific provision are extremely skilled in meeting the needs of our pupils; all have attended either internal or external autism-specific training at various levels, and this training is continuously updated.
We understand the difficulties that pupils with autism may experience and create an environment in which they can overcome these difficulties. Pupils have individual workstations, which allow for maximum concentration and support pupils who may find the world a distracting or stressful place. Other facilities include soft play and sensory rooms, a life skills area, secure recreational areas and a bistro area, in addition to a fully equipped food technology suite. | http://billingbrook.northants.sch.uk/Autism/
Because COVID-19 forced many companies to require their employees to work from home, organizations around the world are now thinking about how to bring employees back to their pre-pandemic offices while creating a safe and healthy work environment.
To do this successfully, workplace procedures and customs may look a little different now, and even in the future. Companies will need to carefully evaluate every component of their offices, from furniture arrangements, to shift schedules, to how employees get work supplies.
We’ve put together some tips that can help keep employees safe upon returning to the office. Here’s a few of our top recommendations, and you can also check out the CDC’s guidelines for additional information on how to safely prepare offices:
- Evaluate common areas and group spaces
Identify all the common areas in your office—such as meeting rooms, break rooms, kitchens, cafeterias or waiting areas—where employees or visitors could come in close contact with others. If employees must use these areas, install methods to physically separate employees in these areas, such as glass barriers, or visual cues like tape or signs that indicate 6-feet spacing. Stagger room use times and reduce meeting sizes as much as possible. It’s also a good idea to consider replacing high-touch community items, such as coffee pots, water machines, and snacks, with alternatives like disposable cups or pre-packaged food. Stagger start times and break times as much as possible to avoid overcrowding in common areas like entries and exits.
- Modify employee seating
Adjust employee desks, chairs and other furniture to maintain 6 feet between each employee’s workstation (see the spacing sketch after this list). Install transparent shields to separate employees seated next to each other and arrange reception area chairs or group meeting room chairs to be six feet apart. You may need to remove extra chairs or cover them with tape to ensure people don’t use them. If it’s not possible to maintain social distancing, consider grouping employees and having each group alternate which days of the week they come into the office.
- Take advantage of automation for supply distribution
Automated supply distribution is an effective way to ensure safe, contactless distribution of essential supplies, such as employee laptops, IT peripherals or other industry-specific supplies like PPE equipment. For example, IVM’s automated smart lockers allow for social distancing in the workplace by taking out unnecessary human interaction—employees can quickly get the tools they need from a machine, without talking face-to-face to other employees or passing equipment from one set of hands to another.
- Develop a plan for regular cleaning of surfaces
You should plan to ensure surfaces within the office are cleaned on a regular basis. Routinely clean all frequently touched surfaces in the workplace, such as workstations, keyboards, phones, doorknobs or printers. For disinfecting surfaces, use EPA-registered household disinfectants, or diluted household bleach solutions or alcohol solutions. Provide your employees with their own set of disinfecting wipes so they can regularly clean their workspaces or high touch areas and have plenty of hand sanitizer and antibacterial hand soap available throughout the office. Make sure employees and IT staff are also appropriately cleaning laptops, keyboards, and phones.
- Maximize air circulation within the office
Increase circulation of outdoor air as much as possible within the office. If you can safely open windows, entry ways, or patio doors around the office, doing so can maximize air circulation for employees and those who enter the office. For offices in high rise buildings or that do not have safe access to the outdoors, invest in large fans and discuss with building maintenance how to best circulate clean air.
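As promised in the seating tip above, here is a rough sketch of the 6-foot spacing arithmetic (the room dimensions are hypothetical, and a real layout would also need wall clearance and circulation space):

```python
# Rough capacity estimate for desks placed on a square grid with 6 ft spacing.
import math

SPACING_FT = 6.0

def max_desks(room_length_ft: float, room_width_ft: float) -> int:
    """Count grid positions so that neighboring desks sit at least 6 ft apart."""
    per_row = math.floor(room_length_ft / SPACING_FT) + 1
    per_column = math.floor(room_width_ft / SPACING_FT) + 1
    return per_row * per_column

print(max_desks(30, 24))  # 6 x 5 = 30 desks in an idealized 30 ft x 24 ft room
```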
At IVM, we’ve always been focused on transforming workplaces, and today’s environment is no different. Our business is all about providing safe and efficient supply distribution. We know companies across the globe continue to focus on the health and safety of their employees and ensuring they have what they need to do their jobs safely. If you have any questions or would like to have a conversation, please reach out to us! | https://ivminc.com/how-to-keep-employees-safe-as-companies-return-to-the-office/ |
Not only are companies whittling down their staffs to up their bottom lines; many are also providing their remaining employees with less room to work in.
"There is a real trend among companies to target the amount of rent as a percentage of revenue," says Jerry Gilboe, director of programming services at Cushman & Wakefield Inc. "As a result, a lot of firms are re-evaluating the size of offices and workstations, as well as examining who needs a private office and who does not."
For some New York companies, this closer scrutiny may stem from expensive long-term leases they may have signed in the late 1980s, when, for example, the average Class A midtown rent was as high as $40.66 per square foot, 15% more than it is today. These firms are now trying to either rent cheaper space, renegotiate their leases or consolidate, or they're staying put and subleasing portions of their space. Lou Switzer, chief executive of Switzer Group, a Manhattan interior design firm, estimates that office space needs have shrunk 20% to 25% over the past five years.
Most firms want to be as flexible as possible when downsizing occurs. Both the number of different office layouts and the overall size of an individual office are decreasing. Even within law firms, where prestige is linked to size, firms are now doing things like making two summer associates share one partner-sized office, and storing extra furniture when it is not in use.
Consolidating office space
Since Wilton, Conn.-based Dun & Bradstreet Corp. began consolidating in 1993, shrinking from 13.3 million square feet to its current 11 million, it has reduced its number of office layouts from eight to four. “It’s easier to move people than walls,” explains Mike Bell, director of corporate real estate for the firm.
Walls are tumbling down as well. More companies are using so-called open landscapes, design configurations that use paneled workstations and therefore limit the number of walled offices. These landscapes are not the high-walled mazes prevalent in the past; they use low-level panels that not only allow for more optimum lighting, but also permit employees to communicate. To cut down on the noise level, firms are investing in acoustical systems.
"We're installing a lot of masking sound systems that inject a neutral sound into the environment," explains John Lijewski, a principal at Perkins & Will, a Manhattan architecture firm.
But while businesses may spend a little on extras to make the limited space more comfortable, the glitz of the 1980s is gone. "Companies want to look lean and mean to everyone who visits the space," says Karen Dauler, manager of the interior division at Manhattan architects Kohn Pedersen Fox Associates. As a result, paint is replacing vinyl wall coverings, and utilitarian carpet tiles are replacing marble floors.
Superfluous conference rooms and filing rooms are also out. Conference rooms that once may have been empty for days at a time are now being shared and reserved by different departments. For example, Equitable Life Assurance Co., which is consolidating six offices into one at 1290 Sixth Ave., has designed an entire floor of meeting rooms; the rooms come equipped with folding walls so that either several smaller rooms or one large conference area can be created.
More efficient furniture
Many redesigned firms are also buying more efficient furniture, like P-shaped desks that let executives hold small meetings around them, forgoing the need for a separate conference room. There are also tables that can be wheeled from one workstation to another.
International Business Machine Corp.'s Cranford, N.J., facility, a consolidation of four offices, has a "Main Street" of file cabinets that cuts through the center; each employee gets assigned two drawers.
Some companies are making departments share filing rooms and maximizing the number of cabinets in a room with compressed shelving. And other firms are cutting down on expensive space by sending nonessential files to an off-site storage facility that can deliver requested files within 24 hours.
Although some technology requires an increased amount of space, there are laptop computers, shared laser printers, and faxing through computer networks rather than large fax machines, all of which can help limit bulk. In addition, some work spaces get filled with as many people or functions as possible.
For example, secretaries now often work in teams for many bosses, and their areas now accommodate more people, more in-out boxes and more computer equipment. Even pantries that were used exclusively for coffee breaks now often house copying machines and printers.
However, with all the technological and cultural shifts behind the trend toward streamlined offices, some things are slow to change. Strategies like hoteling (making employees who often work off-site reserve offices on the days they need them) can work only for a limited number of firms. And although many claim to embrace philosophies like "Who you are is not as important as what you do," most chief executives are reluctant to give up their giant corner offices.
"It's like that old joke, `Where does the 800-pound gorilla sleep?' " says Robert Ladau of Hillier Eggers Architects. The answer: "Wherever he wants." | https://www.crainsnewyork.com/article/19960422/SUB/604220737/takeout-commercial-real-estate-lean-design-form-follows-function-downsized-firms-adapt-to-trimmed-spaces |
The workplace is a home base, a source of identity and inspiration, a room for interaction, but also a place to concentrate. In order to enable Unic employees to work in an efficient and effective manner, the company management introduced the strategic initiative Workplace2015+ in 2014. A concept of use was prepared, and it will be implemented in the new office spaces of Unic Zürich in Altstetten as an example. The relocation will take place on 9-11 March 2016 – the preparations are at full tilt.
Sandro Dönni, Senior Consultant at Unic, who had already participated in workplace-related projects for his previous employers, is the project manager of the Workplace2015+ initiative. We have taken advantage of the calm before the relocation storm and asked him about the goals and cornerstones of the initiative.
Sandro, you have experienced various work environments: which factors influence work at Unic in particular?
Sandro Dönni: The primary component of Unic’s daily routine is project work. Various characteristics of our everyday work are a consequence of that.
“We enhance the success of our customers in e-Business.”
This is our mission. In order to offer the best service possible to our customers, our project teams are interdisciplinary - they are composed of employees from different organisational units. Content- and time-related considerations are very important in such a case, so it is a challenge and at the same time a success factor to bring various specialist perspectives together with a view to the project goals. Many employees work simultaneously on multiple projects and are therefore members of different project teams. It is not always easy to reconcile various profiles in the project teams and to fulfil differing requirements. The project manager might find it essential that team members are immediately available, but the developer would like to work on an application with full concentration and without being disturbed. Because of project-oriented work, each day is different, and work situations and team constellations can change within a single day. The dynamics of the project business cause a tension between individual work with full concentration and collaborative interaction, and such shifts often cannot be easily planned.
All these factors affect the productivity of each individual, the productivity of the team and as a consequence - of the whole company. At the same time they are responsible for both the burden and the satisfaction of our employees.
What goal underlies Workplace2015+ concept?
Unic launched Workplace2015+ initiative to provide optimum support to the employees in their activities. The following Vision Statement was prepared together with the company management:
- “Workplace 2015+” encompasses the future arrangement of our work environment, including the workplaces at Unic Branches.
- The goal is to create such a work environment where our employees can feel well and work productively. At the same time it will take into account differences in work contents as well as various modes of conduct and forms of work.
- Workplace 2015+ enables, on the one hand, individual work with full concentration and, on the other hand, collaboration (both on-site and distributed) between Unic employees but also with customers and partners.
- The facilities in the Branches facilitate interaction between people.
The Vision Statement is the basis for the arrangement of the office spaces. The space arrangement directly affects the organisation and the structures.
“Space influences our behaviour and our culture.” Sandro Dönni
That is why we first analysed how we work today and what the needs of various roles are, together with their individual impacts on the workplace. This was the basis for preparing a needs-oriented concept of use showing how we wish to work at Unic in the future.
What were the specific steps of the analysis?
We employed a participative process in order to accommodate the needs of various stakeholders. We checked what had proven itself, what the current problems were, and what developments should be anticipated. Great significance was attached to the inclusion of stakeholders and employees - this was reflected in the organisation of a sounding board and in the choice of measures for the analysis stage: plenty of analysis methods were applied so that we were able to identify as many requirements as possible:
- Online survey for all employees
- Qualitative focus groups with team leaders and project managers for discussions about various ideal zone concepts
- Individual interviews with key persons
- Open collection of ideas using our internal collaboration tool, Yammer
These methods answered the key questions of the analysis stage, which allowed us to draw well-grounded conclusions and prepare a targeted concept.
What are the cornerstones of the concept of use established as a result of the analysis?
The concept of use combines a zone concept with an open space arrangement. Based on the various needs, the zone concept defines a number of areas: apart from individual workplaces, there is a quiet zone (library) for undisturbed, concentrated work; zones for interactive collaboration; telephone and mini meeting rooms for up to three people; an activity zone; and an entrance and food court zone, which serves as a centre for interaction, coffee or lunch together, and contact with customers. Moreover, organisational units are located so that the paths between units and roles with many points of contact are as short as possible.
Great emphasis was put on creating many collaboration rooms so that employees are not exposed to excessive noise at their workplaces. The 27 collaboration rooms are arranged differently so that an appropriate room can be chosen depending on context and preference. About every third room is suitable for one or two people and intended for ad hoc meetings or telephone conversations; these rooms do not have to be booked and can be used spontaneously. The room types include:
- Refuge rooms for unplanned spontaneous collaboration
- Project rooms that can be used by a project team for many weeks and can be booked
- Meeting rooms of various sizes - many of them are arranged not in a classical manner with tables and chairs but with high desks, chairs only, sofas, etc.
- The quiet room comprises the library and is a place intended for quiet and concentration
- The activity room provides an opportunity to engage in games and sports
The new office spaces in Altstetten, which had not yet been fully built out, gave us the freedom to implement this concept of use. Now we are excited about the daily routine after the relocation, and we will see day by day where the concept proves itself and where adjustments are necessary. It was very important to us to incorporate ergonomic factors such as air, light and noise into our plans, as they are essential for health, satisfaction and efficiency.
What benefits does the new Zürich Branch bring to Unic?
With the open space arrangement we facilitate open communication across organisational units and projects. 93% of the answers in the internal online survey indicate that this cross-communication will become increasingly significant in the future. By sharing a single space in the new Branch we move closer together and, thanks to the ample number of project rooms, also closer to our customers, who frequently work together with us.
The new Branch, developed according to the new concept of use, supports the employees in their respective tasks and individual work styles, so that they can work efficiently, with satisfaction and in a healthy environment. Owing to the diverse work zones, employees can find a suitable work environment for any work situation and any specific task. At the same time, the quality of the projects and the productivity of each person, each team and each organisational unit are supported.
The large food court zone becomes a centre for informal and spontaneous interaction and “coffee conversations,” which facilitates interdisciplinary exchange independently of projects and organisational units. We regard this as an important factor for joint development and innovation in our work.
Hence, the Workplace2015+ concept and the new Branch in Zürich constitute an investment in our employees and also in our customers: they benefit from satisfied, healthy and productive project partners, who take care of their needs and projects with full commitment. | https://www.unic.com/en/magazine/experts-blog/2016/with-work-zones-towards-an-efficient-and-satisfying-daily-routine-at-unic |
When approaching the task of designing a new office, we need to remember that designing is not synonymous with decorating. Design should change employee behavior – not just aesthetics.
If you examine all of the tasks that happen within a workplace, they should all boil down to one of the following three modes: Focus, Share, and Team.
1. Focus is an individual work mode that occurs within a primary workspace that supports concentration and reduces interruptions.
2. Share is a collaborative work mode that can occur in individual or group spaces and centers on the casual exchange of ideas with a small number of colleagues.
3. Team is a group work mode related to specific work goals that occurs in formal and informal meeting spaces.
Optimize the layout and location of workstations and offices to enhance visual access.
Create a variety of work activity zones to encourage chance encounters. | https://www.featherlitefurniture.com/blog/designing-ease-flow-people-ideas/ |
IEP-Legrand presents integrated solutions for supplying and fitting out user workstations in any kind of office building infrastructure, with solutions that can be fully adjusted to create well-organised and functional offices and to save time and money during installation or future reconfiguration while still meeting users’ connectivity and mobility needs.
Throughout this Workstation Solutions Guide, we present all the solutions for supplying and fitting out workstations in any kind of building infrastructure: raised access floor, concrete floor, false ceiling and wall.
The “Case studies” section provides workstation examples for reception areas, “open space” offices, meeting rooms and individual work spaces. The “Power and data distribution and connection” section presents all systems for routing power and data circuits.
The “More information” section shows all features and benefits, plus technical and implementation information on the cable management systems. | https://www.iep.com.my/product/workstation-solutions/ |
Shalem O, Sanjana NE, Hartenian E, Shi X, Scott DA, Mikkelsen TS, Heckl D, Ebert BL, Root DE, Doench JG, Zhang F. Genome-scale CRISPR-Cas9 knockout screening in human cells. Science. 2014 Jan 3;343(6166):84-7. doi: 10.1126/science.1247005.
As seen in the primary and review articles, there are many functional applications for genome-scale CRISPR-Cas9 knockout (GeCKO) screening. I see that both articles were published relatively recently (summer 2014), but have any more applications or techniques been discovered since then that could benefit either cell viability in cancer and pluripotent stem cells (as studied in the primary article) or therapeutics for another genetic disease?
The researchers in the primary article used CRISPR/Cas9 to identify normal genes whose loss (knockout) is involved in resistance to a drug that fights cancer. This could potentially be used to design new anti-cancer drugs. Could the CRISPR/Cas9 system also be used in the opposite way, to specifically knock out mutated genes in cancer cells that are critical for cancer proliferation, thus killing only cancer cells? What are the drawbacks of this method?
In the supplemental article, the authors describe several new applications in research, medicine and biotechnology, such as thorough monitoring of chromatin states or the ability to 'rearrange the three-dimensional organization of the genome'. Given that the supplemental article was published after the primary article, could the authors use some of these new technological advances to further improve their GeCKO technique? Would these advances allow more thorough tests to be performed?
The potential to use CRISPR/Cas9 for human genetic modification seems like one major ultimate goal of its development. What are the ethical concerns if this becomes more viable? Could this be used for so-called "designer babies," and if so, what could it mean for future generations?
The second article says that this can be used to "correct harmful mutations in the context of human gene therapy". What kinks, if any, need to be worked out before this can be used in humans?
The primary article states that CRISPR-Cas9 can be used to target many different areas of the genome (promoters, enhancers, introns, and intergenic regions). Given its widespread applicability compared with RNAi technology, will CRISPR technology become the norm soon? Is there a reason RNAi technology might still be useful in the lab setting?
What is the specificity of the GeCKO system? Do you think it could be refined to introduce mutations at specific locations in a gene? What are the restrictions on this kind of specificity?
In the supplementary article, the authors discuss the phylogeny of the Cas9 gene and the diversity of its orthologs. How does studying the diversity of this gene help improve the current technology?
The primary article mentions that there are off-target effects when using RNAi; do you know what these off-target effects may be? Likewise, are there downsides to using CRISPR that should be considered when choosing which experimental technique to use to generate a knockout?
The review article states, "Additionally, Cas9 could be harnessed for direct modification of somatic tissue, obviating the need for embryonic manipulation as well as enabling therapeutic use for gene therapy." I find this insanely interesting. Do you think it could be applied to all types of manipulation, i.e. designer babies (eye, hair color, height, interests, abilities, etc.)? And if so, is there ever concern for adverse effects resulting from this manipulation, or any manipulation in general?
The first article did a good job comparing RNAi to the new CRISPR technology. Do you think that RNAi will be replaced by CRISPR technology for producing knockouts, or do you think each will have its own uses in future experiments?
Although the authors show how accurate this new CRISPR technology is, as well as how extremely useful it could become in research and perhaps even clinical settings, are there concerns about the cost of using this sort of technology, or other drawbacks?
It seems as though lentivirus administration is a great way to introduce the Cas9 system into cell lines. What sort of challenges will be faced when trying to transfer this to whole organisms, such as when using Cas9 for gene therapy, where the immune system may interfere with administration?
The primary article states that they screened for genes whose loss is involved in resistance to vemurafenib. Why did they choose to study this specific therapeutic?
The review article discusses a double-nicking approach to improve Cas9 target recognition fidelity. Are there any types of off-target modifications that this does not address? Additionally, are there other methods to minimize off-target effects?
Is there a specific advantage of lentiviral delivery of Cas9 as a method of gene therapy compared with using an adenoviral or adeno-associated viral vector to deliver Cas9?
Though the CRISPR-Cas9 system has clearly been a much better option than RNAi for generating knockouts, what are the drawbacks of this system? Can you compare the drawbacks of lentiviral vectors with those of the alternatives? | https://genetics564.weebly.com/blog-2015/feb-24th-high-throughput-genomics |
Researchers at Duke University have shown that a single systemic treatment using CRISPR genome editing technology can safely and stably correct a genetic disease—Duchenne muscular dystrophy (DMD)—for more than a year in mice, despite observed immune responses and alternative gene editing outcomes.
The study appears online on February 18 in the journal Nature Medicine.
In 2016, Charles Gersbach, the Rooney Family Associate Professor of Biomedical Engineering at Duke, published one of the first successful uses of CRISPR to treat an animal model of genetic disease with a strategy that has the potential to be translated to human therapy. Many additional examples have since been published, and several genome editing therapies targeting human diseases are currently in clinical trials, with more on the way.
Gersbach’s latest research focuses on a mouse model of DMD, which is caused by the body’s inability to produce dystrophin, a long protein chain that binds the interior of a muscle fiber to its surrounding support structure.
Dystrophin is encoded by a gene containing 79 protein-coding regions, called exons. If one or more exons are disrupted or deleted by an inherited mutation, the chain does not get built, causing muscle to slowly shred and deteriorate. Most patients are wheelchair-bound by age 10 and don’t live beyond their 20s or early 30s.
Gersbach has been working on potential genetic treatments for Duchenne since 2009. His lab was one of the first to begin focusing on CRISPR/Cas9, a modified version of a bacterial defense system that targets and slices apart the DNA of invading viruses. His approach uses CRISPR/Cas9 to snip out dystrophin exons around the genetic mutation, leaving the body’s natural DNA repair system to stitch the remaining gene back together to create a shortened—but functional—version of the dystrophin gene.
“As we continue to work to develop CRISPR-based genetic therapies, it is critical to test our assumptions and rigorously assess all aspects of this approach,” Gersbach said. “A goal of our experiments was to test some ideas being discussed in the field, which will help us understand the potential of CRISPR to treat genetic diseases in general and Duchenne muscular dystrophy in particular. This includes monitoring the long-term durability of the response in the face of potential immune responses against the bacterial Cas9 protein.”
The first eight-week study demonstrated that functional dystrophin was restored and muscle strength increased. It did not, however, explore the long-term durability of the treatment.
“It is widely believed that gene editing leads to permanent gene correction,” Gersbach said. “However, it’s important to explore theoretical possibilities that could undermine the effects of gene editing, such as losing treated cells or an immune response.”
The goal of the new study was to explore factors that could alter the long-term effects of CRISPR/Cas9-based gene editing.
Christopher Nelson, the post-doctoral fellow in Gersbach’s lab who led the work, administered a single dose of the CRISPR therapy intravenously to both adult and newborn mice carrying a defective dystrophin gene. Over the course of the following year, researchers measured how many muscle cells were successfully edited and what types of genetic alterations were made, as well as the generation of any immune response against the bacterial CRISPR protein, Cas9, which acts as the “scissors” that makes cuts to the genome.
Other studies have reported that the mouse immune system can mount a response to Cas9, which could potentially interfere with the benefit of CRISPR therapies. Several groups have also reported that some people have preexisting immunity to Cas9 proteins, likely because of previous exposure to the bacterial host.
“The good news is that even though we observed both antibody and T cell responses to Cas9, neither appeared to result in any toxicity in these mice,” said Nelson. “The response also did not prevent the therapy’s ability to successfully edit the dystrophin gene and produce long-term protein expression.”
The results also suggested approaches to address potential challenges, should they arise in the future. For example, the researchers observed that when two-day-old mice without fully developed immune systems were treated intravenously, no immune response was detected. The CRISPR genome editing remained stable and, in some cases, even strengthened over the course of a year. One could imagine delivering the therapy to infants as a method of circumventing or modulating an unwanted immune response.
Gersbach and Nelson acknowledge, however, that the mouse immune system often functions quite differently from the human immune system. And newborn screening for DMD is not currently widely performed; most Duchenne diagnoses occur when children are three to five years old. To address this challenge, Gersbach said suppressing the immune system during treatment may be a viable approach.
The researchers are also investigating potential strategies to restrict the expression or delivery of Cas9 to only the muscle cells for short durations, which may lessen immune detection.
“We were pleased to observe that all the mice were doing well a year after treatment, but our results show that there needs to be more focus on the immune response as we move toward larger animal models,” Nelson said.
Nelson and Gersbach have previously investigated the potential of off-target editing by CRISPR/Cas9 to unintentionally modify other sites in the genome and reported minimal activity at likely off-target sites. Other recent studies, however, have reported that CRISPR can sometimes make genetic edits at the correct site but not in the intended manner. For example, some studies have shown that CRISPR can cut out genetic sections much larger than intended or that pieces of DNA can embed at the site of the cut. These types of edits had previously been unreported in genome editing studies because the methods being used only detected the intended edit.
To comprehensively map all the edits occurring in the dystrophin gene, Nelson used a DNA sequencing approach that agnostically reports any type of edit. Surprisingly, there were many types of edits being made in addition to the intended removal of the targeted exon, including a high level of insertion of DNA sequences from the viral vector encoding the CRISPR/Cas9 system.
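To make this concrete, here is a minimal, hypothetical sketch of how reads spanning the cut site could be binned by edit type once sequencing is unbiased with respect to outcome. The signature sequences and reads below are invented for illustration and are not from the study; real analyses operate on aligned high-throughput sequencing data.

```python
# Toy classifier for on-target edit outcomes at a CRISPR cut site.
# All sequences are hypothetical placeholders, not from the study.
from collections import Counter

INTENDED_JUNCTION = "ACGTTGGGATCC"  # assumed junction created by the intended exon deletion
VECTOR_SIGNATURE = "TTGGCGCC"       # assumed fragment of the viral vector sequence

def classify_read(read: str) -> str:
    """Assign a read to the intended edit or an alternative sequence change."""
    if VECTOR_SIGNATURE in read:
        return "alternative: vector integration"
    if INTENDED_JUNCTION in read:
        return "intended exon deletion"
    return "alternative: other (indel / large deletion / unedited)"

reads = [
    "AAACGTTGGGATCCTT",  # carries the intended deletion junction
    "AATTGGCGCCAAGGTT",  # carries vector-derived sequence
    "AAACGTTGAAATTTCC",  # neither signature: some other outcome
]
print(Counter(classify_read(r) for r in reads))
```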
Depending on the type of tissue and the dosage of CRISPR delivered, as many as half of the on-target edits resulted in these alternative sequence changes. Although this result was surprising, the unintended sequence changes do not appear to impact the safety or efficacy of this CRISPR/Cas9 gene editing approach for DMD. | http://besthealtharticle.com/personal-health/single-crispr-treatment-provides-long-term-benefits-in-mice/ |
Various vaccines based on different types of RNA are now under development to fight emerging SARS-CoV-2 variants, bringing new hope for ending the long-standing pandemic.
09-28-2021
Regulating Cas Proteins: A Chemical Toolbox Aiding Genome Editing
Off-target effects remain a major concern in CRISPR-mediated immunotherapies. Read on and learn how a chemical toolbox can be developed in an effort to overcome this problem.
09-16-2021
Fight the Pandemic: Super Antibodies Coming to the Rescue
The development of numerous vaccines against SARS-CoV-2 worldwide within one year of the pandemic is unprecedented and undoubtedly a huge accomplishment in human history.
08-18-2021
Promoting Anti-cancer Immune Responses Using Lethally Irradiated Tumor Cells
The study reveals the capacity of lymphangiogenic vaccines to promote a potent anti-cancer T cell immunity, and suggests a new protective and therapeutic approach to fight solid tumors.
07-22-2021
Curbing Anti-drug Antibody Enthusiasm Through CAR- and TRuC-redirected Regulatory T Cells
Rana, Jyoti, et al. "CAR and TRuC redirected regulatory T cells differ in capacity to control adaptive immunity to FVIII." Molecular Therapy (2021).
06-09-2021
Unraveling Potential Hosts of SARS-CoV-2 with ACE2 Orthologs
Liu, Yinghui, et al. "Functional and genetic analysis of viral receptor ACE2 orthologs reveals a broad potential host range of SARS-CoV-2." Proceedings of the National Academy of Sciences 118.12 (2021).
05-06-2021
Production of Novel Monoclonal Antibodies Against the SERINC5 HIV-1 Restriction Factor
Molnar, Sebastian, et al. "Novel monoclonal antibodies to the SERINC5 HIV-1 restriction factor detect endogenous and virion-associated SERINC5." MAbs. Vol. 12. No. 1. Taylor & Francis, 2020.
03-23-2021
Base Editor Screens, a New Approach for Readily and Scalably Functionalizing Genetic Variants
Hanna, Ruth E., et al. "Massively parallel assessment of human variants with base editor screens." Cell 184.4 (2021): 1064-1080. | https://www.genscript.com/learning-center/research-digest?src=pullmenu |
Despite promising clinical results in a small subset of malignancies, therapies based on engineered chimeric antigen receptor and T-cell receptor T cells are associated with serious adverse events, including cytokine release syndrome and neurotoxicity. These toxicities are sometimes so severe that they significantly hinder the implementation of this therapeutic strategy. For a long time, existing preclinical models failed to predict severe toxicities seen in human clinical trials after engineered T-cell infusion. However, in recent years, there has been a concerted effort to develop models, including humanized mouse models, which can better recapitulate toxicities observed in patients. The Accelerating Development and Improving Access to CAR and TCR-engineered T cell therapy (T2EVOLVE) consortium is a public–private partnership directed at accelerating the preclinical development and increasing access to engineered T-cell therapy for patients with cancer. A key ambition in T2EVOLVE is to design new models and tools with higher predictive value for clinical safety and efficacy, in order to improve and accelerate the selection of lead T-cell products for clinical translation. Herein, we review existing preclinical models that are used to test the safety of engineered T cells. We will also highlight limitations of these models and propose potential measures to improve them.
Keywords: immunotherapy
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See http://creativecommons.org/licenses/by-nc/4.0/.
Introduction
Adoptive T-cell therapy, which relies on the infusion of tumor-reactive T cells that can recognize and kill malignant cells, has demonstrated remarkable efficacy in several advanced-stage cancers. This therapy requires primary human T cells to be genetically modified to express tumor-specific receptors that consist of either a T-cell receptor (TCR) or a chimeric antigen receptor (CAR). TCRs are heterodimeric glycoproteins composed of TCR-α and β chains associated with the CD3 complex, able to recognize target antigens in the context of a specific peptide–major histocompatibility complex (MHC). CARs, on the other hand, are synthetic receptors consisting of an MHC-independent antigen-binding moiety commonly derived from a tumor-specific monoclonal antibody, fused to an intracellular signaling region, mainly composed of the CD3ζ chain and costimulatory molecules derived from CD28 or 4-1BB, although other domains are currently being tested.1 Notwithstanding impressive clinical benefit in a small subset of malignancies, therapies based on engineered T cells are associated with potentially life-threatening toxicities. Importantly, preclinical models have mostly failed to predict these complications in humans, as they were primarily designed for testing efficacy at the time of the first toxicity observation in patients.
Here, we will review the main toxicities associated with engineered T-cell therapy and preclinical models currently used to study these adverse events. Recently, many efforts have been dedicated to the establishment of more predictive and reliable models. We will thus highlight the advantages, as well as the limitations, of current models and propose measures to have preclinical models fit for purpose with respect to engineered T-cell toxicity profiling.
Toxicities and preclinical models
Cytokine release syndrome (CRS) and neurotoxicity
One of the most common and potentially fatal immune-related adverse events of CD19 CAR T-cell therapy is CRS2–8 (figure 1). According to the American Society for Transplantation and Cellular Therapy (ASTCT) consensus grading system, CRS is described as an immune effector cell-associated supraphysiological response following any immune therapy, resulting in activation of endogenous or infused T cells, as well as other immune cells, that must include fever at the onset and may additionally include hypotension, capillary leak, and organ dysfunction.9 Recent studies have highlighted the key role of myeloid and endothelial cell activation in the propagation and worsening of the syndrome and have identified gasdermin E-mediated target cell pyroptosis as a primary trigger for macrophage activation.3 10 11 CRS is also the most common adverse event observed in patients with multiple myeloma (MM) receiving B-cell maturation antigen (BCMA) CAR T cells.12 Patients receiving CAR T cells are closely monitored within the first 10 days after infusion for any sign of CRS (eg, fever >38°C). CRS management needs to follow a grading and risk-adapted approach. Low-grade CRS can be treated symptomatically (antipyretics and fluids), whereas patients developing CRS of grade 3 or 4 may be treated with vasopressors, tocilizumab (anti-interleukin (IL)-6 receptor antagonist), and/or low-dose, or if required, high-dose corticosteroids.13
Neurotoxicity, also known as immune effector cell-associated neurotoxicity syndrome (ICANS), has been reported in all CD19 CAR T clinical trials exhibiting a robust immune response,14 with more than 60% of patients experiencing toxic neurological effects (figure 1). While neurotoxicity has often been described as associated with CRS,15 each toxicity can occur independently,16–18 and the grading is now very well defined.9 ICANS is usually self-limiting but can necessitate admission to the intensive care unit and is rarely fatal.19 20 Clinical manifestations of neurotoxicity include confusion, language disturbance, fine motor skill deficits, encephalopathy, somnolence, dysphasia, aphasia, seizures, cerebral edema with coma, and death.16 18 21 Molecular mechanisms of ICANS include systemic inflammatory responses triggered by myeloid cells that activate endothelial cells and increase the permeability of the blood–brain barrier (BBB).16 Once the BBB becomes dysfunctional, the cerebrospinal fluid can be exposed to high concentrations of systemic cytokines and immune cells, which can result in brain vascular pericyte stress and secretion of endothelium-activating cytokines.22 Recently, CD19 CAR T cell-related ICANS has also been linked to the recognition of CD19+ brain mural cells.23 ICANS has also been observed in patients treated with BCMA CAR T cells, even though its incidence appears to be more heterogeneous among different clinical trials. As of now, most patients with MM experience mild and reversible ICANS, with no reported deaths due to this adverse event.12 The standard of care for neurotoxicity includes supportive care and corticosteroids to induce immunosuppression.16 Treatment of neurotoxicity may also include inhibition of IL-6 with or without corticosteroid administration,22 but this appears more effective for CRS.16 17 Additional treatment strategies for CRS and neurotoxicity include targeting granulocyte macrophage colony-stimulating factor (GM-CSF), IL-1, tumor necrosis factor alpha, JAK/STAT, ITK, T-cell activation switches, and endothelial cells.16
Notably, Tmunity Therapeutics has recently reported two deaths from neurotoxicity during a clinical trial testing prostate-specific membrane antigen (PSMA)-targeting CAR T cells armored with a dominant negative transforming growth factor beta (TGF-β) receptor in prostate cancer. These events were associated with a unique cytokine profile and massive macrophage activation which did not respond to tocilizumab.24 Similarly, in a clinical trial in patients with melanoma, the administration of tumor-infiltrating lymphocytes armored with an inducible IL-12 gene mediated significant antitumor responses but was accompanied by severe IL-12-related CRS-like toxicity that limited further development of the approach.25 On one hand, these clinical observations reveal the need for additional mechanistic studies to inform the rational design of therapeutic interventions for solid tumors and, on the other, highlight the complexity that armoring of CAR T cells can add to the toxicity assessment.
Models for CRS and neurotoxicity
Biomarkers
Biomarkers are biological characteristics that objectively measure and evaluate biological or pathogenic processes and/or indicators of pharmacological responses to a therapeutic intervention26 and are an essential component of preclinical safety assessment of CAR T cells (figure 2). In particular, the identification of predictive biomarkers may be crucial for the selection of patients at risk of developing severe toxicities who might benefit from early therapeutic intervention. The immunomonitoring of patients treated with CD19 CAR T cells includes serum biomarkers like MCP-1, SGP130, interferon gamma, IL-1, eotaxin, IL-13, IL-10, macrophage inflammatory protein-1 alpha,3 4 27 as well as IL-6, IL-15, and TGF-β28 29 as independent predictors in statistical models assessing risk of CRS and neurotoxicity, respectively.
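As a rough illustration of how such serum biomarkers can feed into a risk model, the sketch below fits a logistic regression relating baseline cytokine levels to a severe-CRS outcome. All values, the choice of predictors, and the log transform are hypothetical choices for illustration; the published models cited above were fit on real clinical cohorts.

```python
# Minimal sketch: logistic model relating baseline serum cytokine levels
# to severe CRS. All numbers are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: IL-6, IL-15, MCP-1 (pg/mL, hypothetical measurements).
X = np.array([
    [ 12.0,  4.1,  210.0],
    [ 95.0, 18.3,  890.0],
    [  8.5,  3.2,  150.0],
    [120.0, 22.0, 1020.0],
    [ 15.0,  5.0,  240.0],
    [ 88.0, 16.5,  760.0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = severe CRS (grade >= 3)

# Log-transform right-skewed cytokine values before fitting.
model = LogisticRegression().fit(np.log1p(X), y)

new_patient = np.log1p(np.array([[70.0, 14.0, 650.0]]))
print("Predicted probability of severe CRS:", model.predict_proba(new_patient)[0, 1])
```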
Animal models
Several animal models have been employed to predict CRS and ICANS (figure 2), starting with syngeneic mouse strains comprising intact immune cells and murine CAR T cells. These models have the advantage of recapitulating the complex crosstalk between CAR T cells and host immune cells.30 Allotransplantation studies of murine CAR T cells in mice with different degrees of immune deficiency were the first to suggest the requirement for a functional myeloid compartment to trigger CRS.30 CRS occurrence on infusion of human CAR T cells has not been observed in immunodeficient NSG (NOD-SCID gamma) mice but has been reported in SCID-beige mice, which feature a less compromised myeloid compartment. By using the SCID-beige model, it was possible to prove that this reaction is triggered by resident macrophages through both contact-dependent and cytokine-related mechanisms, such as nitric oxide together with IL-1 and IL-6 release.31 Reconstitution of NSG mice with human hematopoietic stem/precursor cells (HSPCs) offers an alternative approach where human CAR T cells can interact with human myeloid cells and cytokines. However, the proportion of myeloid cells differentiating from human HSPCs in NSG mice rarely exceeds 5%–10% of human white blood cells.32 Therefore, a triple transgenic NSG mouse strain (SGM3) has been recently proposed to better support the reconstitution of a human hematopoietic system, including the myeloid compartment, owing to the expression of human stem cell factor, GM-CSF, and IL-3.33 When HSPC-humanized SGM3 mice were employed, only monocyte–CAR T-cell interactions were found to recapitulate CRS, definitively confirming the primary role of myeloid-derived cells in releasing IL-1 and IL-6, both hallmark cytokines of CRS.10 In contrast to other models, humanized SGM3 mice were also able to recapitulate neurotoxic manifestations, which in this case cannot be ascribed to on-target, off-tumor reactions against mural cells but might rather be connected to CRS-related inflammatory reactions.10 With an immune system much more similar to that of humans, non-human primates are excellent large animal models for interrogating CAR T-cell toxicities, but they often require autologous CAR T cells and lack tumors. Nevertheless, given the physiological similarities to humans, these models closely recapitulate CRS and ICANS development.34–36 Finally, biological similarities between canine and human cancer offer the possibility to test engineered T-cell strategies in dogs with naturally occurring tumors. In this regard, the Comparative Oncology Program of the NCI has established a network of 24 veterinary academic partners known as the Comparative Oncology Trials Consortium, which will support the implementation of cell-based trials in dogs for decision-making prior to clinical testing in humans. In return, information from human clinical trials can guide the development of cell-based therapies in veterinary oncology, under the so-called One Health initiative.37
In vitro models
In vitro coculture models that consist of monolayers of cells expressing the target antigen have been traditionally employed to test the specificity and efficacy of CAR T cells38 (figure 2). However, these models were not considered appropriate to predict adverse effects. More recently, other cells such as macrophages have also been included in cocultures of target cells and CAR T cells.11 39 40 Such models have facilitated mechanistic insights of CRS. Importantly, the measurement of biomarkers contained within supernatants from these cocultures can also inform about potential adverse events triggered by CAR T cells in vivo. In fact, recent data show that high levels of catecholamines found in cultures of human CD19 CAR T cells admixed with malignant B cells and macrophages correlated well with CRS seen in mice after CAR T-cell infusion. Accordingly, in patients with diffuse large B-cell lymphoma treated with CD19 CAR T cells, an association was observed between high levels of norepinephrine and severe CRS.40 Moreover, the rapid and massive death of target cells by pyroptosis, which is specifically triggered by CAR T cells, was found to activate macrophages to produce CRS-related cytokines.11
On-target and off-target, off-tumor toxicity (including cytopenias, B-cell depletion, and immune reconstitution)
Ideally, CAR T cells should selectively target malignant cells. However, target antigens are often expressed on both tumor cells and healthy tissues, raising concerns regarding on-target, off-tumor toxicity.41 The severity of toxic manifestations depends on how accessible, widespread, and vital the target tissue is. Reported events range from manageable lineage depletion, such as B-cell aplasia for CD19 CAR T cells,42 secondary hypogammaglobulinemia for BCMA CAR T cells,12 liver toxicity for CAR T cells targeting carboxyanhydrase-IX,43 to severe and fatal pulmonary toxicity for HER2 CAR T cells, possibly associated with recognition of low levels of ERBB2 on lung epithelial cells.44 With CD19 CAR T cells broadly used both in clinical trials and in the commercial setting for ALL and NHL, long-term B-cell depletion is the most commonly described on-target, off-tumor toxicity (figure 1). During normal B-cell development, CD19 is present from the pre-B-cell stage until the plasma cell stage. Long-term B-cell aplasia has been described in all the pivotal phase II CD19 CAR T-cell trials45–48 and contributes to hypogammaglobulinemia, increases the risk of infection, and may have consequences for the response to vaccinations.49 It is generally managed with intravenous immunoglobulin supplementation in pediatric patients; in adult patients, this is common practice only in patients with recurrent bacterial infections. Lymphopenia, in particular CD4+ T-cell lymphopenia, can persist for >1 year.50
Cytopenias (especially neutropenia) persisting >30 days postinfusion are common off-target side effects (30%–40%), the pathogenesis of which is currently unclear. Factors contributing to prolonged cytopenia (>90 days, occurring in 10%–20% of patients) include low baseline cell counts, prior therapies including prior SCT, impaired hematopoietic reserve, bone marrow infiltration and chronic inflammation reflected by higher baseline ferritin and C reactive protein levels,51 and alterations in levels of the chemokine CXCL12 in the marrow microenvironment correlating with events of late neutropenia, likely associated with B-cell recovery.52 The bone marrow is usually hypocellular.
Alternatively, off-target off-tumor toxicity can occur due to cross-reactive binding to a mimotope, which is a similar but distinct epitope expressed on normal tissues. This cross-reactivity or ‘off-target’ binding to cell surface proteins is difficult to predict in preclinical animal studies and can lead to serious adverse effects in patients. Even though CAR T-cell therapies have yet to demonstrate off-target effects mediated by inappropriate scFv recognition of a non-target antigen, TCR-engineered T-cell therapies have revealed the possibility of TCR promiscuity resulting in the death of a patient.53
Models for on-target and off-target, off-tumor toxicity
Animal models for on-target, off-tumor toxicity
When the expression of the target antigen is similar between human and mouse, but the antihuman antibody does not recognize the murine orthologue, on-target, off-tumor toxicity against healthy tissues expressing the molecule of interest can only be addressed in syngeneic models (figure 2). For example, strategies to overcome B-cell aplasia induced by CD19 CAR T cells have been successfully investigated in syngeneic models.54 Similarly, it has been recently shown that the administration of murine CD19 CAR T cells to mice of different strains, including NSG, can cause BBB leakiness and pericyte depletion, supporting the hypothesis that ICANS development in patients could also be the result of on-target, off-tumor recognition of CD19 on brain mural cells.23 However, a significantly lower degree of CD19 expression was observed in mice compared with humans, highlighting that species-specific differences may limit neurotoxicity evaluation in mouse models. In another scenario, when the target antigen has a similar expression profile in humans and rodents and the antibody recognizes both human and murine orthologues, it is possible to profile on-target, off-tumor toxicity using human CAR T cells in immunodeficient mice. For example, high-affinity human GD2 CAR T cells induced fatal encephalitis in NOD-SCID-Il2rg−/− (NSG) mice, possibly due to low GD2 expression on the cerebellum and basal regions of the brain.55 Similarly, in recent studies, the authors took advantage of the cross-reactivity of B7-H3 monoclonal antibodies with murine B7-H3 to investigate the safety of B7-H3 CAR T cells or antibody–drug conjugates both in immunodeficient and immunocompetent tumor-bearing mice.56 57 Alternatively, on-target, off-tumor reactions can be studied in immunocompetent transgenic mice that possess an intact immune system and stably express a transgene encoding for a human tumor-associated antigen (TAA). Transgenic mice are generated by knocking out a murine TAA and knocking in the desired human one alongside its regulatory elements, mimicking the spatiotemporal expression patterns as seen in patients. These mice can further be bred with tumor-prone mice or directly grafted with TAA+ tumors to test new CAR T-cell therapies prior to their clinical application, as in the case of carcinoembryonic antigen (CEA) transgenic mice treated with CEA CAR T cells.58 59 In addition, HSPC-humanized mouse models, as described previously, are extremely useful for studying on-target, off-tumor reactions against the hematopoietic compartment, as in the case of CD123 and CD44v6 target antigens.60 61 Finally, primate or canine models are also suitable for studying on-target, off-tumor events, as species-specific differences in antigen expression between non-human primates and humans are limited.36 37
Assessment of target antigen expression
For T cells that have been engineered to specifically bind a target antigen, a detailed and careful assessment of the expression pattern of that antigen in normal cells and tissues has to be completed. Antibody-based immunohistochemistry and transcriptomic analysis have mostly been used for exploring target antigen expression in normal tissues (eg, see Lichtman et al62) (figure 2). Published repositories of mRNA and protein expression (eg, the Human Protein Atlas) are also commonly employed to evaluate candidate targets in normal and tumor cells. Recently, proteomic and genomic datasets have been generated and integrated with bioinformatics tools to search for optimal CAR targets expressed in acute myeloid leukemia but not in normal tissues.63
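As a rough illustration of the kind of expression-based filtering such integrated analyses perform, the sketch below keeps only antigens with high tumor expression and a wide margin over the highest level seen in normal tissue. The table and thresholds are invented for illustration and are not drawn from the cited datasets.

```python
# Minimal sketch: screening candidate CAR target antigens by comparing
# expression in tumor versus normal tissue. Toy data; real analyses draw
# on resources such as the Human Protein Atlas and proteomic datasets.
import pandas as pd

expression = pd.DataFrame({
    "antigen":        ["CD19", "CD123", "B7-H3", "GENE_X"],  # GENE_X is a placeholder
    "tumor_tpm":      [850.0, 620.0, 410.0, 35.0],
    "max_normal_tpm": [780.0, 95.0, 60.0, 2.0],  # highest level in any vital normal tissue
})

# Require high tumor expression and a wide window over normal tissue.
# Note that CD19 fails the ratio filter here, mirroring its known
# on-target, off-tumor liability on normal B cells.
candidates = expression[
    (expression["tumor_tpm"] > 100)
    & (expression["tumor_tpm"] / expression["max_normal_tpm"] > 5)
]
print(candidates)
```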
Models for off-target toxicity
To ensure patient safety, it is imperative to minimize the risk of initiating an inappropriate immune response due to unanticipated off-target binding of CAR T cells to cell surface proteins expressed on normal tissues. Recently developed cell microarray technologies provide an understanding of the off-target profile of CAR T cells and demonstrate on-target specificity.64 The Retrogenix Cell Microarray Technology identifies interactions with both cell surface receptors and secreted proteins by screening scFvs or whole CAR T cells for binding against >4000 full-length proteins that are individually overexpressed in their native context in human cells. This platform, established in human cells, coupled with broad protein coverage, allows even low-affinity interactions to be detected with a high degree of sensitivity and specificity and can provide insights into potential off-target toxicities or soluble sinks for the therapeutic. Furthermore, if off-targets are identified, cell lines or primary cell types endogenously expressing the off-target protein can be used as target cells to assess potential off-target cytotoxicity and CAR T-cell activation. This platform is increasingly being used for CAR T-cell development,65 and data from these studies have been included in regulatory submissions, including the biologics license application for Novartis’s Kymriah.66
Graft-versus-host disease (GVHD) and rejection associated with allogeneic engineered T cells
The use of allogeneic CAR T-cell products generated using cells from healthy donors has the potential to overcome many limitations associated with autologous products but comes with its own challenges, including the potential to induce GVHD (figure 1), as well as the risk of immune-mediated rejection by the host. The risk of GVHD, which correlates with increasing donor–recipient human leukocyte antigen (HLA) disparity, could be mitigated through several approaches, including donor selection, cell-type selection, T-cell depletion/selection and/or use of gene editing. Indeed, gene editing of the endogenous αβ TCR typically includes the disruption of the T-cell receptor alpha constant (TRAC) or T-cell receptor beta constant (TRBC) locus and reduces the risk of GVHD linked to TCR recognition of allogeneic host tissue.67 Moreover, the use of gene-edited T cells deficient in expression of CD52 has also been explored in combination with alemtuzumab to maintain a prolonged conditioning regimen without affecting CAR T product persistence.68 69 Although this approach could theoretically control GVHD, some concerns have been raised about prolonged lymphodepletion regimens, during which viral reactivations can be problematic.70 Other concerns regard the fate of a T cell in the absence of its own TCR.71 This is evident in the UCART19 approach, where preliminary clinical data72 show short UCART19 persistence. Notably, although TCR disruption has been developed, it should be underlined that clinical experience to date with allogeneic CAR T cells, whether virus-specific or from allogeneic transplant donors, has shown antitumor effects with minimal GVHD risk.73 74
Models for GVHD and rejection
MHC-disparate allogeneic mouse models can be employed to study GVHD (figure 2). For example, these models demonstrated that cumulative signaling through the exogenous CAR and the endogenous alloreactive TCR results in a reduced risk of developing GVHD due to loss of function and possible deletion of transferred T cells.75 Similarly, xenoreactions in immunodeficient mice can be exploited as surrogate markers for the potential of human engineered T cells to cause GVHD in patients.76–78 In these models, GVHD scores have been defined and applied based on multiple parameters, such as progressive weight loss, excessive T-cell expansion, ruffled fur, hunchback, and T-cell infiltration of GVHD target organs. Importantly, standard xenograft models cannot be employed to evaluate the rejection potential, which instead can be assessed in allogeneic mouse models. More sophisticated alternatives can be found in the HSPC-humanized SGM3 models, described earlier, where the GVHD potential can be measured as reactivity against human hematopoietic cells developed from allogeneic CD34+ donors. In these models, rejection potential can be evaluated by assessing the time required to develop endogenous human T cells that should be able to mediate rejection of allogeneic engineered T-cell products.10 Alternatively, allogeneic immune responses can be studied in vitro with mixed lymphocyte reactions by coculturing allogeneic CAR T cells with peripheral blood mononuclear cells (PBMCs) from different donors. Furthermore, alloreactivity can also be assessed in vivo in immunodeficient NSG mice by coinfusion of allogeneic CAR T cells with PBMCs from HLA-disparate donors, followed by evaluating engraftment of CAR T cells.79 80
Cross-reactivity (TCR-T cells)
TCR cross-reactivity is a major safety risk of TCR gene therapy (figure 1). TCRs recognize peptides presented by HLA class I and class II molecules. TCR binding involves interactions between the complementarity-determining region (CDR) loops of the TCR and amino acid residues of HLA molecules and the HLA-presented peptides.81 The binding of TCRs can therefore be broken down into a peptide-independent HLA binding component and a peptide-specific component.81 Both components are required to achieve the appropriate binding affinity required for T-cell activation. Thus, cross-reactivity can be caused by two distinct mechanisms: (1) TCRs may cross-react with one of the numerous peptides that are presented by the HLA allele that is used by the TCR to recognize its cognate ligand, and (2) cross-reactivity with distinct HLA alleles presenting a library of peptides to which the TCR is not tolerant may occur. Therapeutic TCRs will only be tolerant to the HLA alleles that were present in the individual from whom the TCR was isolated but not to other HLA alleles that are present in patients treated with TCR gene therapy. In the clinic, rare, although in some cases severe, toxicities have been reported, mainly with artificially enhanced TCRs to date. In particular, TCRs can be modified by different methods, including affinity maturation of their CDRs in order to increase their affinity for the target antigen. Although effective, this approach overcomes the negative selection exerted by the thymus to delete autoreactive T cells and may increase cross-reactivity and recognition of non-target-specific peptides. Patients treated with an affinity-enhanced TCR specific for a MAGE-A3 epitope presented by HLA-A*01 suffered fatal off-target off-tumor cardiac toxicity due to cross-reactivity to titin,53 82 an event completely unpredicted by preclinical studies at the time. Although all TCRs have the potential for cross-reactivity, this risk is significantly heightened in the context of affinity-matured TCRs and has not been observed with T cells engineered to express other therapeutic TCRs within the normal affinity range.
Models for cross reactivity
The two types of cross-reactivity as described previously need to be assessed using different strategies. HLA alleles that were not present in the TCR donor can stimulate strong T-cell responses. In fact, allogeneic HLA molecules are among the most immunogenic antigens, with 1%–10% of ‘naïve’ T cells responding to peptide-presenting allogeneic HLA molecules. It is therefore critical to exclude alloreactivity of therapeutic TCRs. This can be achieved by extensive in vitro screening with panels of cell lines expressing diverse HLA alleles, and testing of TCR-engineered T lymphocytes against cells from the patient before treatment (figure 2). The potential for cross-reactivity of alternative peptides presented by the HLA allele presenting the ‘cognate’ peptide requires careful analysis. In order to assess this risk, it is useful to define the fine specificity profile of the therapeutic TCR by changing individual residues in the cognate peptide and to measure which changes result in loss of T-cell activation. This analysis reveals peptide residues that are essential for TCR binding and T-cell activation and also residues that can be substituted without loss of TCR binding. The essential residues form a peptide motif that can be used to screen human exome databases and to identify human proteins that contain this motif. The corresponding peptides can be synthesized and tested for recognition by the therapeutic TCR. Peptide titration experiments are important to assess whether TCR recognition occurs only at unphysiological high peptide concentrations or also at more physiological low concentrations. Stimulation of TCR-engineered T cells with cells endogenously expressing the corresponding protein is essential to determine whether ‘natural’ antigen processing and presentation produces the cross-reactive peptide and leads to T-cell activation.
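To illustrate the motif-screening step described above, here is a minimal sketch that scans a toy set of protein sequences for a recognition motif in which essential residues are fixed letters and substitutable positions are wildcards. The motif and sequences are hypothetical; real screens run the motif against full human exome or proteome databases.

```python
# Minimal sketch of screening protein sequences for a TCR recognition motif.
# Essential residues are fixed letters; '.' marks positions the TCR tolerates.
# Motif and sequences are hypothetical placeholders.
import re

motif = re.compile("L.FG.V")  # eg, essential residues L, F, G, V in a 6-mer core

proteome = {
    "PROTEIN_A": "MKEESLLFGAVDEI",
    "PROTEIN_B": "MAAVLQFGQVKKLS",
    "PROTEIN_C": "MSTNPKPQRKTKRN",
}

for name, seq in proteome.items():
    for match in motif.finditer(seq):
        # Each hit is a candidate cross-reactive peptide to test experimentally.
        print(f"{name}: candidate peptide {match.group()} at position {match.start() + 1}")
```

Hits from such a scan would then be synthesized and tested for recognition by the therapeutic TCR, including against cells endogenously expressing the corresponding protein, as described above.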
Conditioning
Administration of lymphodepleting regimens, commonly comprising cyclophosphamide and fludarabine, prior to adoptive T-cell transfer is a key step for the clinical success of engineered T-cell therapies.5 72 83–85 To achieve better T-cell engraftment, expansion and persistence, lymphodepletion likely works through multiple mechanisms: (1) decreased immunosuppressive environment,86–88 (2) increased availability of homeostatic cytokines,89 90 and (3) reduced antitransgene immune-mediated rejection.91 However, lymphodepletion also results in several hematological toxicities (neutropenia, anemia, and thrombocytopenia),92 infectious complications,72 93 94 increased incidence and severity of CRS,3 and, in some cases, tumor lysis syndrome95 (figure 1). Other rare adverse events, including hepatotoxicity96 and leukoencephalopathy,97 have also been reported as associated with lymphodepletion in two clinical trials.
Fludarabine- and cyclophosphamide-induced lymphodepletion is used in non-human primates98 and immunocompetent mouse models99 100 to study the antitumor activity of CAR T cells, but to date, animal models predicting the toxicity of lymphodepleting regimens are still missing.
Insertional mutagenesis and clonal dominance
Retroviral or lentiviral vectors, which are used to engineer T cells, have been linked to rare cases of insertional mutagenesis in humans, as reviewed recently101 (figure 1). The semirandom integration of the vectors in the genome of host cells can lead to the insertion of enhancer or promoter sequences, or to the disruption of genes involved in cellular proliferation or cancer. Differences in integration site selection have been linked to the differential risk of genotoxicity of vector systems.102 Severe adverse events caused by insertional mutagenesis have occurred so far only with the use of gene-modified hematopoietic stem cells (HSCs), whereas no such evidence was found in the long-term follow-up of patients with retroviral gene-modified T cells.103 However, several instances of vector-induced clonal dominance have been recently reported in CAR T-cell protocols. Insertion of a lentiviral vector into the TET2 gene was observed in a T-cell clone whose expansion was associated with chronic lymphocytic leukemia (CLL) tumor eradication. The integration of the vector in the TET2 gene spliced out its catalytic portion, promoting CAR T-cell proliferation and effective therapy.104 105 Another case of clonal expansion associated with insertion into the E3 ubiquitin-protein ligase CBL (CBL) gene has been reported in CD22 CAR T cells, although it is not clear whether CBL gene disruption was directly causing T-cell proliferation.106 Genetic engineering of T cells with the clustered regularly interspaced short palindromic repeats (CRISPR)/Cas system, recently initiated in clinical studies mainly ongoing in China, has provided limited information on safety in humans.107 Notably, a recent study from a phase I clinical trial reports that two patients treated with CD19 CAR T cells engineered by the piggyBac transposon system as a gene transfer tool developed T-cell lymphomas.108 Allogene Therapeutics, a biotechnology company pioneering the development of allogeneic CAR T therapies for cancer (AlloCAR T), has recently reported that, prompted by a chromosomal abnormality found in a bone marrow biopsy taken from a patient following the development of progressive pancytopenia, the Food and Drug Administration (FDA) has placed a hold on the company’s AlloCAR T clinical trials.109 An investigation is currently under way to further characterize the observed abnormality, including any clinical relevance, evidence of clonal expansion, or a potential causative relationship of gene editing using transcription activator-like effector nuclease (TALEN) technology.
Models for genotoxicity including CRISPR and TALEN
Models for assessing genomic integration of a given vector mainly include cell-based assays that were largely derived to model the leukemias observed in HSC-based gene therapy (figure 2). T-cell culture assays were designed to model the recurrent activating vector insertions in the Lmo2 locus noted in early human SCID gene therapy trials. For instance, a murine thymocyte culture assay can reproduce the insertions in Lmo2 as well as in other T-cell proto-oncogenes (including Mef2c) that functionally associate with developmental arrest and possible transformation.110 A second assay, termed the in vitro immortalization assay, reports the induction of replating activity in primary murine hematopoietic cells as a result of insertional activation of proto-oncogenes such as Evi1.111 A third assay uses an immortalized murine cell line (BAF3) to measure the frequency of IL-3-independent mutants that arise as a result of vector-induced insertional mutagenesis.112 Probably the most common assay to assess transformation of engineered T cells tests for antigen-independent and IL-2-independent proliferation.
Genetic engineering of therapeutic cell types, including CAR T cells, has entered a new era with the advent of site-specific genomic modifications with the CRISPR/Cas system or with transposases. Although elegant with respect to targeting modifications in a highly precise manner, CRISPR/Cas gene editing is associated with potential side effects stemming from off-target cleavage of the genome, which can be hard to predict. Off-target effects include insertions/deletions (indels) of genomic information at unwanted sites and can lead to chromosomal translocations between on-target and off-target sites or between different off-target sites. There is a battery of in silico, in vitro, and ex vivo methods available for genome-wide assessment of off-target cleavage sites, including the bioinformatics tools MIT CRISPR Design Tool113 and E-CRISP,114 as well as GUIDE-seq,115 Digenome-seq,116 CIRCLE-seq117 and SITE-seq,118 and CAST-seq, which has recently been established to assess chromosomal translocations in edited cells.119 Cumulatively, these methods indicate that (1) the number of off-target sites and the efficiency at which they are modified strongly depend on the individual gRNAs; (2) in silico methods can fail to predict experimentally determined off-target sites (false negatives); and (3) in vitro methods may report false positives and hence lack sufficient specificity.120
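To illustrate the basic logic that in silico off-target prediction builds on, the sketch below performs a naive forward-strand scan for near-matches to a guide spacer followed by an NGG PAM. The sequences and mismatch budget are invented, and real tools additionally handle the reverse strand, bulges, position-dependent mismatch weighting, and genome-scale indexing.

```python
# Minimal sketch of a naive in silico off-target search: slide the 20-nt
# spacer along the forward strand, require an NGG PAM, and report sites
# within a mismatch budget. All sequences are hypothetical.

def mismatches(a: str, b: str) -> int:
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def find_off_targets(genome: str, spacer: str, max_mm: int = 3):
    k = len(spacer)
    hits = []
    for i in range(len(genome) - k - 2):
        protospacer = genome[i:i + k]
        pam = genome[i + k:i + k + 3]
        mm = mismatches(protospacer, spacer)
        if pam[1:] == "GG" and mm <= max_mm:  # NGG PAM check
            hits.append((i, protospacer, pam, mm))
    return hits

genome = "TTACGTGACCTGAAGTCAGTTAAGGCCACGTGACCTGAAGACAGTTCTGGTT"
spacer = "ACGTGACCTGAAGTCAGTTA"  # 20-nt guide spacer (hypothetical)

# Reports the perfect-match on-target site plus a 2-mismatch off-target site.
for pos, proto, pam, mm in find_off_targets(genome, spacer):
    print(f"pos {pos}: {proto} PAM={pam} mismatches={mm}")
```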
We have highlighted in this chapter the different toxicities encountered in patients infused with engineered T lymphocytes. They range from moderate and manageable to very severe and life-threatening. Although some progress has been made in our understanding of these adverse events, there is still a lot to learn about their underlying mechanisms. Current preclinical models, which mainly consist of in vitro coculture systems and standard xenograft models in immunocompromised mice, have mostly failed to predict these toxicities to date. However, we have also presented new tools and models that can better recapitulate toxicities observed in patients.
Identification of gaps in current preclinical models
Gaps in models for CRS and ICANS
Biomarkers
Traditional preclinical modeling of toxicities usually assesses predefined subsets of biomarkers, such as proinflammatory cytokines, for predicting CRS and ICANS.121 However, recent ex vivo analyses have revealed a much more complex picture. For example, single-cell RNA sequencing of axicabtagene ciloleucel CD19 CAR T-cell samples identified a rare monocyte-like cell population that was significantly over-represented in the infusion products of patients who developed high-grade ICANS.122 Thus, new generations of preclinical safety models must combine unbiased ex vivo assessment with multiparametric measurements (longitudinal, single-cell, or spatial resolution, as needed) in order to uncover detailed mechanistic insights into how engineered T cells eventually lead to severe toxicities (table 1). Following the conceptual framework of the imSAVAR consortium, a public–private partnership (https://imsavar.eu/), immune-related adverse outcome pathways help scientists of diverse expertise describe the current knowledge of the complex processes leading to toxicities at the molecular, cellular, and organ/organism levels, thus facilitating systematic biomarker development and guiding the development of preclinical safety models.
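As a minimal illustration of the kind of question raised by the axicabtagene ciloleucel finding, the sketch below tests whether a cell population is over-represented in the infusion products of patients who developed high-grade ICANS, using a Fisher's exact test. All counts are invented for illustration; a real analysis would start from single-cell cluster assignments per infusion product, with appropriate per-patient statistics, rather than pooled counts.

```python
# Sketch: testing over-representation of a monocyte-like cell population in
# infusion products of patients with high-grade ICANS. Counts are
# hypothetical; real data would come from single-cell RNA-seq clustering.
from scipy.stats import fisher_exact

# Hypothetical pooled counts: [cells in monocyte-like cluster, other cells]
icans_high = [480, 19_520]   # products from patients with high-grade ICANS
icans_low = [120, 39_880]    # products from patients without

odds_ratio, p_value = fisher_exact([icans_high, icans_low],
                                   alternative="greater")
print(f"odds ratio = {odds_ratio:.1f}, one-sided p = {p_value:.2e}")
```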
Animal models
To date, different animal models are required to predict CRS and ICANS because of the need to mirror complex immune interactions at both the cellular and molecular levels. Indeed, the human immune system and tumor microenvironment (TME) are not completely recapitulated in any single animal model, so models must be selected carefully according to the primary objectives of the study. Syngeneic immunocompetent mouse models are biased by the murine nature of the engineered T cells and CAR constructs, which prevents firm conclusions from being drawn about the corresponding human CAR T-cell products. For example, the intravenous injection of a humanized superagonistic CD28-specific antibody in humans induced a strong CRS that was not predicted in syngeneic mouse models.123 Subsequently, the different CD28 signaling properties of humans and mice were associated with a single amino acid variant in the C-terminal proline-rich motif that regulates nuclear factor kappa B (NF-κB) activation and proinflammatory cytokine gene expression.124 Possibly related to this, syngeneic models have been largely unable to mimic CRS induced by CAR T-cell products. Furthermore, adverse events may vary between syngeneic mouse models, as seen when targeting CD19 and NKG2D, so extreme caution is required if considering only a single mouse strain before moving into the clinic.23 125 126 On the other hand, while standard xenograft models in highly immunodeficient mice (eg, NSG) suffer from the lack and/or dysfunction of crucial hematopoietic components such as myeloid cells and cytokines, the use of less immunocompromised mice (SCID-beige) still faces species-specific barriers that prevent the physiological development of CAR T cell-related toxicities unless very high CAR T-cell doses and tumor burdens are used.31 While HSPC-humanized SGM3 mice might overcome these limitations, they still suffer from high complexity, heterogeneity, prohibitive costs, and the long time frame required to achieve human reconstitution. In these mice, the use of xenotolerant T cells generated in a first round of humanization can abate the occurrence of GVHD, allowing long-term monitoring and eliminating confounders in the interpretation of toxicity.10 However, this approach has limited translational potential because the final patient-derived engineered T-cell products cannot be tested.10 Moreover, limited data are available for fully humanized mouse models, with variation in the results probably ascribable to the different sources of CD34+ cells. While several models have proven useful for studying CRS, the advances in animal modeling required to fully recapitulate ICANS development appear more difficult, especially in light of recent reports showing that this reaction may have a complex origin comprising both an on-target, off-tumor and a genuine inflammatory component.23 Lastly, studies in primates are highly demanding in terms of cost and time and are further biased by the absence of tumor cells,34 36 limiting proper evaluation of CAR T-cell antitumor potential and related side effects. Moreover, while autologous primate immune cells can be transduced, their characteristics and performance may differ from those of human T cells. Finally, small group sizes are often required when dealing with large animal models, for both ethical and economic reasons.
In vitro models
Two-dimensional (2D) coculture models present several limitations, as they do not model well the complexity of the TME. However, additional layers of complexity can be introduced into these models to better mimic the conditions that CAR T cells encounter in the TME and that lead to CRS and ICANS (table 1). For instance, including endothelial cells alongside tumor cells in CAR T-cell cocultures would recreate the cellular interactions that produce the inflammatory cytokines measured in the serum of CAR T cell-treated patients. Three-dimensional (3D) models such as organoids are now widely used to culture cancer biopsies. They possess a number of advantages, as they closely resemble many aspects of the patient’s tumor.127 However, organoids usually contain only tumor cells and no immune components. Over the past years, efforts have been made to include immune cells, offering the possibility of recreating a favorable environment to test engineered T-cell efficacy,128 and hopefully also to predict CRS and ICANS.
Gaps in models for on-target and off-target, off-tumor toxicities
Animal models
The use of syngeneic mouse models can be extremely useful when the expression profile of the target antigen is similar between humans and mice. However, human biology is not always recapitulated by mouse biology, limiting the insights that can be gained into human CAR T-cell behavior (see also the animal models for CRS and ICANS).124 Moreover, off-target toxicity is difficult to assess in mouse xenograft models due to differences in off-target expression profiles and biology, and the low protein homology between mice and humans; in general, these models do not allow human CAR T-cell products to interact with endogenous mouse tissues in a way that reveals toxicity. Despite their unique utility for studying on-target, off-tumor toxicities, the generation of transgenic immunocompetent mice for multiple human TAAs is a laborious task (table 1). These mice have the great advantage of possessing a functional immune system and TME. However, apart from the antihuman single-chain fragment variable (scFv), the rest of the CAR construct and the T cells must be of murine origin, making it impossible to test the therapeutic potential of clinically relevant, fully human T-cell products.
Assessment of target antigen expression
Studies on TCR and CAR targets rarely take into account their body-wide expression, thus underestimating the toxicity risks across normal organs. Efforts like the Human Cell Atlas, whose goal is to generate a large-scale gene expression database across all human cell types, will enable the systematic identification of suitable target antigens.129 Traditional immunostaining techniques lack sensitivity and are limited to a few markers per tissue section. Novel multiplex imaging approaches (GeoMx, MACSIMA, and MultiOMICs) offer the possibility of precisely locating target expression in a large panel of normal tissues (table 1). However, detection of the target antigen in normal tissues is not sufficient to prove that CAR or TCR recognition will have deleterious effects. Monitoring engineered T-cell responsiveness against normal human cells (2D, 3D, organoid, and organotypical models) represents a more predictive approach. In that regard, the recent advent of ex vivo human models derived from primary tissue explants and biopsies from healthy donors incubated with engineered T cells130 has great potential for testing on-target, off-tumor toxicity. This idea has been used to assess the toxicity of CAR-NK92 cells against EGFRvIII and FRIZZLED using antigen-positive colon cancer organoids and normal colon organoids.131 The EGFRvIII CAR showed selective cytotoxicity against colon cancer organoids, whereas the FZD CAR engaged target organoids regardless of their origin, highlighting potential on-target, off-tumor toxicity concerns.131
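A first-pass, in silico version of this screening step can be sketched as a simple filter over a normal-tissue expression matrix, flagging candidate antigens expressed above a threshold in any normal tissue. The table, values, and TPM cut-off below are illustrative assumptions; a real screen would draw on resources such as the Human Cell Atlas and would still require the functional follow-up described above.

```python
# Sketch: flagging on-target, off-tumor risk from a normal-tissue expression
# matrix. All values and the threshold are hypothetical.
import pandas as pd

# Rows = normal tissues, columns = candidate target antigens (values in TPM).
expr = pd.DataFrame(
    {"CD19": [0.1, 0.2, 85.0],
     "EGFRvIII": [0.0, 0.0, 0.0],
     "FZD": [12.0, 30.0, 5.0]},
    index=["heart", "colon", "B cells"],
)

THRESHOLD_TPM = 5.0  # hypothetical cut-off for meaningful normal expression

for antigen in expr.columns:
    risky = expr.index[expr[antigen] > THRESHOLD_TPM].tolist()
    if risky:
        print(f"{antigen}: expressed in normal tissues {risky} -> follow up")
    else:
        print(f"{antigen}: no normal expression above threshold")
```

In this toy table the FZD column reproduces the pattern reported for the FZD CAR above: expression in normal colon flags exactly the on-target, off-tumor concern that organoid testing then confirmed.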
Gaps in models for GVHD and rejection
Currently, several limitations affect the use of existing preclinical models to predict GVHD and rejection of infused T cells. While syngeneic and allogeneic mouse models do not allow human T-cell products to be tested, in standard NSG xenograft models GVHD manifestations are unlikely to mirror those observed in treated patients, which arise from more complex cellular and molecular interactions (table 1). Indeed, many factors may limit the comparability of xenogeneic GVHD data to clinical outcomes, including the lack of control groups treated with GVHD prevention, the use of irradiation as the only source of conditioning, and the homogeneous microbiome of mice housed under pathogen-free conditions.132 Additionally, the mechanism of xenogeneic GVHD does not completely recapitulate the underlying pathogenesis of human GVHD. For example, it is donor APCs rather than host APCs that activate human T cells in the xenogeneic GVHD model, whereas host APCs play a significant role in human GVHD.133 On the other hand, only limited data are currently available from more sophisticated animal models, such as HSPC-humanized SGM3 mice, owing to their typically time-consuming and cost-prohibitive features. Here, variability in the level of human reconstitution, together with the extent of human cell development, represents the major barrier to deciphering reactions against, or caused by, engineered T cells. In addition, the complexity of these settings is also dictated by the specific time frame of the experiments, as particular cellular interactions are only feasible within specific time intervals. There is therefore complexity both in setting the experimental conditions and in interpreting the results. Further investigation is still required to increase the predictive value of these models.
Gaps in models for TCR T-cell toxicities
There are no robust in vivo models to assess the toxicity of TCRs driven by alloreactivity or peptide-specific cross-reactivity. Immunodeficient xenogeneic murine models are often used to obtain evidence of efficacy of TCR-engineered human T cells, but toxicity assessments are limited by the lack of HLA molecules in the host and by the poor persistence of human T cells in mice. Although the use of HLA transgenic mice can overcome some of these limitations, differences in the HLA-presented peptides in transgenic mice and humans remain a substantial limitation in assessing TCR toxicity. Hence, the in vitro platforms described earlier provide the most valuable TCR safety data prior to the progression to clinical trials.
Gaps in models for genotoxicity including CRISPR and TALEN
Both the FDA and the European Medicines Agency (EMA) recommend that the genotoxic impact of genetic engineering be evaluated in gene therapy products before a clinical application can be approved. These evaluations include investigation of abnormal cell behavior following gene modification; identification and characterization of both intended and unintended genomic alterations; and assays of toxicity mechanisms. In the case of genetically modified T cells, none of the functional assays described earlier provides clinically relevant safety measures. Cell-based models are limited by species (generally applicable only to murine cells), cell type (applicable mostly to HSPCs), the genes whose deregulation is predominantly scored (Evi1 or Lmo2), and the need for an easily scorable phenotype (growth factor-independent culture conditions). Animal disease models are by definition heterologous and more labor-intensive, and mainly suffer from being dependent on cancer-predisposing genotypes; they thus may not reflect the tumorigenic potential of genetic engineering in primary human cells. Currently, high-resolution vector insertion site studies are the most direct way to assess eventual clonal dominance, but appropriate cell-based tools for the comprehensive, functional assessment of the phenotypic consequences of genetic engineering in therapeutically relevant human T-cell types are lacking.
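To illustrate what an insertion-site readout of clonal dominance might look like, the sketch below summarizes integration-site read counts with a Shannon diversity index and a top-clone fraction, the kind of summary statistics such studies report. The counts are invented; real values come from integration-site sequencing of the engineered product.

```python
# Sketch: summarizing clonal structure from vector insertion-site read
# counts. Counts are hypothetical, one entry per unique insertion site.
import math

clone_reads = [5000, 300, 250, 200, 150, 100]  # a dominant clone plus minor ones

total = sum(clone_reads)
freqs = [c / total for c in clone_reads]

shannon = -sum(f * math.log(f) for f in freqs)  # diversity index
top_clone_fraction = max(freqs)                 # crude dominance measure

print(f"Shannon diversity: {shannon:.2f}")
print(f"Top clone fraction: {top_clone_fraction:.1%}")
# A low diversity index with a high top-clone fraction would prompt
# follow-up, as in the TET2 and CBL cases described earlier.
```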
To conclude this chapter, the models currently used to predict adverse events associated with engineered T cells have the limits we have discussed. One may simply state that in vitro coculture models are human but not systemic, whereas animal models are systemic but not human. We have highlighted the need to develop alternative approaches (eg, 3D ex vivo human models) to more effectively predict adverse events associated with engineered T cells and to understand the pathophysiological processes underlying these toxicities.
Regulatory view
For granting clinical trial approval or subsequently obtaining marketing approval, the non-clinical safety testing of engineered T cells requires a tailored toxicity program that considers both the complex nature of these medicinal products and the limitations of currently available animal models. Some of the toxicities described previously, including CRS and neurotoxicity, are commonly observed in patients treated with engineered T cells. Moreover, the severity of these toxicities depends largely on patient-specific factors such as tumor burden, and is thus difficult to mimic in animal models. Consequently, omitting investigation of these potential toxicities from non-clinical studies is widely accepted by regulators. Instead, appropriate risk mitigation strategies, including close monitoring of treated patients, are mandated. Clinical trials with innovative designs should consider phase I studies characterized by a de-escalating/escalating approach,134 split doses,135 and tumor burden-adjusted doses. Other toxicities, mainly those related to the antigen specificity of engineered T cells (eg, on-target, off-tumor toxicity, cross-reactivity, alloreactivity, and potential mispairing of TCR T cells), need to be addressed in non-clinical studies. Pivotal safety studies are expected to be performed in compliance with good laboratory practice, unless the complexity of the model used precludes this.136 For T cells that have been engineered to bind specifically to a target antigen, a detailed analysis of the expression pattern of the target antigen in human cells, tissues, and organs has to be completed using gene expression databases and in vitro analyses. The recently updated EMA guideline on genetically modified cells provides valuable information and dedicates a specific chapter to the non-clinical development of genetically modified immune cells.137 Of note, both the FDA and the EMA support the 3R principles to reduce, refine, and replace animal use in preclinical development and testing.138 This encouragement is aligned with ethical and animal welfare considerations that demand that animal use during preclinical testing be limited and, where possible, avoided. Regulatory acceptance of 3R methods is based on, among other considerations, the availability of defined test methodology, including standard protocols with clearly defined and scientifically sound endpoints, as well as the reliability and robustness of the tests.
Conclusion and future perspectives
Toxicity is a crucial aspect of drug testing and development. Cell therapies relying on engineered T cells are no exception, and very severe adverse events have been observed after infusion in patients. Moreover, in contrast to classic drugs, engineered T cells expand and persist long term in the patient’s body, engaging other immune cells and thereby presenting unique toxicity profiles. As stated in our review, acute toxicities of CAR T cells were first observed clinically and not in preclinical models, which at the time were mostly designed for efficacy testing and mainly consisted of in vitro coculture systems and in vivo models established in immunocompromised mice. Clearly, these models have limits.
There is an urgent need to develop preclinical assays that more effectively predict adverse events associated with engineered T cells, and also to understand the pathophysiological processes underlying these toxicities. Over the past few years, efforts have been made to generate HSPC-humanized mouse models, which have proven very useful for predicting the CRS and neurotoxicity observed in patients after CAR T-cell infusion. Although animals are multisystem organisms, humanized mice still exhibit deficits in the development of a complete human immune system. Moreover, given their high costs, complexity of implementation, and the associated ethical considerations, it is important to develop alternative approaches. The recent advent of ex vivo human models, especially organoids and organotypical systems, offers a great opportunity, as they can emulate human biology and in principle predict, in a personalized manner, some of the toxicities elicited by engineered T cells. Of course, there are fundamental challenges when working in vitro: one is the quantitative translation from an in vitro to an in vivo effect; another is ensuring that all cells and components responsible for adverse events induced by engineered T cells (eg, myeloid cells and endothelial cells) are present in the models. Armoring of engineered T cells (eg, with a dominant negative TGF-β receptor139) further increases complexity and creates an additional need for better translatable models.
In conclusion, predicting engineered T-cell toxicities using preclinical models is still in its infancy. Given this complexity, engineered T-cell safety assessment should not rely on a single model but should span a large battery of in silico, in vitro, and in vivo tools. Importantly, next-generation models should be designed as screening tools for both efficacy and toxicity testing of new engineered T cells. There are indications that innovative animal models, such as the humanized SGM3 model, are sensitive enough to detect subtle differences between CAR T cells generated from different starting cell sources.140 However, the robustness of such animal models in comparing cell products with similar properties remains to be verified. The validation of existing and novel preclinical models will contribute to the selection of cell products with improved safety and enhanced therapeutic value.
Ethics statements
Patient consent for publication
Ethics approval
This study does not involve human participants.
Acknowledgments
This project received funding from the Innovative Medicines Initiative 2 Joint Undertaking (grant agreement number 116026). This Joint Undertaking receives support from the European Union’s Horizon 2020 Research and Innovation program and European Federation of Pharmaceutical Industries and Associations (EFPIA).
References
Footnotes
MA, BA, SA, CB, BDA, RC, DE, AG, CH, ZI, CK-M, MJK, UK, CK, BL, FL, IM, JM, MAM, EM, HN, CQ, MR, KR, MR, ER, CS, HS, MT and JVdB contributed equally.
Contributors ED and MC conceived, contributed to and revised the manuscript. MH revised the manuscript. ML assisted with figure processing. All the other authors contributed equally and are listed in alphabetical order.
Competing interests ML is an inventor on a patent application related to CAR T-cell therapy filed by Philipps-University Marburg and the University of Würzburg. SA is an inventor of a patent in the field of adoptive T-cell therapy. CB received a research contract from Intellia Therapeutics and participated in the advisory boards of Molmed, Intellia Therapeutics, TxCell, Novartis, GSK, Allogene, and Kiadis, and is an inventor of patents in the field of adoptive T-cell therapy. DE’s PhD is cofunded between the academic lab led by ED as PhD supervisor and the industrial partner Invectys. ER is an inventor of a patent in the field of adoptive T-cell therapy. MT holds licensed patent related to CAR T cells. MH is an inventor on patents related to CAR T-cell therapy filed by the University of Würzburg. MC is an inventor of patents in the field of adoptive T-cell therapy. CH is an employee of Janssen R&D and shareholder of Johnson & Johnson stock. IM, BL, CH, and HN are full-time employees of Servier. RC, CKa, and JM are employees of Takeda Pharmaceuticals. MJ discloses research support from Kite/Gilead; honoraria for advisory boards, presentations and travel support from Kite/Gilead, Novartis, Celgene/BMS and Miltenyi Biotech (all to institution). All other authors state no potential competing interests.
Provenance and peer review Not commissioned; externally peer reviewed. | https://jitc.bmj.com/content/10/5/e003486 |
This year’s annual SynBioBeta SF 2016 conference featured a lively panel discussion on how stakeholders in the synthetic biology industry can go about safeguarding the bioeconomy. Panelists from the FBI, DARPA, academia, and industry all weighed in on this topic that is both fascinating and timely for the field of synthetic biology. The panel stimulated critical avenues of discussion about the future of biosecurity and left attendees to consider many important questions. Here we will continue the conversation that was started two months ago in San Francisco through an extended interview with one of the distinguished panelists: DARPA program manager Renee Wegrzyn.
- Can you define what you mean by “bioerror,” a term that you used during the panel discussion that was unfamiliar to many attendees?
While there is no official definition of “bioerror,” it can be interpreted as an adverse biological event that is the result of an accident, mistake, or insufficient understanding of biosafety and biosecurity. Unlike bioterror, bioerror is unintentional. An incident of bioerror could potentially threaten an environment or population of organisms if it extends beyond an otherwise contained environment.
- How do we go about standardizing and measuring biosafety and biosecurity?
It is difficult to measure the effectiveness of biosafety and biosecurity measures because their success is defined by a lack of incidents. However, we have a responsibility to mitigate any foreseeable potential risks. There are some great examples of how this can be approached. For example, each research institution must comply with the standards and requirements of their local Institutional Biosafety Committee (IBC), or national standards such as the NIH Guidelines for Recombinant and Synthetic Nucleic Acids and other similar documents that provide a comprehensive approach to assessing and mitigating risk in biological research.
One of the goals of [DARPA’s] Safe Genes program is to advance the scientific basis for ensuring biosafety and biosecurity as synthetic biology continues to evolve. In furtherance of that goal, researchers that are funded under the Safe Genes program are required to define quantitative metrics that will be used to assess the biosafety and biosecurity performance of their systems. For example, some of the notional examples of quantitative metrics that were provided to proposers in the original proposal solicitation include the development of genome editing controllers that result in off-target mutations that do not exceed natural mutation rates (e.g., 1×10⁻⁹ mutations per base pair per generation for insects and 2×10⁻⁸ for mammalian cells); and genome editing controllers whose performance does not degrade over time (over N number of generations).
With regard to novel countermeasures and inhibitors of genome editing, another quantitative safety performance metric could include the absence of toxicity or immunogenicity to the host (e.g., the edit does not elicit a host immune response).
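The natural-mutation-rate metric quoted above lends itself to a simple arithmetic check: divide the observed off-target mutations by the base pairs surveyed and the generations elapsed, then compare against the benchmark. The sketch below does exactly that; the observed counts are invented for illustration.

```python
# Sketch: checking an editing controller against the natural-mutation-rate
# metric quoted above. The observed counts are hypothetical.

NATURAL_RATE = {"insect": 1e-9, "mammalian": 2e-8}  # mutations / bp / generation

# Hypothetical deep-sequencing result for a mammalian-cell experiment:
offtarget_mutations = 12
basepairs_surveyed = 3.0e9   # roughly one mammalian genome
generations = 20

observed_rate = offtarget_mutations / (basepairs_surveyed * generations)
limit = NATURAL_RATE["mammalian"]

verdict = "within" if observed_rate <= limit else "exceeds"
print(f"observed {observed_rate:.1e} vs natural {limit:.1e} -> {verdict} metric")
```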
- How can we go about regulating independent bioengineering outside of an institutional setting?
One of the best ways, which is being championed by Ed You at the FBI, is to engage with these communities. Ed emphasizes the responsibility of all scientists to “safeguard” the science (listen to Ed You discuss the future of biosecurity on the Science Friday podcast here).
Independent bioengineering, DIY-bio, and biohacking provide great avenues for people to explore and learn about biology. However, it is in everyone’s interest to ensure that it is done safely. Therefore, it is important that these communities are informed of the risks of bioengineering, know where to go for relevant guidance on safety, and know who to contact should they need assistance.
- Is there a coordinated program on safety that incorporates all major synthetic biology stakeholders so that we can set the standards for the field?
Most risks posed by synthetic biology are similar to the risks of more traditional genetic engineering. In the laboratory, such risks are mitigated by compliance with the NIH Guidelines for Recombinant and Synthetic Nucleic Acids (and, for clinical studies, by FDA and other regulatory bodies). Biosecurity and dual-use risks are considered through the institutional policy for Dual-Use Research of Concern (DURC) and the Select Agents Regulations. Potential environmental risks are generally captured as a biotechnology product undergoes regulatory assessment and approval. Although these are not officially coordinated into one program, [the different programs] work together to help ensure biosafety and biosecurity of synbio research and products. As mentioned above, it is anticipated that DARPA’s Safe Genes program will strengthen the scientific foundation upon which additional safety standards may someday be developed and adopted.
- What would it take to capture the dark side of the public’s imagination and cause the whole synthetic biology industry to shut down?
Although particular applications may capture the public imagination (in both good and bad ways), it’s highly unlikely that synthetic biology as a whole would rise and fall as one entity. Synthetic biology is a diverse field and is comprised of various technologies that contribute to medicine, energy, the environment, consumer goods, etc. It is inextricably linked to the progress of modern technological development.
- We have been deploying biotech for nearly 35 years. What are the major lessons from the past that we’ve gathered so far?
Traditional biotechnology (e.g., genetically engineered corn and engineered bacteria for pharmaceutical production) has been used for decades and has been shown to have a safety profile similar to non-engineered products. However, as we apply more advanced biotechnology techniques, it becomes increasingly important to incorporate safety early in the design process. | https://synbiobeta.com/keeping-biosecurity-forefront-renee-wegrzyn/ |
DURHAM, N.C.—Researchers have demonstrated a new, highly precise way to switch the control sequences of the human genome on or off without changing the underlying DNA sequence.
Originally discovered as an antiviral system in bacteria, CRISPR (clustered regularly interspaced short palindromic repeats) is one of the hottest topics in genetic research today. By engineering a version of that system, researchers can both edit DNA sequences and control which genes are used.
Previous studies, however, showed that editing human DNA sequences with the system is not always as precise as researchers would like. Those results raised concerns about the use of CRISPR technology in studying human diseases.
As a potential solution, some researchers sought more precision by using CRISPR to target portions of DNA that control which genes are active without modifying any coding sequences of the genome. Instead of using the genetic cutting tool Cas9 that is often employed with CRISPR, they delivered proteins that affect the genes’ packaging system, effectively turning them on and off.
While the technique had been shown to work well, whether or not it had off-target effects wasn’t proven. Now, a team of researchers from Duke University has shown that these gene-controlling methods are capable of the high degree of precision required for basic science and medical research.
The power to control the genome’s switches would be especially important for studying and potentially treating human diseases such as cancer, cardiovascular disease, neurodegenerative conditions and diabetes, which can be driven by mutations in control regions of the genome. The hope is that overriding one of these switches could uncover and fix the root causes of many diseases. It could also help researchers understand and change how different people respond to drugs.
But only if the CRISPR technique is specific enough.
Soon after CRISPR was first described for editing human genes, several papers revealed that the technique can sometimes have off-target effects. This presents problems for gene therapy treatments and fundamental science projects, where researchers want to alter the function of specific genes without causing unintended side effects.
An alternative strategy was developed to switch on and off the genomic regions that control when genes are used without modifying the DNA sequence at all. In some cases, these switches can control several related genes at once, allowing researchers to strike chords instead of individual notes.
Gersbach turned to Reddy and colleague Gregory Crawford, who all work together in adjacent laboratories and offices in Duke’s Center for Genomic and Computational Biology, for help with these more specialized techniques.
Reddy has focused his career on investigating how gene switches work across the human genome, how those switches differ between individuals and the implications of these insights for human traits and diseases. Crawford, associate professor of pediatrics, has spent more than a decade developing techniques to identify control regions across the genome and how they vary between cell types, during development or in response to drug treatment.
It fell to Pratiksha Thakore, a Ph.D. student in Gersbach’s lab, to integrate the expertise of all three laboratories for studying the specificity of CRISPR in controlling these switches. While the results can’t prove that every experiment will have the same high level of precision, they provide a blueprint for researchers to assess these effects.
This work was supported by the National Institutes of Health (NIH) Roadmap Epigenomics Project (R01DA036865) and NIH grants (U01HG007900, R21AR065956, P30AR066527, DP2OD008586), the National Science Foundation (CBET-1151035) and the American Heart Association (10SDG3060033). | http://ddn-news.com/index.php?pg=132&articleid=10046 |
CAMBRIDGE, Mass., Oct. 24, 2019 (GLOBE NEWSWIRE) -- Intellia Therapeutics, Inc. (NASDAQ: NTLA), a leading genome editing company focused on the development of curative therapeutics using CRISPR/Cas9 technology is presenting one oral presentation and four poster presentations at the 27th Annual Congress of the European Society of Gene and Cell Therapy (ESGCT) meeting taking place October 22-25, 2019, in Barcelona, Spain.
“We are excited to share progress across Intellia’s in vivo and ex vivo programs at this important scientific venue,” said Laura Sepp-Lorenzino, Ph.D., chief scientific officer, Intellia Therapeutics. “Our data shows the complexity of the edits we are able to make with CRISPR/Cas9, while achieving important therapeutically relevant results. We are building on the success of our modular platform now having demonstrated consecutive targeted knockout and insertion genome edits in preclinical studies. Additionally, we presented data from our engineered cell therapy program, which continues to demonstrate the use of CRISPR/Cas9 for combined knockout and targeted integration in human T cells.”
Intellia Demonstrates Consecutive In Vivo Genome Editing in Alpha-1 Antitrypsin Deficiency Mouse Model
Intellia’s oral presentation highlights its alpha-1 antitrypsin deficiency (AATD) study showing that consecutive dosing of two distinct lipid nanoparticle (LNP) formulations, in adult mice, achieves two targeted genome editing events, resulting in knocking out the faulty gene and restoring therapeutic levels of normal alpha-1 antitrypsin protein (hAAT). Intellia’s approach for AATD uses a modular hybrid delivery system combining a non-viral LNP which encapsulates CRISPR/Cas9 with an adeno-associated virus (AAV) carrying donor DNA template. Compared to traditional viral-based delivery of gene editing components, Intellia’s LNP delivery system can overcome the inherent limitations of immunogenicity to facilitate multiple in vivo gene editing events.
In a mouse model harboring the human PiZ allele, the most severe genetic defect in AATD patients, Intellia first reduced expression of the defective protein using gene knockout. Three weeks following the PiZ allele knockout, Intellia inserted the normal human alpha-1 antitrypsin gene, resulting in stable, therapeutically relevant circulating protein levels throughout 12 weeks of observation. In the study, a sustained reduction of circulating PiZ protein levels of >98% was observed for over 15 weeks. This is the first in vivo demonstration of a non-viral delivery platform enabling a consecutive dosing approach to achieve multiple genome edits in the same tissue of the same animal. Intellia’s oral presentation, titled “In Vivo Gene Knockout Followed by Targeted Gene Insertion Results in Simultaneous Reduced Mutant Protein Levels and Durable Transgene Expression,” will be given by Anthony Forget, Ph.D., on October 25, 2019. This presentation will be available on Intellia’s website at www.intelliatx.com.
Intellia’s Poster Presentations
WT1-Specific TCR Engineered Cell Therapy Studies
Intellia presented new in vitro data showing that CRISPR/Cas9-mediated genome editing for in locus insertion, combined with endogenous T Cell Receptor (TCR) knockout, leads to significant reduction in mispairing of endogenous and transferred TCR chains. This approach is expected to generate transgenic-TCR (tg-TCR) T cell therapies for hematological cancers and solid tumors. Results demonstrate a highly efficient reduction of >98% in endogenous TCR α and β chains while reaching >70% insertion rates of tg-TCRs without further purification. The poster titled “Engineering of Highly Functional and Specific Transgenic T Cell Receptor (TCR) T Cells Using CRISPR-Mediated In Locus Insertion Combined with Endogenous TCR Knockout,” was presented on October 24, 2019, by Birgit Schultes, Ph.D.
Researchers also presented in vitro data showing that a library of WT1-specific TCRs were generated, several of which Intellia is currently evaluating as part of its lead engineered cell therapy program targeting Acute Myeloid Leukemia (AML). This presentation, “Generation of a Library of WT1-Specific T Cell Receptors (TCR) for TCR Gene Edited T Cell Therapy of Acute Leukemia,” was presented on October 23, 2019 by Intellia’s collaborator, Erica Carnevale, Ph.D., IRCCS Ospedale San Raffaele.
Primary Hyperoxaluria Study
Intellia showed the continued progression of its modular platform capability using CRISPR/Cas9 to knockout either hydroxyacid oxidase 1 (Hao1) or lactate dehydrogenase A (Ldha), leading to a dose-dependent and persistent reduction of urinary oxalate levels in a Primary Hyperoxaluria Type 1 (PH1) mouse model. Data shows Ldha gene disruption also decreased LDH enzyme activity in the liver and did not impair the disposition of lactate in either wild type or renally-impaired mice. These results highlight the potential of editing genes in the glyoxylate detoxification pathway using a non-viral delivery approach as a one-time treatment option for PH1. These data were presented as a poster, titled “CRISPR/Cas9-Mediated Gene Knockout to Address Primary Hyperoxaluria,” by Sean Burns, M.D., on October 24, 2019.
Off-Target Screening Platform
Intellia demonstrated its approach to assess off-target activity to identify highly specific CRISPR/Cas9 guides. Results from targeted off-target sequencing in edited cells showed that biochemical off-target discovery approaches were the most sensitive and accurate. These data were presented as a poster on October 23, 2019, titled “In Silico, Biochemical and Cell-Based Integrative Genomics Identifies Precise CRISPR/Cas9 Targets for Human Therapeutics,” by Dan O’Connell, Ph.D.
About Intellia Therapeutics
Intellia Therapeutics is a leading genome editing company focused on developing proprietary, curative therapeutics using the CRISPR/Cas9 system. Intellia believes the CRISPR/Cas9 technology has the potential to transform medicine by permanently editing disease-associated genes in the human body with a single treatment course, and through improved cell therapies that can treat cancer and immunological diseases, or can replace patients’ diseased cells. The combination of deep scientific, technical and clinical development experience, along with its leading intellectual property portfolio, puts Intellia in a unique position to unlock broad therapeutic applications of the CRISPR/Cas9 technology and create a new class of therapeutic products. Learn more about Intellia Therapeutics and CRISPR/Cas9 at intelliatx.com and follow us on Twitter @intelliatweets.
Forward-Looking Statements
This press release contains “forward-looking statements” of Intellia Therapeutics, Inc. (“Intellia” or the “Company”) within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements include, but are not limited to, express or implied statements regarding Intellia’s beliefs and expectations regarding its planned submission of an IND application for NTLA-2001 in mid-2020; its plans to generate preclinical and other data necessary to nominate a first engineered cell therapy development candidate for its AML program by the end of 2019; its plans to advance and complete preclinical studies, including non-human primate studies for its ATTR program, AML program and other in vivo and ex vivo programs such as its AATD program; develop our proprietary LNP-AAV hybrid delivery system to advance our complex genome editing capabilities, such as gene insertion; its presentation of additional data at upcoming scientific conferences regarding CRISPR-mediated, targeted transgene insertion in the liver of NHPs, using F9 as a model gene, via the Company’s proprietary LNP-AAV delivery technology, and other preclinical data by the end of 2019; the advancement and expansion of its CRISPR/Cas9 technology to develop human therapeutic products, as well as maintain and expand its related intellectual property portfolio; the ability to demonstrate its platform’s modularity and replicate or apply results achieved in preclinical studies, including those in its ATTR and AML programs, in any future studies, including human clinical trials; its ability to develop other in vivo or ex vivo cell therapeutics of all types, and those targeting WT1 in AML in particular, using CRISPR/Cas9 technology; the impact of its collaborations on its development programs, including but not limited to its collaboration with Regeneron Pharmaceuticals, Inc. or Ospedale San Raffaele; statements regarding the timing of regulatory filings regarding its development programs; and the ability to fund operations into the second half of 2021.
Any forward-looking statements in this press release are based on management’s current expectations and beliefs of future events, and are subject to a number of risks and uncertainties that could cause actual results to differ materially and adversely from those set forth in or implied by such forward-looking statements. These risks and uncertainties include, but are not limited to: risks related to Intellia’s ability to protect and maintain our intellectual property position, including through our arbitration proceedings against Caribou; risks related to Intellia’s relationship with third parties, including our licensors; risks related to the ability of our licensors to protect and maintain their intellectual property position; uncertainties related to the initiation and conduct of studies and other development requirements for our product candidates; the risk that any one or more of Intellia’s product candidates will not be successfully developed and commercialized; the risk that the results of preclinical studies will not be predictive of future results in connection with future studies; and the risk that Intellia’s collaborations with Novartis or Regeneron or its other ex vivo collaborations will not continue or will not be successful. For a discussion of these and other risks and uncertainties, and other important factors, any of which could cause Intellia’s actual results to differ from those contained in the forward-looking statements, see the section entitled “Risk Factors” in Intellia’s most recent annual report on Form 10-K as well as discussions of potential risks, uncertainties, and other important factors in Intellia’s other filings with the Securities and Exchange Commission. All information in this press release is as of the date of the release, and Intellia undertakes no duty to update this information unless required by law.
Intellia Contacts:
Media:
Jennifer Mound Smoter
Senior Vice President
External Affairs & Communications
+1 857-706-1071
[email protected]
Investors: | https://news.cambridgeonline.us/press-releases/intellia-therapeutics-presents-in-vivo-and-ex-vivo-data-at-the-2019-annual-congress-of-the-european-society-of-gene-and-cell-therapy-esgct-313319 |