Through the study of four marine sediment columns taken at two different underwater Classic Maya sites identified as saltworks facilities in southern Belize, this research had the objective of providing insights into the occupation of these sites and the formation of their archaeological record. The marine sediment studied in this research was composed of partially decomposed plant matter, inorganic minerals, and water in different proportions, with mangrove roots composing the major organic component of the mangrove peat. This research included macroscopic descriptions of the marine sediment, loss-on-ignition of 32 samples uniformly distributed throughout the sediment columns to determine the percentage of organic content, and microscopic characterizations of samples throughout the columns. The results obtained through loss-on-ignition show clear patterns of organic content distribution throughout the marine sediment columns that, along with macroscopic and microscopic characterizations of the marine sediment, suggest the effects of human activities in the areas where the sediment was collected. Occupation levels at these sites were tentatively identified at 35 cm to 55 cm depth below the modern sea floor at Site 24, and 45 cm to 60 cm at Site 35. Since archaeological artifacts are found at the modern sea floor at these sites, bioturbation was likely an important element in the formation of the archaeological record at both underwater sites.
Date
2011
Document Availability at the Time of Submission
Secure the entire work for patent and/or proprietary purposes for a period of one year. Student has submitted appropriate documentation which states: During this period the copyright owner also agrees not to exercise her/his ownership rights, including public use in works, without prior authorization from LSU. At the end of the one year period, either we or LSU may request an automatic extension for one additional year. At the end of the one year secure period (or its extension, if such is requested), the work will be released for access worldwide.
Recommended Citation
Rosado Ramirez, Roberto, "Analysis of Marine Sediment of Prehispanic Maya Saltworks 24 and 35 in Paynes Creek National Park, Southern Belize" (2011). LSU Master's Theses. 2034.
https://digitalcommons.lsu.edu/gradschool_theses/2034
Committee Chair
McKillop, Heather I. | https://digitalcommons.lsu.edu/gradschool_theses/2034/ |
Project Recover Lead Archaeologist
Drew Pietruszka, Ph.D., is an underwater archaeologist at the Scripps Institution of Oceanography and Project Recover’s Lead Archaeologist.
The role of the Lead Archaeologist is to ensure that Project Recover incorporates the highest level of archaeological and forensic methods and documentation in all its activities.
Dr. Pietruszka designs, plans, and directs forensic and archaeological investigations to bring MIAs home. He leads the search, documentation, and excavation of underwater crash sites.
“A part of our obligation as citizens is to give back to our society. There are many ways to serve. This particular mission is the way I’ve found to serve others. I feel we owe it to those who have given the ultimate to honor their sacrifice and, when we can, bring closure to their families.” – Andrew Pietruszka, Ph.D.
Preparation for MIA Missions
Drew serves as the subject matter expert for all archaeological and forensic work conducted by the organization. The scope of his oversight is significant.
He ensures Project Recover’s field methods, project personnel, and reporting are in line with the MIA field’s best practices and compliant with international archaeological/forensic policies.
Drew ensures Project Recover complies with all local, state, national, and international regulations. He communicates and coordinates with US government agencies (including Defense POW/MIA Agency, Department of Defense, Department of State, and the Naval History and Heritage Command), foreign governments, and international agencies.
“While our focus is on recovering U.S. MIAs, it is important to remember that these crash sites are not only our history. They are also the history of the people who live where these sites are found. The wars significantly impacted their ancestors, just like they did ours. When we go into the field, we are guests in their country. We must respect local customs and laws and work together to meet all parties’ needs.” – Dr. Andrew Pietruszka
Drew also works with Project Recover’s Lead Historian, Dr. Colin Colbourn, to develop a database of American MIAs worldwide. The database is a resource for gathering information about MIAs and evaluating potential search and recovery mission cases. The database has more than 700 cases, representing nearly 3,000 MIAs. (Have a family MIA? Contact Us)
Underwater Archaeologist: On-Site
Project Recover conducts two main types of field missions: investigations, which focus on finding and documenting new sites associated with MIA cases, and recoveries, which excavate MIA remains from previously investigated sites.
When Drew is on a mission such as the 2021 MIA Recovery Mission in Palau or the 2021 Investigation in Vietnam, he is one of the scientists overseeing the mission.
Project Recover investigations utilize state-of-the-art technology like REMUS 100 autonomous underwater vehicles (AUV) equipped with a variety of sensors (sidescan sonar, magnetometer, camera, multibeam sonar, etc.) and remotely operated vehicles (ROV) to scan the seafloor and document sites.
Drew works with the team’s engineers, oceanographers, data specialists, and divers to apply these technologies and interpret findings. The survey uses a find, fix, and finish model as a framework for organizing the workflow. In this concept, daily operations provide data analyzed in the field by Drew and the team. Each day’s information guides the next phase of the mission’s operations. Throughout the survey, the right sensing technologies and undersea vehicles must be selected for each job.
“Often, there is a trade-off between resolution and the area we can cover. Typically, these two parameters are inversely proportional (sensors that “see” far underwater do so with reduced resolution), so there is a trade-off in sensing performance that we must constantly balance in the field.”
When a new site is discovered, Drew works with the team to document the site. The team can document the site using SCUBA divers or the team’s AUVs or ROV. Documentation often includes still images, video, acoustic imaging, measurements, and observations. The data gathered is predominantly focused on identifying the site and aiding future MIA recovery mission planning.
“I spent four years leading underwater recoveries with DPAA. When working with the Project Recover team to document a site, I’m always thinking about what I would want to know beforehand if tasked to carry out a recovery at this site. I ensure we capture that information and share it with our partners at DPAA.”
Leading a recovery mission, Drew relies heavily on his training as an archaeologist.
Initially, Drew needs to determine how deep the team will need to excavate in particular areas of the crash site. Drew may task a diver to collect sand from 0-20 cm deep in an assigned section. The team will put the accumulated sediment through a screen to see if crash-related artifacts exist.
We are constantly working to scientifically gather data, interpret it, and use it to assess the site and plan our next step in excavation.
After the team has a clear picture of the strata 0-20 cm deep, Drew tasks a diver to excavate at 20-40 cm deep.
Drew interprets the provenience of the artifacts and uses it to inform the strategy. Provenience is the exact location – horizontally and vertically – where an artifact is found on an archaeological or forensic site.
“The spatial relationship between objects is important,” Drew said. “If we find two sets of dog tags, for instance, our leading assumption is that the one nearest to a particular set of human remains will help us identify that MIA.”
As a result, Drew maps the crash site meticulously. First, he makes a scale computer drawing of the site. Then he guides the team to create an archaeological grid out of PVC pipe at the site.
Divers bring the PVC grid underwater and arrange it on the seafloor. In this way, the team has common points of reference. What Drew discusses with divers above water, divers see in the grid when they go below.
Drew keeps a daily excavation log, tracking what units are excavated and to what depth. He also tracks what objects are found, in what unit, and at what depth.
Underwater Archaeology: Reporting
Archaeologists often say, “one day in the field equals ten days in the lab.” The same rule applies to Project Recover missions.
As Lead Archaeologist, Drew spends a lot of time analyzing data and writing reports to share with Project Recover’s partners, including Defense POW/MIA Agency, Department of Defense, Naval History and Heritage Command, and International Historic Preservation Offices.
Many of the crash sites discovered by Project Recover are in bad shape, the wrecking process and years underwater having taken their toll. Once back from the field, Drew spends days poring over all the pictures and videos the team collected at the site, matching key components of the aircraft with representative samples located in Project Recover’s library of parts catalogs and flight manuals.
Drew’s goal is to find portions of wreckage that can identify:
- the type of aircraft,
- a specific aircraft, or
- whether it is associated with any particular MIA. For example, if the pilot is MIA, where is the aircraft’s cockpit within the debris field?
Once Drew has solved as much of the puzzle as possible, he writes his findings in a comprehensive report. Once complete, he passes the report to DPAA. DPAA then evaluates the site for further work.
Childhood: Finding Passions
Drew grew up in New York, Illinois, Virginia, Tennessee, and Florida. Drew loved the outdoors as a child and explored the woods tirelessly with his brothers. They were thrilled to discover evidence of an old homestead at their home in Richmond, Virginia. They investigated the old chimney and brickwork foundation. Drew wandered through the old graveyard, wondering about the people and families who used to live there. Every piece of old farm equipment, rusted tin can, and old Mason jar further captured his imagination.
“I didn’t like to read as a kid, but these things brought history to life for me. I would always imagine who the people who made these things were. What were their lives like? Where were they from?”
Those early years playing in the forest and excavating for hidden treasures foreshadowed Drew’s future career as an archaeologist.
As a teen, Drew lived in Florida. There he discovered his love of water. He surfed and became a SCUBA diver there. It, too, would influence his choice later to focus on underwater archaeology.
A Turning Point: Blackbeard’s Ship and Underwater Archaeology
At the University of Central Florida, Drew majored in biology; however, he never lost his interest in the past. He took classes in archaeology and anthropology whenever his schedule permitted. As he was completing college, updates about the discovery of Blackbeard’s ship, the Queen Anne’s Revenge, peppered the news.
It was an ‘aha’ moment for Drew.
Underwater archaeology was a novel field that he hadn’t previously known existed. It was perfect for Drew, blending his love of history with his passion for diving and the water.
He went to East Carolina University and earned a Master’s degree in Maritime Studies/Nautical Archaeology in 2005.
Pursuing a Doctorate & World of Travel
Drew went to Syracuse University to earn a Ph.D. in Anthropology focused on archaeology. He wanted the rigorous academics of combining anthropology with written and archeological records.
His doctoral program also opened up a world of travel.
Drew’s research focused on African-European contact in West Africa during the Atlantic Trade. Over three trips, he spent close to a year living there.
When his research permitted, he and his buddies traveled extensively, with a plane ticket, a Lonely Planet, and no particular plans. They hopped a local riverboat to Timbuktu, slept in the streets of Mali, and backpacked through Burkina Faso and Niger.
“One of the things I love about travel is it reminds you of how good humanity is.”
The Ghanaian People Prompt Desire for Service
Specifically, Dr. Pietruszka’s dissertation focused on excavating and interpreting two European ships discovered at Elmina, Ghana. Drew spent seven months living there continuously while carrying out his research.
He lived in a little local house in a small village. He ate local food and worked with the local fishermen every day. He got to know the community.
“It was a seminal change for me.”
Drew has traveled to more than 50 countries and said 99.9% of his interactions with people were affirming.
But Drew’s time in Ghana was eye-opening. There, he witnessed the extremes of both poverty and generosity. He was overwhelmed by the kindness of people who had little but always had something to give a guest in their home or village.
“When I returned home, I longed for service that would make my career more meaningful.”
Drew did not know then what purpose he would serve, but he knew life as an underwater archeologist alone would not be enough.
JPAC: Blending Underwater Archaeology with Service
When Drew returned to Syracuse to finish his doctorate, he contemplated how to incorporate a greater degree of service in his career.
He saw a brochure about JPAC, the Joint POW/MIA Accounting Command, now called Defense POW/MIA Accounting Agency (DPAA). Their mission is to give the fullest possible accounting of the country’s MIAs to their families and the nation.
A lightbulb went off for Drew.
Working to help bring MIAs home combined service with underwater archaeology.
Despite working in the tiny field of underwater archaeology, Drew had never heard of JPAC.
He searched online until he found an email address.
Then Drew wrote a short email introducing himself, his skillset, and why DPAA should hire him.
For six months, Drew heard nothing.
Then, out of the blue, he got a call. The phone call turned into an interview and ended with a job offer.
Within seven months, Drew finished his Ph.D. and moved to Hawaii to start as a Forensic Archaeologist with JPAC.
JPAC: Extensive Field Work
Drew began work as a Forensic Archaeologist with JPAC in 2011. He worked there for four years. Much of that time was spent in the field, away from home, overseeing recovery excavations. It was equally extraordinary and rewarding.
As a forensic underwater archaeologist, Drew helped oversee JPAC’s underwater operations. He planned and directed recovery operations. He helped implement improvements that increased the number of annual missions and led to more recoveries.
While there, Drew completed the five-month long US Navy Dive School, Second Class Diver training. He is one of only a handful of civilians to have completed the course.
With Drew’s help, JPAC’s underwater program was growing, reinventing itself to a new level of excellence. Drew was named interim director of DPAA’s Laboratory at Offutt Air Force Base, Omaha, Nebraska.
Drew’s family, however, was also growing, and his wife was starting medical school.
Project Recover: Lead Archaeologist
Pat Scannon, Eric Terrill, and Mark Moline were in the beginning stages of forming Project Recover. They had located the Punnell Hellcat and the Savage TBM Avenger in 2014 and knew they could accomplish great things in partnership.
It just so happened that the Project Recover finds landed on Drew’s desk at JPAC. In 2015 he led a team of military divers in the recovery at the crash site of Lt. William Q. Punnell. Drew’s time in Palau overlapped with Project Recover’s that year. It allowed him to meet Pat, Eric, and Mark in person. Drew also knew they could accomplish great things in partnership.
When Project Recover got funding from The Friedkin Group in 2015, Andrew Pietruszka, Ph.D., was the first person the team hired.
Drew was hired as an underwater archaeologist postdoctoral researcher with the University of Delaware’s Earth, Ocean, & Environment in 2016.
Then he was hired as an Underwater Archaeologist and Academic Program Management Officer at the Coastal Observatory Research and Development Center at Scripps Institution of Oceanography at the University of California, San Diego.
Today, Drew continues to work at Scripps and is the Lead Archaeologist for Project Recover.
Family, Food, and Fun
Drew and his wife Sarah live in Salt Lake City, Utah. Together, they have three children: Seeger, Sadie, and Jack. Sarah is a pediatrician completing a Sports Medicine fellowship at the University of Utah.
When not out in the field or researching MIA cases, Drew likes to spend his time with his family exploring Utah’s amazing outdoors. He loves snowboarding, hunting, fishing, and hiking in the surrounding mountains.
Drew’s other passion in life is food, particularly cooking. When Drew and Sarah first met, he used to make elaborate multi-course meals with creative presentation and plating. Drew still loves to cook, but with three children it’s not as fancy. Some of the kids’ favorites include chicken and dumplings, gumbo, and breakfast burritos (for dinner, of course). | https://www.projectrecover.org/about/dr-andrew-pietruszka-underwater-archaeologist/ |
After 20 years of research, the Girona Underwater Vision and Robotics team has become a benchmark in Europe for the design and construction of autonomous underwater vehicles and the development of cutting-edge software for the processing of visual and acoustic data.
Do you want to use our AUVs free of cost?
Developing research underwater robots since 1995 has given us the knowledge and experience to make autonomous underwater vehicles (AUVs) which are both reliable and versatile as research platforms.
Our research on mapping techniques has led to several processing workflows for the generation of 2D and 3D photomosaics, sonar mosaics and bathymetries of high quality, accuracy and resolution.
Taking part in research projects often requires the development of new equipment. Because of that, we are also experts in designing underwater technology for custom solutions. | https://cirs.udg.edu/ |
Browsing by Author "Hamilton, Michael"
- Adaptive autonomous underwater vehicles for littoral surveillance: the GLINT10 field trial results. Kemna, Stephanie; Hamilton, Michael; Hughes, David T.; LePage, Kevin D. (NURC, 2012/05). Autonomous underwater vehicles (AUVs) have gained more interest in recent years for military as well as civilian applications. One potential application of AUVs is for the purpose of undersea surveillance. As research into ...
- Antisubmarine warfare applications for autonomous underwater vehicles: the GLINT09 field trial results. Hamilton, Michael; Kemna, Stephanie; Hughes, David T. (NURC, 2012/05). Surveillance in antisubmarine warfare (ASW) has traditionally been carried out by means of submarines or frigates with towed arrays. These techniques are manpower intensive. Alternative approaches have recently been suggested ...
- Collaborative multistatic ASW using AUVs: demonstrating necessary technologies. Hughes, David T.; Baralli, Francesco; Kemna, Stephanie; Hamilton, Michael; Vermeij, Arjan (NURC, 2009/12). Many research laboratories and several nations are showing an interest in the 'underwater networked battlespace' in which a combination of stationary and mobile underwater platforms communicate wholly or partially by the ... | https://openlibrary.cmre.nato.int/browse?type=author&value=Hamilton%2C+Michael |
Many radio device manufacturers specify a parameter referred to as the “maximum transmission range (in the open space)”. The term “in the open space” refers to a theoretical and ideal condition, in which the radio waves are propagated in a vacuum, i.e. in conditions not available to the standard wireless system users, e.g. intruder alarm systems.
The article discusses actual conditions of radio wave propagation in reference to ISM (Industrial, Scientific, Medical) equipment. The bands are mostly used by Class I devices (i.e. devices that may be operated without a radio licence) in industrial, scientific, medical and domestic applications. Currently, ISM bands are classified into several groups (Table 1).
Table 1. ISM band groups
Remember, the table shows the frequency ranges allocated in the EU. Radio transmission in the ISM bands is limited, and selected channels may be assigned a specific use, e.g. monitoring. Other limitations include output power, channel width or band usage period. Detailed information is available in the ITU Radio Regulations, ITU recommendations and the National Frequency Allocation Table.
Currently, the most utilized ISM band is UHF 433 MHz, used by many radio devices, and it is difficult to find a channel free from interference in this band, in particular in large urban areas. The frequency range 433.05–434.79 MHz is intended for amateur applications. There is a risk that intruder alarm devices operating in this band may be interfered with by radio amateur stations operating at significantly higher powers, approx. 100 W; for comparison, the intruder alarm device power in this range does not exceed 10 mW. Most of the currently manufactured intruder alarm systems operate at higher frequencies, e.g. 868 MHz, 2.4 GHz or 5.8 GHz. At 868 MHz, the frequency occupancy factor is relatively low, in particular at 868–869.7 MHz.
Maximum transmission range: in actual conditions it may be several times smaller than the maximum transmission range in open space. Wave attenuation can be affected by many factors.
The terrain can be separated into three basic types.
Fig. 1. Three basic terrain types: (1) space with obstacles, (2) semi-open space, (3) open space
The illustration shows three types of spaces where communication can be planned. If the transmitting and receiving antennas are not optically "visible" to each other (for bands over 300 MHz), communication is not possible (Fig. 1.1). In Fig. 1.2 (semi-open space), the communication may be unsatisfactory, since the signal power is low.
The radio wave at ISM frequencies may be attenuated by different objects, e.g.:
- internal building wall – attenuates signals by 10–15 dB,
- external building wall – attenuates signals by 2–38 dB,
- floor – depending on the material used, attenuates signals by 12–27 dB,
- window – attenuates signals by 2–30 dB depending on the material used and the glazing cavity gas composition. Older windows filled with air do not attenuate the signal too much; however, windows filled with noble gas attenuate the signal to a greater degree.
For comparison, a signal attenuated by 30 dB is a thousand times weaker than the initial signal (before attenuation).
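As a rough illustration of how these figures combine, attenuation values expressed in decibels simply add along the path, and the total can be converted back into a linear power ratio. The following is a minimal Python sketch; the specific wall, floor and window values are assumptions picked from the ranges listed above, not measurements of any particular building.

# Decibel losses add along the path; convert the total back to a linear ratio.
obstacle_losses_db = {
    "internal building wall": 12,   # assumed, within the 10-15 dB range above
    "floor": 20,                    # assumed, within the 12-27 dB range above
    "noble-gas-filled window": 25,  # assumed, within the 2-30 dB range above
}

total_loss_db = sum(obstacle_losses_db.values())
linear_factor = 10 ** (total_loss_db / 10)  # how many times weaker the signal becomes

print(f"Total obstacle attenuation: {total_loss_db} dB")
print(f"The signal arrives about {linear_factor:,.0f} times weaker")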
The best solution is to install the receiving and transmitting antenna without any obstacles between them (Fig. 1.3), and if the transmitting devices are located outside, the receiving antenna should also be installed outside.
Built-in antennas may reduce the signal power due to their design.
The first Fresnel zone is an elliptical zone whose axis is the straight line between the transmitter and the receiver (Fig. 2). In practice, the first Fresnel zone directly affects the transmission range.
Fig. 2. First Fresnel zone
When designing the communication system, remember to avoid any obstacles, either natural or artificial, in the first Fresnel zone (otherwise the wave will be attenuated and the communication may be interrupted). The zone is the area through which most of the signal energy is transmitted.
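The size of that zone can be estimated from the standard first-Fresnel-zone formula, r = sqrt(λ · d1 · d2 / (d1 + d2)), where d1 and d2 are the distances from the point of interest to each antenna. Below is a minimal Python sketch; the 868 MHz frequency and 200 m link length are illustrative assumptions.

import math

C = 3e8  # speed of light in m/s

def first_fresnel_radius(freq_hz, d1_m, d2_m):
    """Radius of the first Fresnel zone at a point d1 metres from one antenna and d2 from the other."""
    wavelength = C / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

# Example: 868 MHz link over 200 m; the zone is widest at the midpoint.
link_length_m = 200.0
radius_mid = first_fresnel_radius(868e6, link_length_m / 2, link_length_m / 2)
print(f"First Fresnel zone radius at the midpoint: {radius_mid:.2f} m")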
Propagation is the phenomenon describing how a wave (e.g. a radio wave) travels through a medium. In the case of wireless sensors, alarms and monitoring systems, propagation refers to how the radio wave travels through the air.
Any obstacles with smooth edges will scatter (attenuate) the radio waves to a significantly higher degree than sharp-edged obstacles. Wave propagation may also be affected by the weather: strong winds, rain or storms may result in electrostatic charges being accumulated on the antenna and may cause interference. The heavier the rain, the worse the conditions of wave propagation, especially at higher frequencies.
The maximum transmission range and the wave propagation mode directly depend on its frequency. The higher the frequency, the lower the signal's susceptibility to interference and the shorter the range.
The radiated power (ERP or EIRP) has a significant effect on the maximum transmission range. ERP is determined by the transmitter power, the transmitting antenna gain, and the attenuation of the transmitter-to-antenna path (mainly affected by the quality of the cables and connectors used); it is a combination of those three parameters. The radiated power is the actual power at the transmitting antenna output. The maximum transmission range may also be affected by the height of the electric centre of the antenna (meters above sea level) and the altitude of the antenna mast base (meters above sea level).
The receiver can be defined by its receiving antenna gain, sensitivity and selectivity.
Receiving antenna gain affects the received signal quality and increases the maximum transmission range. Sensitivity of the receiver is a measure of its ability to receive weak signals. Selectivity is a measure of its ability to separate the usable signal from many other signals (e.g. interferences and noise).
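These transmitter- and receiver-side parameters can be pulled together into a simple free-space link budget. The sketch below uses the standard free-space path loss formula; the EIRP, antenna gain and sensitivity numbers are assumed example values, and real obstacles add further losses on top of the free-space figure.

import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss in dB for a distance in km and a frequency in MHz."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

# Assumed example values for an 868 MHz link.
eirp_dbm = 10.0           # radiated power of the transmitter (EIRP)
rx_antenna_gain_dbi = 2.0 # receiving antenna gain
sensitivity_dbm = -110.0  # receiver sensitivity

distance_km = 1.0
received_dbm = eirp_dbm + rx_antenna_gain_dbi - free_space_path_loss_db(distance_km, 868.0)
margin_db = received_dbm - sensitivity_dbm

print(f"Received power at {distance_km} km: {received_dbm:.1f} dBm (link margin {margin_db:.1f} dB)")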
Similar to the hardware factor, the maximum transmission range may also be affected by: height of the electric centre of the antenna (meters above sea level), altitude of the antenna mast base (meters above sea level) and receiver-receiving antenna path attenuation for the receivers. | https://shopdelta.eu/maximum-radio-communication-range_l2_aid900.html |
O.V. Vibornov, S.V. Kozlov, E.A. Spirina, E.A. Petrova, E.V. Larin
The ability to predict signal strength is an important part of designing a mobile communication system. In a city, the signal may propagate from the transmitter to the receiving point as a direct wave, by diffraction, by dispersion, and by reflection from obstacles, so propagation in the city is multi-ray. Different statistical signal strength prediction models are usually used to solve this task, but only a few of them allow the ray parameters to be calculated. The Fumio Ikegami model, which was developed for an ideal city with buildings of regular height, is one of them. According to this model, the sum of the powers of only two rays, the ray diffracted over and the ray reflected from the nearest building, is enough to estimate the received signal strength. This article examines the adequacy of using this model in general and of applying it to an irregular city, Kazan. It presents an analysis of signal propagation for the CDMA One mobile communication system in two districts of Kazan. The conclusion of our research is that using only two rays is adequate, with some precision, when receiving CDMA One signals, and that the expressions for calculating ray intensity with propagation attenuation on the path from transmitter to receiver are usable.
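In practice, the two-ray estimate amounts to summing the powers of the diffracted and the reflected ray. The short Python sketch below shows the power summation in dBm; the -95 dBm and -100 dBm ray powers are purely illustrative and are not values taken from the Kazan measurements.

import math

def combine_two_ray_powers_dbm(p1_dbm, p2_dbm):
    """Total received power (dBm) of two rays whose individual powers are given in dBm."""
    total_mw = 10 ** (p1_dbm / 10) + 10 ** (p2_dbm / 10)
    return 10 * math.log10(total_mw)

# Illustrative powers for the diffracted and reflected rays.
print(f"{combine_two_ray_powers_dbm(-95.0, -100.0):.1f} dBm")  # about -93.8 dBm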
References:
- Kirjushin G. V., Maslov O. N., Shatalov V. G. Proektirovanie, razvitie i ehlektromagnitnaja bezopasnost setejj sotovojj svjazi standarta GSM / Pod red. O. N. Maslova. M.: Radio i svjaz. 2000.
- Babkov V. Ju., Voznjuk M. A., Mikhajjlov P. A. Seti mobilnojj svjazi. Chastotno-territorialnoe planirovanie. SPb.: SPb GUT, 2000.
- Makoveeva M. M., Shinakov Ju. S. Sistemy svjazi s podvizhnymi obektami: Ucheb. posobie dlja vuzov. M.: Radio i svjaz. 2002.
- Fumio Ikegami, Susumu Yoshida, Tsutomu Takeuchi, Masahiro Umehira. Propagation factors controlling mean field strength on urban streets // IEEE Transactions on Antennas and Propagation. Aug. 1984. V. AP-32. № 8.
- Hata M. Empirical formula for propagation loss in land mobile radio services // IEEE Trans. Vehicular Technology. 1980. V. 29. № 3. P. 317–325.
- Okumura Y., Ohmori E., Kawano T., Fukuda K. Field Strength and Its Variability in VHF and UHF Land Mobile Service // Rev. Elec. Comm. Lab. 1968. V. 16. IX-X. P. 825–873.
- Parsons J. D. The Mobile Radio Propagation Channel. John Wiley & Sons, Inc., 1992.
- Walfisch J., Bertoni H. L. A theoretical model of UHF propagation in urban environments // IEEE Trans. on Antennas and Propagation. Dec. 1988. V. 36. P. 1788–1796. | http://radiotec.ru/article/12550 |
Detailed 3D Maps for the City of Brussels
Nowadays city stakeholders (companies, inhabitants, urban planners, city agencies etc.) have an increasing need to plan, view and simulate various aspects of the environment, weather, growth, and density of a city. The issues that require cities to consider 3D models include shadow or noise impact, wave propagation, line-of-sight analysis, urban planning, flood analysis, etc. Only with detailed, immersive visualizations and maps can stakeholders truly get a picture of how these various dynamics affect the city.
The Centre of Informatics for Brussels Capital Region (CIBG) needed a detailed 3D model of buildings in the city of Brussels to accurately model the mobile phone signal reach and penetration throughout the city. With a model, the CIBG could determine if and where additional telecommunication antennas were needed.
For the required level of detail, simple block models would not suffice. CIBG needed detailed and accurate roof structures to adequately model signal wave penetration and come up with the best locations to place antennas. The city engaged Avineon to create a process to integrate various source data formats, including vector data, Digital Terrain Models (DTM), ortho-photos, stereo-imagery and Light Detection and Ranging (LiDAR) data, into an immersive 3D model of the entire region comprising over 250,000 buildings and over 200 bridges.
Using newly acquired aerial images, Avineon first updated the 2D maps which served as a basis for the 3D model. All differences between vector data and the aerial images were identified, marked and sent to a photogrammetry team that did the update via stereo-restitution. This resulted in an up-to-date 2D base map.
In a multi-step process, Avineon then created 3D building footprints, perfectly aligned with the updated 2D map, and corresponding to the height of the Digital Terrain Models, as well as detailed roofs and walls. For most buildings, Avineon developed a semi-automatic process based on the LiDAR data, which was acquired at the same time as the aerial images that were used for the 2D Map updates. For modeling landmarks and complex buildings such as churches, unique skyscrapers, and sports stadiums, Avineon used manual modeling techniques based on their expertise in stereo-restitution.
Though not part of the scope of the work with CIBG, Avineon could have also used oblique image integration to create additional texture for buildings in and around the city. This could eventually allow a city to add a level of immersion that accounts for detailed views of various scenarios and simulations, but this was ultimately unnecessary for CIBG’s study.
With a comprehensive strategy, Avineon was able to deliver a rich 3D model of the city of Brussels to the CIBG for their specific purposes. The model was also made available for public and private use for city stakeholders of Brussels.
To see maps for yourself and read more on the project, click to go to the article in LiDAR Magazine. | https://avineonlab.com/city-visualization/ |
Basic Methods of Propagation
Introduction:
Reflection, diffraction and scattering have an impact on wave propagation in a mobile communication system. The most important parameter, received power, is predicted by large-scale propagation models based on the physics of reflection, diffraction and scattering.
Three Basic Propagation Mechanisms:
REFLECTION:
Reflection occurs when a transmitted signal strikes a surface and some of the signal power is reflected back toward its origin rather than being carried all the way to the receiver. When reflection occurs, the angle of incidence is equal to the angle of reflection for a conducting surface, as would be expected for light. When a signal is reflected there is normally some loss of the signal, either through absorption or as a result of some of the signal passing into the medium.
Figure 1.14 Sketch of the three important propagation mechanisms
Reflection occurs from large buildings and the earth's surface.
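For reflection from the earth's surface, the classical two-ray ground-reflection approximation shows how the reflected path shapes received power at large distances. The Python sketch below assumes unit antenna gains and example antenna heights, and it is only valid well beyond the crossover distance where the approximation holds.

def two_ray_received_power_w(pt_w, gt, gr, ht_m, hr_m, d_m):
    """Two-ray ground-reflection approximation for received power, valid at large distances d."""
    return pt_w * gt * gr * (ht_m * hr_m) ** 2 / d_m ** 4

# Example: 10 W transmitter, unit-gain antennas at 30 m and 1.5 m, receiver 2 km away.
pr_w = two_ray_received_power_w(10.0, 1.0, 1.0, 30.0, 1.5, 2000.0)
print(f"Received power: {pr_w:.2e} W")  # falls off as 1/d^4 rather than 1/d^2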
DIFFRACTION:
Diffraction is the apparent bending of waves around small obstacles and the spreading out of waves past small openings.
It occurs at obstacles with dimensions on the order of the wavelength (lambda).
SCATTERING :
Scattering is a general physical process in which light, sound, or moving particles are forced to deviate from a straight trajectory by one or more localized non-uniformities in the medium through which they pass.
It occurs at obstacles with sizes on the order of the signal's wavelength or less.
Examples include foliage, lamp posts, street signs, walking pedestrians, etc. | http://www.faadooengineers.com/online-study/post/ece/wireless-communication/79/basic-methods-of-propagation |
Most of my time is spent writing about digital design and verification and I am about to start running a series on intellectual property (IP). Sadly, I can’t say that I am terribly surprised that I have had no submission on the subject of analog IP.
Why is that? We know analog IP does exist and probably should be in wider use, but for some reason it is another area where analog design and integration has stuck around in the old world. It's time that changed.
Clearly, without abstractions in the analog world, the only IP that exists is hard IP. This is IP that has already been through the layout process, has been characterized and verified on a specific foundry process, and is ready to be used. This means that the business is unlikely to be as profitable as the digital IP business where soft IP can be shipped and the layout and routing left to the chip designer. But then again, hard IP, which clearly contains more of the back-end effort, is probably not as cheap as soft digital IP. Some of these analog IP blocks contain very deep technical knowledge about the interface or subject that is unlikely to be shared by too many people in the industry, meaning that it is possibly a better implementation than most companies would be able to create on their own.
These days, almost every complex chip targeting cellphones, tablets, set-top boxes, automotive, and many others will contain one or more IP blocks. There may be blocks such as SerDes, PLLs, PHYs, DACs, and ADCs. An increasing number of sensors are being included these days, and this phenomenon will likely increase as additional chips are made for the Internet of Things (IoT). These chips require very high levels of integration and ultra-low power consumption especially since many of them have to scavenge power. Assuming that design times do not increase, there will be a growing pressure to start using pre-existing analog blocks.
The first question to come up is probably whether we can leverage a previous design and thus do internal reuse. This is probably going on a lot already and resembles soft IP in the digital world. You may have the design, but you will have to redo the layout, and it is likely that several other things will change between applications as well. These could include:
- different clock frequency
- different voltage
- new technology node that will affect the sizing of everything and thus all of the parasitics
- changes in chip layout that may affect signal integrity.
But there could also be changes in the requirements for the device that may require small changes in the design. As soon as this happens, all of the verification that had been performed goes out the window and you really are back to square one.
Many foundries supply a library of analog blocks, and there is a clear reason why they would like you to use those — it makes it more difficult for you to go to a different fab. Using these could also complicate the ability to get a second source if that is important to you. But there are third-party analog IP suppliers. Some of these are small companies that may give pause for thought, but large established IP companies also offer analog blocks. There are also companies that will customize blocks for your specific needs.
Given the many sources, what are the biggest issues or obstacles that you see preventing more widespread adoption of analog IP?
Related posts: | https://www.planetanalog.com/where-art-thou-analog-ip/ |
Welcome back to a new episode of how to become ohsome. Yes, you’ve read the heading correctly. We are really talking about a snake in a notebook on another planet. If you are familiar with one of the most used programming languages in the GIS world, you might already know by now which snake is meant here. We will show you in a Jupyter Notebook how you can use Python to make ohsome queries and visualizations in one go. And we will do that through using our global ohsome API instance. In case you’ve just read the combination of “global” and “ohsome” for the first time, better get up-to-date and read this blog post.
As already mentioned, Python is a widely used programming language, especially in the GIS world, to perform spatial analysis and create visualizations like diagrams. Combining Python code, explanations and visualizations in one go, a Jupyter Notebook is a useful tool to achieve just that. It is already in use within other projects in HeiGIT (e.g. avoid obstacles with ORS). So we thought it was time to make Jupyter Notebooks ohsome.
To give you a little teaser of what is in that notebook, the following shows a visualization plus a piece of Python code that is used to create it. The diagram displays the count of OSM elements having the OSM tag building for different points in time for the three cities Heidelberg, Mannheim and Ludwigshafen.
And here is a part of the Python code that is used in the notebook to create the visualization above:
data = [trace1, trace2, trace3]
layout = go.Layout(
    title = 'Number of OSM buildings in Heidelberg, Mannheim and Ludwigshafen',
    barmode = 'group',
    legend = dict(orientation = "h")
)
fig = go.Figure(data = data, layout = layout)
py.iplot(fig, filename = 'groupBy')
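For context, the counts plotted above come from the ohsome API, and a request along the following lines could fetch them before plotting. This is a minimal sketch rather than the notebook's actual code: the endpoint, parameter names and the rough Heidelberg bounding box are assumptions based on the public API documentation and may differ between API versions.

import requests

# Assumed endpoint of the global ohsome API instance for counting OSM elements.
OHSOME_COUNT_URL = "https://api.ohsome.org/v1/elements/count"

params = {
    "bboxes": "8.625,49.38,8.78,49.46",   # rough bounding box around Heidelberg (assumed)
    "filter": "building=*",                # OSM elements tagged as buildings
    "time": "2010-01-01/2020-01-01/P1Y",   # yearly snapshots
}

response = requests.post(OHSOME_COUNT_URL, data=params)
response.raise_for_status()

for entry in response.json().get("result", []):
    print(entry["timestamp"], entry["value"])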
The complete Jupyter Notebook with all the code and explanations can be found here. As always, if you want to give us feedback or have any questions, [email protected] is the best way to get in touch with us. Further Jupyter Notebooks with more examples will follow soon. Stay ohsome! | https://heigit.org/de/how-to-become-ohsome-part-4-handling-a-snake-in-a-notebook-on-another-planet-2/ |
An increased emphasis on the happiness of our population has established an ongoing conversation surrounding happiness ideology and positive psychology which has, in turn, fostered unrealistic ideals and expectations regarding our outlooks and demeanors within public spaces. Happiness is increasingly being used as an indicator of economic efficiency, and corporations are recognizing the importance of creating a positive work environment. Though this revelation is great in theory, the constant conversation surrounding happiness in corporate, social, and personal settings has established a certain expectation regarding the mood and demeanor of our population. We are constantly presented with situations in which displaying negative sentiments or emotions, especially within social spaces, is hugely frowned upon as our mood has an adverse effect on those surrounding us. This creates an overwhelming, national sentiment that it is not ok to not be ok, and that expressing feelings of anxiety, stress, or negativity is not socially acceptable. Not only do these understandings result in an inauthentic representation of emotions and mental health across our population, but it also creates space for capitalist interest as companies take advantage of what can now be understood as the Happiness Industry.
Ultimately, the rise of technology and social media has created increasingly curated social media content and, in turn, has impacted our interpretations of happiness. Our online representations have shifted with the Web 2.0 and social media boom as we have all placed an increased emphasis on how we represent ourselves on the internet. This shift from casual online representations towards more curated and deliberate content has created falsified perceptions of reality as the nature of social media use has become much more performative. The comparative nature of social media platforms has created space for constant competition and sentiments of inferiority. Within this, we have become witness to an overwhelming shift in our personal perceptions of happiness as social ideals of happiness have come to be increasingly defined by content presented across social media platforms. This paper goes over the positive psychology movement and the ways in which incessant positive thinking can be detrimental to a population, while also creating space for the development of an industry surrounding happiness. Finally, I make assertions regarding contemporary social media use, articulating the competitive nature of social media and, within that, social media's impact on perceptions of happiness.
Recommended Citation
Abel, Sarah, "Perpetuating Happiness in Social Spaces" (2021). CMC Senior Theses. 2719.
https://scholarship.claremont.edu/cmc_theses/2719
This thesis is restricted to the Claremont Colleges current faculty, students, and staff. | https://scholarship.claremont.edu/cmc_theses/2719/ |
If you do anything professionally related to online technology, you understand the immense amount of data you need to sort through each day. There are the daily content roundups, blogs to read, Facebook posts to check, tweets to scroll through and news sites. That doesn’t include whatever else arrives in your in-box. I literally cannot keep up with all that I want to know about social media technology and its use for engagement, fundraising and advocacy.
It’s really too much to know. That’s when I began trusting the curators.
Trusting the curators was a strategy I employed to begin to figure out what to read, what I needed to read, and what others whom I trusted thought was important to read. We cannot read it all. We cannot begin to imagine trying to read it all. We must trust the curators.
Trusting others to curate content has become my primary means for gathering relevant information about social media and particularly nonprofit technology.
Finding good curators
I think of a good curator as someone who is knowledgeable about the sector and who provides consistently trustworthy content. Mai Overton has a good addition: that a good curator is “someone who consistently provides valuable insight.” I often find curators through their blogs or recommendations from others, and then begin to follow them on Twitter or Google+ to find what they are curating.
There are many strategies for finding new, quality, relevant content. Several social media platforms will allow you to sort through volumes of information, and isolate it by topic, idea or curator. My top three preferred platforms for sourcing and sorting through qualified curators and their content are Twitter, Scoop.it and Google Plus.
Twitter as curation tool
At a 140 Conference in Tel Aviv in 2010, a panelist was asked by another panelist to list the names of blogs she reads. She replied that her Twitter stream is now her blog reader, and she’s not embarrassed to say so in public. Jessica Kirkwood, an ultra-connected colleague of mine, shared with me recently that she “uses Twitter lists to curate and follow people who are tweeting out relevant information for her to read.” She no longer uses an RSS feed reader at all.
While Twitter is a constant stream of information, much of it includes data and links to articles with data. The key to using it as a curatorial platform is to carefully create lists. I use Twitter lists and TweetDeck columns to focus on the people who are tweeting out relevant information about nonprofit technology, community management, and fundraising. I prefer to limit my lists to fewer than 100 people per list.
Scoop.it as a way to follow and organize topics
I love the curation platform Scoop.it. Scoop.it is best described as a board for curated topic-specific content. I curate a Scoop.it board on Facebook research and best practices, for example, and “scoop” articles from around the Web that are relevant to my curated topic. I follow 38 other topics, on everything from LinkedIn Tips to Nonprofit Digital Engagement to Just Story It, a board about storytelling. There are also a few boards about content curation, such as this one. Every day, Scoop.it emails me a summary of some of the new articles uploaded to boards that I follow. If you have only 30 minutes each day to read the latest news in your industry, start with Scoop.it; it serves up the newest information in a very readable format.
Google+: Viewing streams on circle at a time
I love that I can curate who I follow through Google+ circles. I curate my circles by type of expertise, to fine-tune the content and knowledge information. Some of my circles are nonprofit technology, social media (not nonprofit), fundraising, data geeks and gadget geeks. I’ll often view my Google+ stream through the lens of one circle at a time in order to find content trending topics and look at what my curators are thinking about. A benefit of Google+ is the ability to engage in robust discussion about an article or idea.
Delicious as a way to organize and archive
I use the social bookmarking platform Delicious to bookmark anything on the Web that I want to remember and go back to. You can follow users or “stacks” (content-specific bookmarks) or search for information by tags. For example, this is a stack that Avi Kaplan created for anyone wanting information or examples of online organizing.
Pinterest: The ideal tools for visual curation
The newest shiny social media platform, Pinterest, has become a darling of the social media world. If the content you want is visual, this is an ideal platform for you. Howard Lake created this Pinboard called Charities’ Facebook Page Covers, for example.
There are so many other curation platforms that I haven’t named. What’s important is to find what works for you, and why. What’s your curation strategy?
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported. | https://www.socialbrite.org/2012/04/19/curation-tools-to-help-you-cope-with-info-overload/ |
EU Regulation 2019/1150 (the P2B Regulation) was transposed into Irish legislation in July 2020 through S.I. No. 256 of 2020 European Union (Promoting fairness and transparency for business users of online intermediation services) Regulations 2020. It is the first set of rules aimed at creating a fair, transparent and predictable business environment for smaller businesses and traders on online platforms.
It is important that the relevant online platforms which fall within the remit of the Regulation are aware of their obligations and take steps to ensure that they are compliant. The CCPC is the designated body with responsibility for monitoring compliance with, and enforcement of, the P2B Regulation.
Who does the Regulation apply to?
The Regulation applies to two types of online providers known as online intermediation service providers (OISPs) and online search engine providers (OSEPs). OISPs use information and communication technologies to facilitate interactions (including commercial transactions) between business users and consumers. OSEPs play an important role in the commercial success of all businesses that operate websites.
Examples of OISPs and OSEPs include online marketplaces, social media and creative content outlets, application distribution platforms, price comparison websites, platforms for the collaborative economy as well as online general search engines.
Why was the Regulation introduced?
Platforms can offer easier access to cross-border markets and are crucial for the success of some businesses. The European Commission found that while the gateway position of online platforms enables them to organise millions of users, it also opens the possibility of unilateral trading practices that are harmful, and against which no effective redress is available for the businesses using these platforms.
The P2B Regulation was introduced to create a fair and transparent business environment for smaller businesses and traders with online platforms. By creating a more transparent and fairer marketplace, EU consumers will also ultimately be more protected.
What are the obligations under the Regulation?
Online providers and business users need to be aware of a variety of requirements around terms and conditions, suspension or termination of services to business users, ranking of search results, and the setting up of internal complaint-handling systems by online providers.
Under the P2B Regulation, both OISPs and OSEPs have specific obligations that they must meet. For more information regarding these obligations, please visit the business resources section of our website.
What is the CCPC’s role?
The CCPC is the body responsible for monitoring compliance with the P2B Regulation, and has the right to take action before the courts in order to stop any non-compliance by an OISP or OSEP.
Under the Competition and Consumer Protection Act 2014, the CCPC can carry out an investigation into any suspected breach of the P2B Regulation. Should an online platform be found to have breached these provisions, the CCPC can issue Compliance Notices or begin criminal proceedings. | https://www.ccpc.ie/business/platform-to-business-regulation-p2b-what-online-platforms-need-to-know/ |
The proliferation of digital channels opens up new opportunities for businesses that want to deepen relationships with existing clients and attract new customers. Social media platforms and the widespread use of mobile devices provide an affordable way to push out a steady stream of content to always-connected consumers and generate new business. However, many companies that are making the change from print-centric communication to digital content are finding that the transition isn't simple. The processes that are in place for creating, approving, publishing, distributing and tracking print-centric content don't scale when faced with omni-channel digital content.
Jun 29, 2010
Ex Libris Group announced the general release of the Primo Central mega-aggregate index of scholarly materials. Primo Central enables Primo users to search for global and regional materials as well as materials in their local library collections. More than 280 institutions in 30 countries that already use Primo can now avail themselves of the services of Primo Central. MetaLib customers will be able to configure Primo Central as a MetaLib search target. | http://www.econtentmag.com/Articles/News/News-Item/Ex-Libris-Releases-Primo-Central-68048.htm |
By Anthony Hughes
Returning to the UK after five years with Newgate’s Abu Dhabi operation in the UAE has been a big adjustment in many ways, not just marvelling at the rain. One of the most interesting things I have noticed is how many misconceptions exist about the region.
Many people have been to Dubai and marvelled at the architecture, the engineering feats that have built world-class attractions, and the opulence and wealth. Some know that Abu Dhabi produces oil and has global cultural icons such as the Louvre, the Sheikh Zayed Mosque and even the impressive F1 circuit. What many people do not know is that the Gulf states’ local populations are predominantly young, well-educated and tech-savvy. Around 60 percent of Gulf populations are under 25, outward-looking and as comfortable using English as they are Arabic. As with many other societies, the smartphone is ubiquitous. Smartphone penetration in the UAE, for example, is at 200 percent, so almost everyone has at least one phone, most likely two, and sometimes more. This has had a resounding impact on the region, which has seen, among other things, a massive shift from ‘old-fashioned’ traditional media to digital and social media platforms in a matter of years.
As you might expect, video content (viewed on mobile devices) is the most important content type, particularly for engaging the younger generation, who spend an average of 72 minutes per day watching video content according to the Dubai Press Club. YouTube is viewed daily by half of young Arabs (50 percent) according to the Arab Youth Survey, and KSA has some of the highest rates of YouTube use globally. The fastest growing video segment is “short-form (few minutes long), amateur digital content — curated by Arab youth and distributed on platforms like Whatsapp, Facebook, Snapchat and Instagram”. However, this is also changing: in a recent survey nearly seven in ten national internet users said they had changed how they use social media due to privacy concerns. The desire for greater privacy, combined with the dominance of mobile and social video, needs to be taken into account in any effective social media strategy in the Middle East.
Sources: | https://www.secnewgate.co.uk/blog/what-is-digital-comms-really-looking-like-in-uae/ |
- Outrage over the allegations of a leaked objectionable video of women students of Chandigarh University.
Steps taken by government agencies to tackle objectionable media content
- The first step for the investigation agency is to identify the social media intermediary through which the objectionable content (picture, video, voice message, etc.) is being spread.
- Generally, investigation agencies depend on the information of the first arrested accused, who discloses the first method through which the content was shared.
- In complex cases in which the content is being shared on multiple social media platforms, the investigating agencies communicate with all social media intermediaries including Facebook, WhatsApp, Twitter etc.
- Once the social media intermediaries are identified, the investigation agency communicates with the regulating authorities/headquarters of these intermediaries. There are two methods for communication.
- Routine matters in which there is no urgency are pursued through Emergency disclosure methods in which the agency seeks the phone number and IP address of a device which was used to create/record the objectionable/vulnerable content.
- Matters related to national security, threats to human lives and child abuse are followed through Emergency response methods; in this case, the regulating authorities of social media take prompt decisions over applications sent by the investigation agency.
- The Union Government has recently notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to deal with the objectionable content on social media.
About Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, was notified by the Central government on February 25, 2021, relates to the digital news publishers, including websites, portals and YouTube news channels, and Over The Top (OTT) platforms, which stream online contents such as web series and films.
- It is jointly administered by the Ministry of Electronics and IT, and the Ministry of Information and Broadcasting.
- The Rules provide for a code of ethics to be followed by digital news publishers and OTT platforms; A three-tier grievance redress mechanism, which includes:
- Self-regulation by publishers at the first level
- Self-regulation by Self-regulating bodies of the publishers
- An oversight mechanism by the Central government
Key Features of the Rules
- Social media intermediaries, with registered users in India above a notified threshold, have been classified as significant social media intermediaries.
- They are required to appoint certain personnel for compliance, enable identification of the first originator of the information on their platform, and identify certain types of content.
- They need to appoint a Nodal Contact Person for 24x7 coordination with law enforcement agencies. Such a person shall be a resident of India.
- Appoint a Resident Grievance Officer who shall perform the functions mentioned under the Grievance Redressal Mechanism. Such a person shall be a resident of India.
- Publish a monthly compliance report mentioning the details of complaints received and action taken on the complaints.
- The Rules prescribe a framework for the regulation of content by online publishers of news and current affairs content and audio-visual content.
- A 3-tier Grievance Redressal Mechanism: Social media intermediaries shall appoint a Grievance Officer to deal with complaints and share the name and contact details of such officers.
- The Grievance Officer shall acknowledge the complaint within twenty-four hours and resolve it within 15 days of its receipt.
- Ensuring Online Safety and Dignity of Users, Especially Women Users: Intermediaries shall remove or disable access within 24 hours of receipt of complaints about content that exposes the privacy of individuals.
- Such a complaint can be filed either by the individual or by any other person on his/her behalf.
- Voluntary User Verification Mechanism: Users who wish to verify their accounts voluntarily shall be provided with an appropriate mechanism to verify their accounts and provided with a demonstrable and visible mark of verification.
- Giving Users An Opportunity to Be Heard: Users must be provided with an adequate and reasonable opportunity to dispute the action taken by the intermediary.
- Removal of Unlawful Information: An intermediary upon receiving actual knowledge should not host or publish any information which is prohibited under any law in relation to the interest of the sovereignty and integrity of India, public order, friendly relations with foreign countries etc.
- This Code of Ethics prescribes the guidelines to be followed by OTT platforms and online news and digital media entities.
- Self-Classification of Content: The OTT platforms would be required to self-classify the content into five age-based categories: U (Universal), U/A 7+, U/A 13+, U/A 16+, and A (Adult). | https://www.iasgyan.in/daily-current-affairs/objectionable-content-on-the-social-media
The Metaverse has attracted huge interest in the internet industry since it first appeared. According to the definition of the Metaverse given by Mystakidis (2022), it is an immersive environment for multiple users that combines physical reality and digital virtuality. This virtual space is built by rearranging characteristics drawn from different media, including blogs, social networks, and interactive digital entertainment software; the metaverse can therefore be identified as part of Web 2.0 applications (Cagnina & Poian, 2008, p. 379). However, the characteristics of Web 2.0 lead to a multi-layered and complex governance system, with implications for how the metaverse might be governed.
2. How is Web 2.0 constituted and governed?
Before exploring how the metaverse might be governed, it is vital to know what drives the immersive internet and how it is currently governed within the relationship between the Internet and power.
Web 2.0 is the second stage in the development of the Web, which uses the Internet more interactively and collaboratively to create a space for users to interact socially and contribute their collective intelligence (Murugesan, 2007). As a technological paradigm, Web 2.0 shows the public how to integrate technology, business strategies, and social trends into a mode of operation in which the need for data remains strong (Murugesan, 2007). For example, software can be improved after release based on data collected from the user community. Data is already the most valuable resource in the world, far more valuable than oil (Rijmenam, 2022). However, in the Web 2.0 era, the centralization of data, and of the power to collect it, has allowed large technology companies to take control of our lives (Rijmenam, 2022) and has accelerated the trend towards the platformization of the internet. Moreover, the unchecked growth of power that accompanies the internet economic boom has amplified the voice of tech companies and their demand for data. As the dominant players in the Web 2.0 economic model, large technology companies have been challenging regulators. This was confirmed when Meta, the parent company of Facebook, warned that it might shut down Instagram and Facebook services in Europe following new data-transfer legislation there (Talks, 2022).
The platformization and monopolization of Web 2.0 have prompted large technology companies to expand their power in order to meet their enormous demand for data, and they are often accused of intruding on personal information for commercial purposes, for example by illegally harvesting or abusing users' private data for algorithmic recommendations. Balancing the relationship between the commercial sphere and the personal space in Web 2.0 therefore requires governance and scrutiny by external forces.
The decentralized nature of content distribution and the bottom-up nature of content creation in Web 2.0 may give rise to the misconception that the Internet cannot be regulated. However, a broader exploration of the regulatory system shows that the Internet is regulated and managed on multiple levels (Flew et al., 2019), including through the regulation of digital platforms and content moderation.
- regulating the digital platforms
- In order to break the monopoly that digital platforms run by large technology companies, including Facebook, Google, and YouTube, hold over the internet distribution market, governments often reduce platform dominance by creating policies and regulations that increase competition, such as anti-trust actions seeking to make Facebook and Instagram independent companies (Heilweil, 2020), or by strengthening the social obligations of the major platforms (Flew et al., 2019).
- content moderation
- The government restricts direct user access to content at a national level, regulating internet content by building access barriers
and introducing regulations. In China, direct government restrictions on content access constitute a unique Chinese internet system, the most far-reaching of which is the Golden Shield Project. This project restricts Chinese internet users’ access to digital platforms in most capitalist countries in order to keep people away from Western ideologies, with the aim of maintaining a stable political order (Pingp, 2011).
- Digital platforms are also socially responsible for the content they publish and distribute from their websites, and are required to review internet content to avoid users accessing harmful online information such as fake news, violence, extremist content, and cyberbullying. The US government has given technology companies broad exemptions by law to directly determine the choice of content on their platforms (Flew et al., 2019).
3. The Metaverse is special in the Web 2.0 ecosystem
The Metaverse, as an immersive environment for multiple users, mainly focuses on constructing extended reality (XR), which includes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) (Mystakidis, 2022). In a metaverse where users are bound only by their creativity and the resources at their disposal, rather than by the constraints of the real world, they can create digital real estate, collectibles, and entertainment (Rijmenam, 2022). If the Metaverse follows Web 2.0, it will be distinct from previous Web 2.0 concepts and platforms. According to Rijmenam (2022), the Metaverse has two main characteristics.
- Interoperability
- Interoperability is the ability for users to bring the value they create within one platform to another, including physical and digital assets (Rijmenam, 2022). The greater the interoperability, the greater the contribution of the metaverse to society.
- Decentralization
- Despite the vision of creating a fully decentralized and deconcentrated web space, Web 2.0 has suffered from a monopoly of power because large technology companies dominate the landscape. The metaverse can fix the flaws of Web 2.0 and create a space that is not controlled by anyone and is owned by everyone (Rijmenam, 2022).
- In a decentralized metaverse, the use of cryptography makes data immutable, verifiable, and traceable, without the need for media intermediaries to manage the true origin of information (Rijmenam, 2022); a minimal illustration of this idea follows this list.
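To make the "immutable, verifiable, and traceable" claim above concrete, the sketch below hash-links a sequence of records so that tampering with any entry is detectable. It is an illustration only: the record contents and field names are hypothetical, and it does not describe Rijmenam's proposal or any particular metaverse platform.

```python
# Minimal hash-chain illustration (hypothetical data, illustration only).
import hashlib
import json

def build_chain(records):
    """Link each record to the hash of the previous one."""
    linked, prev_hash = [], "0" * 64
    for payload in records:
        entry = {"payload": payload, "prev_hash": prev_hash}
        prev_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = prev_hash
        linked.append(entry)
    return linked

def verify_chain(linked):
    """Recompute every hash; editing an earlier entry breaks all later links."""
    prev_hash = "0" * 64
    for entry in linked:
        expected = hashlib.sha256(json.dumps(
            {"payload": entry["payload"], "prev_hash": prev_hash},
            sort_keys=True).encode()).hexdigest()
        if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

ledger = build_chain(["asset minted", "asset transferred to user B"])
print(verify_chain(ledger))   # True
ledger[0]["payload"] = "asset transferred to attacker"
print(verify_chain(ledger))   # False: the tampering is detectable
```

Real decentralized systems add consensus and replication on top of this basic idea; the sketch only shows why hashing makes tampering visible.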
4. Adaptive changes to the governing of the Metaverse
However, a metaverse that operates within the Web 2.0 ecosystem will inherit similar problems.
- Governments need to address the risks of trading in the metaverse and establish a secure regulatory system for data collection in the metaverse by developing policies and regulations, rather than focusing only on restricting user access to content and establishing an anti-monopoly system.
- Due to the highly interoperable nature of the metaverse design, users’ assets can be transferred between different platforms. However, the lack of a monitoring system may lead to the leakage of transaction data and increase the risk of fraud, and virtual transactions represent a high level of insecurity. Therefore, privacy and data protection will be even more important than the current system.
- Tech companies need to agree to protect user privacy and to move away from advertising business models built on pervasive data collection.
- Surveillance-based behavioral targeting advertising extends from the web into virtual space, where users cannot distinguish how advertisements monitor each step of their interaction experience (Li & Forum, 2022). Introducing an advertising business model into the metaverse would cause the metaverse to lose its data security and its decentralized business landscape. The databases created by advertising campaigns could lead to a re-centralization of power, as metaverse platforms collect information about users and thereby attempt a data monopoly.
5. Conclusion
The metaverse has become a new trend in the development of social media, with its unique features of interoperability and decentralization. However, it also faces many of the same problems as Web 2.0. This article has suggested government regulation of data systems, and changes to tech companies' traditional advertising strategies, as possible approaches to governing it.
Reference list
Cagnina, M. R., & Poian, M. (2008). Second Life: A Turning Point for Web 2.0 and E-Business? In Interdisciplinary Aspects of Information Systems Studies. (pp. 377–383). Physica-Verlag HD. https://doi-org.ezproxy.library.sydney.edu.au/10.1007/978-3-7908-2010-2_46
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Heilweil, R. (2020, December 9). Why the FTC and states’ Facebook antitrust lawsuits say owning Instagram and WhatsApp make it a monopoly. Vox. https://www.vox.com/recode/22166437/facebook-instagram-ftc-attorneys-general-antitrust-monopoly-whatsapp
Li, C., & Forum, W. E. (2022, May 25). How to build an economically viable, inclusive and safe metaverse. World Economic Forum. https://www.weforum.org/agenda/2022/05/how-to-build-an-economically-viable-inclusive-and-safe-metaverse
Murugesan, S. (2007). Understanding web 2.0. IT Professional, 9(4), 34–41. https://doi.org/10.1109/mitp.2007.78
Mystakidis, S. (2022). Metaverse. Encyclopedia, 2(1), 486–497. https://doi.org/10.3390/encyclopedia2010031
Pingp. (2011). The Great Firewall of China: Background. Torfox: A Stanford Project. https://cs.stanford.edu/people/eroberts/cs181/projects/2010-11/FreedomOfInformationChina/the-great-firewall-of-china-background/index.html
Rijmenam, M. van. (2022). The Future Is Immersive. In Step into the Metaverse. Wiley. | https://www.arin2610.net.au/2022/10/15/if-the-metaverse-is-to-follow-web-2-0-how-should-it-be-governed-15/ |
The proliferation of digital channels opens up new opportunities for businesses that want to deepen relationships with existing clients and attract new customers. Social media platforms and the widespread use of mobile devices provide an affordable way to push out a steady stream of content to always-connected consumers and generate new business. However, many companies that are making the change from print-centric communication to digital content are finding that the transition isn't simple. The processes that are in place for creating, approving, publishing, distributing and tracking print-centric content don't scale when faced with omni-channel digital content.
Nov 11, 2003
Endeavor Information Systems has announced the selection of the Voyager integrated library management system for the Plough Memorial Library at Christian Brothers University in Memphis, TN. Voyager replaces the current DRA Classic system, a first generation automated library circulation and cataloging system. This is the first Memphis-area independent implementation of the Voyager system. With more than 112,000 volumes, CBU's Plough Memorial Library serves 1,800 users in 25 major areas of study. A participant in the Southeastern Library Network (SOLINET), the Library shares cooperative lending privileges with eight other Memphis-area academic libraries in the metropolitan area. Special collections at the Plough Memorial Library include 125 years of archives of the University, as well as documents of Mother Teresa, research from Bolivian missionary Rev. Joseph John Higgins, materials covering the study of Napoleon Bonaparte, and the personal archives of Memphis-area civic leader Edward F. Barry. | http://www.econtentmag.com/Articles/News/News-Item/Christian-Brothers-University-Chooses-Voyager-5737.htm |
It has been a little while since I sat here and wrote on the blog. There are many reasons why I haven't written, but the real truth is that for quite some weeks a topic would come into my mind, and then whenever I went to write, it truly wasn't me writing; it was as if I was pretending, or trying to put something out there that was curated and perfect, as if everything was hunky-dory. A guru I am not.
It is perhaps a trap that many of us in the wellbeing industry can fall into, in an age of social media where the platforms are designed so that we are constantly consuming content and then just chuck it away like an old newspaper, it is no wonder we can fall into this trap of creating more and more and more, but for what? Does anyone really take it in? Am I contributing to this overconsumption of information that isn't anchored and is just flying in the wind?
These questions stopped me in my tracks, brought me back to why I started all of this in the first place, they have led me back to perhaps the one and only anchor we all have, the truth. It is how I started this journey, it is perhaps how it will end.
I came to this work from a place of truth, of vulnerability, a deep sense of time ticking by, to support others to embrace life in all its glory and all its challenges, for I had seen the alternative: a life lost to missed opportunities, to wanting it all back to try again when of course it was too late. A reverse legacy, if that makes sense. It is what spurred me on to change.
The truth is this is deep work, it has to be anchored in truth and honesty. This inner work of strength, resilience, compassion, authenticity, reparenting, navigating the challenges of life, it goes way beyond the content we read and consume about wellbeing. Those of us that tread this path of truth, to see life in a different perspective, are in a minority for it is not always an easy journey, at times it is lonely. It is a bold and radical move in an age of competition, comparison and exhaustion, but wow is it liberating.
Many of us have been through some challenging stuff, perhaps we still are, but to hold it all with integrity and authenticity, to say this is who I am and I am working on it, it is perhaps the best thing we can offer this world, our truth.
I am no guru, I make mistakes, I fall down, I get back up, sometimes I have nothing to say, I am truly imperfect but it all comes from the heart, the vulnerable truth.
Here we are, spinning on this rock, while 150 million kilometres away a star shines so bright, supporting our life here on Earth. The miracle that we were born into this precise time, when the Earth is positioned just far enough away from the sun, but near enough to support life. The tree outside my window here, with its complex systems that turn light energy into chemical energy and create a waste product called oxygen, is truly amazing. Zooming out like this can be really helpful to gain a new perspective on things, an appreciation of this one-time offer we have.
You are so wonderfully unique, you owe yourself the truth, the truth of what it is YOU need in this life, it is so fleeting, so precious, don't hold back.
With Love
From Roger. | https://www.rogerhuntlifecoaching.co.uk/post/truth |
Social media provide communication networks for their users to easily create and share content. Automated accounts, called bots, abuse these platforms by engaging in suspicious and/or illegal activities. Bots push spam content and participate in sponsored activities to expand their audience. The prevalence of bot accounts in social media can harm the usability of these platforms and decrease the level of trustworthiness in them. The main goal of this dissertation is to show that temporal analysis facilitates detecting bots in social media. I introduce new bot detection techniques which exploit temporal information. Since automated accounts are controlled by computer programs, the existence of patterns in their temporal behavior is highly predictable. On the other hand, patterns emerge in human temporal behavior as well, since humans follow cyclic schedules. Therefore, we need a solution that can differentiate between these two classes by learning the patterns of each. For my Ph.D. dissertation, I focus on the temporal behavior of social media users for the following purposes: 1. to show that high temporal correlation among users is common with automated accounts, 2. to design a system, called DeBot, which detects highly correlated accounts, 3. to improve the time complexity of calculating correlation for real-time applications, and 4. to deploy deep learning techniques on temporal information to classify social media users. | https://digitalrepository.unm.edu/cs_etds/92/
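The abstract above does not include code; the sketch below is only a hypothetical illustration of the core idea that unusually high temporal correlation between accounts can indicate automation. It is not the DeBot implementation, and the data, hourly binning, and threshold are assumptions made for illustration.

```python
# Hypothetical illustration of temporal-correlation-based bot flagging.
import numpy as np
from itertools import combinations

def hourly_activity(post_hours, n_hours=24):
    """Turn a list of posting times (hours since the window start) into hourly counts."""
    counts = np.zeros(n_hours)
    for t in post_hours:
        if 0 <= t < n_hours:
            counts[int(t)] += 1
    return counts

def correlated_pairs(activity_by_user, threshold=0.9):
    """Return account pairs whose activity series correlate above the threshold."""
    flagged = []
    for (u1, s1), (u2, s2) in combinations(activity_by_user.items(), 2):
        if s1.std() == 0 or s2.std() == 0:
            continue  # correlation is undefined for constant series
        r = np.corrcoef(s1, s2)[0, 1]
        if r >= threshold:
            flagged.append((u1, u2, float(r)))
    return flagged

users = {
    "bot_a": hourly_activity([0, 3, 6, 9, 12, 15, 18, 21]),
    "bot_b": hourly_activity([0, 3, 6, 9, 12, 15, 18, 21]),
    "human": hourly_activity([7, 8, 12, 19, 22]),
}
print(correlated_pairs(users))  # only the two bot accounts exceed the threshold
```

A real system would compare activity over much longer windows and handle the quadratic number of account pairs efficiently, which is exactly the time-complexity problem the dissertation mentions.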
Australia rebukes Google for blocking local news content
Sydney, Australia, Jan 15 (efe-epa).- The Australian government has condemned Google for blocking local news content from its search results and ordered the tech giant to pay media outlets for the news instead.
The government is considering the approval of a law to make online platforms such as Google and Facebook pay for news content they obtain from media companies, amid a loss of advertising revenue to tech companies.
Australian Treasurer Josh Frydenberg said on Thursday that digital giants "should focus on paying for original content, not blocking it," in response to Google's move to block Australian news sites from search results for some local users as part of an experiment.
In December, the Australian government finalized a law to make technology companies negotiate payments to local media for the content that they post on their digital platforms.
If the parties cannot reach an agreement, the government will appoint an intermediary to decide the amount to be paid, according to the proposed bill.
Google and Facebook have expressed their opposition to the legislative changes saying the media outlets also benefit from the digital traffic to their own websites.
The measure comes in response to recommendations made by the Australian Competition and Consumer Commission (ACCC) in a report in December 2019 on the impact that digital search engines, social media platforms, and other digital content aggregation platforms have on competition in media and advertising services markets.
In its report, the ACCC had said that digital platforms earned as much as 51 percent of the public spending in the sector in 2017 after doubling their share in the last five years at the cost of local print media, whose share dropped from 33 to 12 percent in the same duration.
Facebook, the most popular social network in Australia, boasts 17 million monthly users, around 68 percent of the total population in the country, while second-place Instagram, a subsidiary of Facebook, has 11 million users.
In 2017, Google collected 90 percent of search traffic generated on computers in Australia and 98 percent on mobile phones. | https://www.laprensalatina.com/australia-rebukes-google-for-blocking-local-news-content/ |
Abstract: The freedom social media platforms give users to post and share daily activities is being misused by threatening users who post suspicious and fake content for personal or organisational advantage. This demands a system that can detect suspicious content and the accounts behind it. In this paper, an ant colony optimisation based system for threatening account detection (ACOTAD) is proposed. The connections among different Twitter users are determined by the pheromone substance secreted by ants on the edges of the path travelled. A higher quality of pheromone indicates a stronger connection between one user and another. The experiments in this work are conducted on the Twitter-based Social Honeypot Database. The evaluated results in terms of precision, recall, f-measure, true positive rate, and false positive rate indicate the superiority of the proposed concept in comparison with existing techniques.
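The abstract does not spell out the algorithm, so the sketch below is only a rough, assumed illustration of the general ant-colony idea it describes: pheromone accumulates on the edges along which suspicious content repeatedly travels, and accounts attached to high-pheromone edges are surfaced for review. The parameter values, update rule, and scoring here are hypothetical, not ACOTAD itself.

```python
# Rough pheromone-accumulation sketch (hypothetical parameters and data).
from collections import defaultdict

EVAPORATION = 0.1   # fraction of pheromone lost each round (assumed value)
DEPOSIT = 1.0       # pheromone added per observed suspicious share (assumed value)

def update_pheromone(pheromone, suspicious_shares):
    """Evaporate existing pheromone, then deposit on edges that carried suspicious content."""
    for edge in list(pheromone):
        pheromone[edge] *= (1.0 - EVAPORATION)
    for sender, receiver in suspicious_shares:
        pheromone[(sender, receiver)] += DEPOSIT
    return pheromone

def account_scores(pheromone):
    """Score each account by the total pheromone on its incident edges."""
    scores = defaultdict(float)
    for (sender, receiver), value in pheromone.items():
        scores[sender] += value
        scores[receiver] += value
    return dict(scores)

pheromone = defaultdict(float)
observed_rounds = [
    [("acct1", "acct2"), ("acct1", "acct3")],  # round 1: acct1 pushes content to two accounts
    [("acct1", "acct2")],                      # round 2: the same edge is used again
]
for shares in observed_rounds:
    update_pheromone(pheromone, shares)
print(account_scores(pheromone))  # acct1 accumulates the highest score
```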
Online publication date: Fri, 15-Nov-2019 | https://www.inderscience.com/offer.php?id=103626
Broceliand SAS is a privately-held French company providing a social Web curation tool and discovery platform enabling users to store, organize, and retrieve Web content that they find interesting.
Company Info
- HQ Location
- France
- Year Founded
- 2008
- Employees
- 13
- Website
- www.pearltrees.com | https://www.cmswire.com/d/broceliand-sas-o001100
JASON'S AND CALVIN'S DO'S AND DON'TS FOR JOB INTERVIEWS - ASK THIS, BUT NOT THIS. BEFORE, DURING AND AFTER THE INTERVIEW. TIPS AND TRAPS.
THE HIRING PROCESS – DO’S AND DON’TS
BEFORE THE INTERVIEW:
DO:
Create a uniform hiring process for all applicants:
Draft interview questions in advance based on the essential duties and requirements of the position. Develop the “answers” and assess applicants based on these objective criteria. Ask all applicants the same questions. These measures guard against informal, subjective assessments entering human-resource decision-making.
Use an application form to screen applicants:
Application forms are simple tools to supplement an application with relevant information. These forms should include a basic job description and a Statement of Qualification for the applicant to affirm their qualifications for that job; this will assist in screening applicants who overstate their qualifications.
Prepare a panel of interviewers, if possible, to assess applicants according to the hiring process:
A panel assessing an applicant’s answers allows for a more diverse and objective perspective. A panel will also provide multiple witnesses to the interview, one of whom should record thorough notes.
Offer to accommodate an applicant, if he or she requires accommodation, before the interview:
Applicants are generally responsible to inform potential employers of their needs and to provide adequate detail for the employer to respond accordingly. Once aware of the need to accommodate, employers should co-operate with the applicant in creating an interview or hiring mechanism that addresses the duty to accommodate arising under both human rights legislation and Ontario’s Accessibility for Ontarians with Disability Act, 2005, S.O. 2005, c. 11, as amended.
Exercise caution when actively recruiting an applicant from a long-term employment position:
Employers should be cautious when engaging in active recruitment of applicants who are employed in a stable, long-term position. Applicants who are induced to terminate their stable, long-term employment for a new opportunity may have a lengthened term of service with their new employer.
DON’T:
Make hiring decisions using informal, ad hoc processes or decision-making:
While an informal conversation with an applicant may be appealing, an uncontrolled, subjective process can lead to subconscious bias and, in some cases, discrimination allegations. Having a plan and a written procedure before an interview will give structure and objectivity to the interview process.
Be unprepared:
An interviewer who is unprepared for an interviewee will tend to focus on a person’s superficial characteristics rather than the interviewee’s merit.
Use social media screening without the consent of the applicant and without considering whether you need such personal information:
An employer must obtain an applicant’s consent to collect their personal information. Personal information on social media is no different. An employer should not attempt to skirt privacy rules by using their personal account to screen an applicant or rely on a third party to conduct the screening.
Rely on the information on social media to the exclusion of traditional sources of personal information:
In general, employers should be wary that the information obtained on social media may be unreliable or inaccurate, and is usually unnecessary.
Ask for reference contacts without intention to contact them:
Asking for references is an indication that those references will be contacted. An employer who makes a hiring decision without making use of information that would have been available through a reference check may become open to legal liability for information they ought to have known.
DURING THE INTERVIEW:
DO:
Ask an applicant about his or her qualifications, relevant experience, training and previous positions:
Human rights and privacy laws do not limit the right of employers to obtain legitimate information about the people they may hire. All interview questions and topics must be designed to elicit job-related information concerning the applicant’s relevant knowledge, skills and ability to perform the key duties of the position.
Describe the job requirements, such as overtime, weekend work or travel:
Framing questions in terms of job requirements is an effective way of removing discriminatory elements in questions.
Ask the applicant to affirm their qualifications:
An applicant should be asked to review the Statement of Qualification included in the application form and to sign that statement if they have not done so already.
Take notes, take notes, take notes:
Taking and retaining notes and other written records of the interview will provide contemporaneous evidence in any potential discrimination claim before a human rights tribunal or the courts. While taking notes cannot immunize employers against claims, such evidence can be a powerful tool for defending against a claim once one is started.
DON’T:
Ask questions that provide information regarding a prohibited ground of discrimination:
The following is a non-exhaustive list of general topics to avoid in an interview:
- Race, colour, ancestry or place of origin:
If you need information about an applicant’s immigration status, simply ask whether the applicant is legally entitled to work in Canada. Avoid asking other questions related to a person’s educational institution, last name or any clubs or affiliations that are designed to indicate their race, ancestry or place of origin.
- Citizenship:
Employers may not ask about a person’s citizenship unless Canadian citizenship or permanent residency is a legitimate job requirement. In all other cases, employers should restrict their inquiry to whether the applicant is legally entitled to work in Canada.
- Religious beliefs or customs:
Employers may not ask about a person’s religious beliefs or customs. If you need information about when an applicant can work, ask whether he or she can work overtime or weekends if that is a legitimate job requirement.
- Gender identity and sexual orientation:
There is rarely (if ever) a reason you need to know an applicant’s sexual orientation. Questions about a person’s personal relationships should be completely avoided in almost all cases. Gender identity-related questions should never be asked.
- Marital or family status:
Instead of asking about a person’s family or marital status, simply ask if the applicant can work the hours required of the position or if they are able to travel or relocate.
- Physical or mental disability:
Avoid asking about an applicant’s general state of physical or mental health or any history of sick leaves, absences and workers’ compensation claims. Employers may, however, ask the applicant whether they are able to perform the essential duties of the position and describe the physical and mental requirements of the position.
- Gender:
Avoid questions about gender, including questions about pregnancy, breastfeeding, childcare arrangements and plans to have children.
- Age:
While employers may ask an applicant for their birthdate upon hiring, the age of the applicant is rarely relevant unless there is a question as to whether the applicant has reached the legal working age, which varies from province to province.
- Criminal or summary convictions:
In general, employers may ask the applicant about their criminal record where there is a legitimate reason to know, such as when the job involves a position of trust or working with vulnerable persons. If this is need-to-know information, require a police and judicial matters check as a condition to hiring the interviewee.
- Former names:
Avoid asking a person about their former names unless needed to verify previous employment and education records. Avoid asking about names to determine someone’s origin, maiden name or whether the person is related to another person.
- Language:
What languages an applicant speaks may cross the line if they are really disguising questions about race, place of origin or ancestry. The exception is, obviously, where the ability to communicate in certain languages is specifically required for the position.
- Source of income:
It is recommended that employers avoid asking about an applicant’s source of income, as this is irrelevant, and some sources have a social stigma attached to them, such as social assistance, disability pension and child maintenance.
- Genetic characteristics:
Employers should avoid asking an applicant about the results of a genetic test (23andme, Ancestry, etc.) and should avoid making decisions based on that applicant’s genetic traits, including traits that may cause or increase the risk to develop a disorder or disease.
Ask questions designed to elicit irrelevant information or information unrelated to the legitimate job requirements:
Privacy laws require that employers only collect personal information that a reasonable person would consider appropriate in the circumstances. Again, the employer must only do so with the consent of the applicant. The best practice is to only collect information that is reasonably necessary to make a hiring decision.
AFTER THE INTERVIEW:
DO:
Keep the interview notes and documentation for as long as possible:
Employers should keep all materials from the hiring process for as long as necessary to comply with applicable legislation and protect themselves from any possible litigation. At a minimum, it should be two years from the date of the initial interview.
Ask the selected individual(s) for further information:
Once hired, it is permissible to ask a person for further documentation necessary to maintain and establish the employment relationship if there is a legitimate need for that information. When an offer of employment is accepted (or conditional on certain checks being completed with the consent of the individual), it will generally be necessary to collect an employee’s birth date, social insurance number, personal contact information and all other personal information needed to establish the relationship, including information needed to enroll the employee in benefits plans and payroll.
_________________________________
This is a summary only, intended to be for your general information only. We recommend that you contact us, or other qualified employment law counsel, for specific advice that may apply to, or be helpful for, any specific interview you conduct, or employment offer you may wish to make, in future, including with respect to your hiring and recruiting practices generally. | http://wardlegal.ca/31594064144604 |
List of Employer's Work Rights
Every relationship involves conflict and compromise, and this holds true for the employers and their employees. Employees have extensive rights in the United States, but so do employers, and employer rights can have a significant effect on employees. If an employee finds a workplace rule intolerable, he may need to comply with the restriction, negotiate with the employer for a change or find a new job.
What Can Employers Consider in Hiring?
Company owners, or their designated representatives such as HR recruiters or hiring managers, have a right to define job roles, and they can set hiring standards and criteria that will ensure that the best person is selected for a job. This means that an employer has the right to set standards regarding:
- Education level: An employer can decide to set a minimum level of educational achievement as a criterion for hiring, such as a high school diploma or MBA. Employers may also choose not to hire a candidate who is educationally overqualified for a position, such as someone with a Ph.D. applying for a job that only requires a bachelor's degree.
- Professional credentials: Some jobs may require professional licenses, certifications or other credentials.
- Job experience: Employers can prefer candidates that have a specific amount of job experience.
- Personality and character: The interview process allows hiring managers and other company representatives to get to know a candidate and observe his behavior. An employer can consider whether the candidate would be a good fit with the company culture.
Tip
While employers have a lot of leeway when making hiring and advancement decisions, they cannot discriminate against employees or applicants on the basis of protected characteristics such as race, religion, or sex. Federal and state laws, as well as municipal ordinances, can vary when defining characteristics that are protected by anti-discrimination rules. Employers, and employees, should be aware of the law in their area.
Do Employer Rights Include "At-Will" Termination?
With the exception of Montana, all states allow employers to terminate workers "at-will," which means that the employer does not have to cite wrongdoing or prove that the employer has good reason for letting the worker go. While anti-discrimination laws prohibit firing a worker because of protected characteristics, it is usually perfectly legal for an employer to dismiss an employee because the worker "just doesn't fit in."
However, some human resource professionals note that arbitrary terminations can, in some instances, come back to haunt employers. This is because a worker who has been fired without previous warnings, and who has an otherwise clean work record, could claim that her termination was discriminatory and file a complaint or lawsuit. Employers that want to avoid being the target of lawsuits take time to develop a termination process, such as requiring warnings before an employee can be fired, along with a policy of keeping good employee records that document performance and behavior.
Can a Company Force You to Work Overtime?
Yes, employers can impose mandatory overtime on employees. Ideally, mandatory overtime should be limited to emergency situations, but this is not always the case in practice. Employees who have a disability that prevents them from working extended hours may be able to request an accommodation under the Americans with Disabilities Act (ADA) exempting them from having to work overtime.
Are Dress Codes Legal?
Employers have the right to establish dress and grooming standards for employees. These standards may be necessary for safety reasons, because the employer wishes to maintain a certain level of decorum in the workplace, or to project a specific brand image. Employees do, however, have rights to challenge or request an exemption to dress and grooming standards, under some circumstances:
Facial hair: Many employers have rules about men's facial hair. While it is usually legal for an employer to require that male employees be clean-shaven or to have only a mustache, there may be some cases in which an employee can request an exemption. For example, some religious traditions require adult men, who are or who have been married, to grow a beard. If there is no health or safety reason for the clean-shaven rule, these men may be able to claim a religious exemption to the grooming code.
Other exemptions could be issued on the basis of health concerns or race. For example, many men have skin conditions that makes regular shaving painful and damaging to the skin. These men can request an exemption to grooming codes by providing a doctor's note, documenting a skin condition.
Sex-based standards: It is legal for employers to set different dress and grooming standards for men and women, as long as the standards do not place an undue burden on one sex over the other. This means that an employer can require sex-specific uniforms for male and female employees. California is the one state in which employers must allow women to wear slacks instead of a skirt if that is the employee's preference.
Clothing choices: Employers can establish a dress code that reflects their company's culture and brand image. Those who maintain a relatively open dress code may not be able to arbitrarily ban certain prints, colors or styles if doing so could constitute racial or ethnic discrimination. Because of the real possibility of a policy being construed as discriminatory, it is not unusual for businesses with otherwise casual dress codes to specify one color scheme for everyone.
Can Employers Listen to Employee Conversations or Search Employees' Desks?
While most employers prefer to cultivate a culture of trust and respect in the workplace, there are situations in which an employer may need to monitor an employee's communications or to search an employee's storage space. While both scenarios can be upsetting to an employee, courts usually back an employer's right to monitor and control an employee's use of phones and office furniture.
Monitoring conversations: Employers generally have the right to monitor employee phone conversations, although there are some limitations on this right. An employer can monitor business conversations for quality control purposes, but may be required to notify both parties that the call is being recorded or monitored. Technically, an employer is not supposed to listen in on personal phone calls, but can listen in long enough to determine whether the call is personal or business-related. If a business has a policy against making or taking personal calls at work, the employer does have the right to discipline the employee for not following company rules.
Searching employee desks and file cabinets: In most cases, an employer has the right to search an employee's desk and file drawers. This furniture belongs to the employer and the employer usually has access to it at any time. It is not unusual for an employer to send a company representative to either clean out a terminated employee's desk or to observe the employee clean out his work desk.
Some states or municipalities may have laws that limit employer searches. Union agreements may also provide employees with some privacy protections.
What About Employers and Social Media?
Many workers regularly use social media, and they may have concerns about how an employer might make use of social media posts and content sharing. Here are some common scenarios for employer monitoring of employee social media usage:
Job applicants: A recent study showed that 70 percent of hiring managers review social media profiles and posts when considering job candidates. While there is some controversy over the efficacy of this approach within the human resources community, job applicants should be aware that it is happening, and that they should consider updating or securing their social media profiles accordingly.
Social media use at work: Employers have a right to expect workers to be performing work-related tasks while on the clock. This means that an employer can restrict worker access to social media sites, through firewalls. In many instances, it is also legal for an employer to monitor employee communications through the employer's networks.
Social media posts outside work hours: There have been several cases in which individuals have made offensive social media posts that have attracted the attention of other platform users. In some cases, the original poster has been identified by others, who then proceed to research where the poster lives, works and goes to school. This process, sometimes called doxxing, has resulted in employees losing their jobs.
Because of at-will employment, as well as clauses in some employment contracts, it may be legal for an employer to discharge an employee for outside conduct, including posting social media content. However, some state laws may restrict employers from taking action against employees for off-duty conduct, when the employer can't prove that the employee's actions actually harmed the employer's reputation or business.
Tip
The intersections of employment law, social media, and technology are still quite new, and the courts still grapple with many questions regarding employer and employee rights. Employers and employees should seek legal advice, if they have concerns about social media and communications in an employment context.
Can Employers Compensate Employees Unevenly?
Employers can pay workers who hold the same job differently, as long as the decision to offer one worker lower compensation than another is not based on illegal discrimination. For example, it would be illegal for an employer to pay a woman less than a man for performing the same work, simply because the employer felt that men should be paid more.
However, an employer could opt to pay an employee more than her colleague, for a nondiscriminatory reason. For example, an employer may opt to pay an employee more than her colleague who is doing the same job, because the higher-paid employee gets more work done, makes fewer errors and is better liked by her colleagues. In this case, the employer sees the value of the better-performing worker, and wants to do what he can to retain her.
Can an Employer Take Steps to Protect Company Information?
Employers have a right to protect confidential and proprietary information, including trade secrets, future business plans and employee data. Some employers require that new hires sign a nondisclosure agreement, which stipulates that the employee has an obligation to not reveal sensitive information that employees might learn while on the job. HR professionals often recommend that companies reinforce NDAs through refresher courses and periodic reminders of what employees are obligated to keep confidential, even after an employee is terminated or moves on to another job.
References
- EEOC.gov: Types of Discrimination
- SHRM.org: 'Employment at Will' Isn't a Blank Check to Terminate Employees You Don't Like
- FindLaw.com: At-Will Employee FAQ's
- Lawyers.com: Your Work-Related Appearance: What Are Your Rights?
- AmericanBar.org: Employment Privacy: Is There Anything Left?
- FindLaw.com: Privacy at Work: What Are Your Rights?
- Privacy Rights Clearinghouse:Workplace Privacy and Employee Monitoring
- SHRM.org: Legal Trends: Social Media Use in Hiring: Assessing the Risks
- SHRM.org: When Two Workers Doing the Same Job Earn Different Pay
- FindLaw.com: Protecting Trade Secrets
Writer Bio
Lainie Petersen is a full-time freelance writer living in Chicago. She holds a master’s degree in library and information science from Dominican university and spent many years working in the publishing, media and education industries. Her writing focuses on business, career and personal finance issues. Her work appears on a variety of sites, including MoneyCrashers, Chron, GoBankingRates and 8th & Walton News Now. | https://work.chron.com/list-employers-work-rights-7437.html |
Job Postings
Section 11 of the B.C. Human Rights Code (the “Code”) prohibits an employer from publishing a job posting that expresses a limitation, specification, or preference as to a protected characteristic unless the limitation, specification, or preference is a bona fide occupational requirement.
Protected Characteristics
The protected characteristics under the Code are race, colour, ancestry, place of origin, Indigenous identity, political belief, religion, marital status, family status, physical or mental disability, sex, sexual orientation, gender identity or expression, age, or a criminal or summary conviction that is unrelated to the job.
Bona Fide Occupational Requirement
A bona fide occupational requirement must:
- Have a legitimate job-related purpose;
- Be adopted on a good-faith belief that the standard is necessary to fulfill the legitimate job-related purpose; and
- Be reasonably necessary to achieve the legitimate job-related purpose. To prove that the standard is reasonably necessary, an employer must show that it cannot accommodate the candidate (or others sharing the candidate’s protected characteristic) without suffering undue hardship.
For example, an employer can advertise to hire only women for a position as an intake worker at a shelter for abused women.
Occupational requirements that are not bona fide:
- Relate to incidental duties instead of essential parts of the job.
- Are based on coworker or student preferences and exclude persons because of characteristics protected by the Code.
- Rely on stereotypical assumptions linked to protected characteristics, such as disability, race, or sex, to assess an individual's ability to perform the job duties.
- State that the job must be performed only in a certain way even though reasonable alternatives may exist.
Having a clearly defined job description and an understanding of the essential requirements of the job provides a solid basis for designing standards, providing accommodation, assessing the performance of candidates, and making hiring decisions.
Applications & Interview Questions
The Code does not prohibit employers from asking questions that relate to a protected characteristic. However, the Human Rights Tribunal has found such questions to be discriminatory in some cases.
For example, in one case, the employer asked the candidate (who was interviewing for a waitressing position) about her age, marital status, and whether she had kids. The interview ended shortly after she answered these questions and she was not hired. The Human Rights Tribunal found that, without an explanation for why the questions were relevant, the questions were inappropriate as the answers might be used for discriminatory purposes. It did not matter that the employer did not intend to discriminate.
The Human Rights Tribunal recognized that an employer may have legitimate concerns about a candidate’s availability for shifts. However, such concerns should be addressed through direct questions about availability, rather than questions about protected characteristics. Employers should avoid making assumptions that a parent will be less committed to their work, for example, or that a young woman will go on maternity leave shortly after starting a job.
As a general rule, an employer should only ask what is necessary to make a hiring selection on the basis of skills and merit. However, an employer can ask about protected characteristics in order to assess whether the candidate meets a bona fide occupational requirement.
An employer should aim for a fair process that focuses on each candidate’s ability to perform the essential job duties. Best practices for interviewing include:
- Having a multi-person panel conduct interviews. Ideally, the interview panel should reflect the diversity available in the organization.
- Developing set questions in advance, and asking all candidates the same questions. The questions should be based on the job’s essential duties and bona fide occupational requirements.
- Creating an answer guide, before interviews start, showing the desired answers and a marking scheme.
- Requiring each member of the interview panel to record and score the candidate’s responses against the answer guide.
McGregor v. Morelli and Quarterway Hotel, 2006 BCHRT 277
Selection Criteria
Using objective criteria helps employers avoid making decisions based on subjective considerations such as whether the person exhibits “confidence” or is viewed as “suitable”. Employers who rely on these kinds of subjective assessments are vulnerable to claims of discrimination. Further, hiring decisions based on informal processes are more likely to lead to biased decision-making. For example, conducting an interview by chatting with the candidate to see if they share similar interests and will fit into the organizational culture may present a barrier for persons who are or appear to be different than the dominant norm in the workplace.
Using objective criteria may still result in discrimination if it excludes, restricts, or prefers some persons because of a protected characteristic. For example, a written test for a job that does not require strong writing skills may screen out persons who speak English as a second language.
While the same hiring process should be used for all candidates, employers should keep in mind that some candidates may require accommodation during the hiring process (e.g. for tests).
Hiring Decisions
Section 13 of the Code prohibits an employer from refusing to employ a person because of a protected characteristic, unless the refusal is based on a bona fide occupational requirement. See “Job Postings” above for the list of protected characteristics and the criteria for bona fide occupational requirements.
The decision-making process should be uniform, consistent, transparent, fair, unbiased, comprehensive, and objective. Answers provided in an interview or test should be scored against pre-set criteria that are based on the essential job requirements. Once a hiring decision is made, an organization should be able to document non-discriminatory reasons for hiring or not hiring each candidate.
Bias or stereotypes in the decision-making process may lead to eliminating candidates on the basis of grounds protected under the Code. The following list provides a few examples of hiring decisions that may be tainted by discriminatory considerations:
- Rejecting applicants because they do not match the organization’s “image” or “fit” the organization’s culture. This could disadvantage persons identified by race and race-related grounds, older applicants, persons with disabilities, or other people who are easily identified as not belonging to the dominant group.
- Not hiring someone due to a perceived lack of “career potential”. This requirement tends to adversely affect older applicants, especially when they are applying for entry-level type jobs.
- Refusing an applicant who has “too much experience” or who is “overqualified”. Turning away candidates who are “overqualified” may sometimes have an adverse effect on older candidates, people who are seeking to re-enter the workforce after lengthy absences (such as people with disabilities or who have caregiving responsibilities), and newcomers to Canada.
- Assuming that a person is not suitable without fully assessing their qualifications. Persons with disabilities may be affected by “social handicapping” when they are presumed to be unable to do the job, even though their disabilities are not relevant. This may also affect older candidates, women, and racialized persons.
- Eliminating applicants because their backgrounds contain gaps. This can be a particular problem for women who have re-entered the workforce after childrearing and have had to retrain. This may also be a barrier for persons with disabilities who were out of the workforce for an extended time for medical reasons.
- Viewing an applicant as unsuitable because they needed accommodation in the hiring process. When making hiring decisions, employers should not consider whether a person has requested accommodation during the hiring process.
- Perceiving that an applicant is trouble or will be disruptive because they have objected to discriminatory comments or conduct in the interview. It is retaliation for a qualified applicant to be penalized for reacting to discriminatory comments or conduct related to a Code ground in an interview. For example, an employer asks a candidate whether she is single. She says that is not relevant and asks that the interview focus on her qualifications. As a result, she is viewed as not having “people skills” and is no longer considered for the job.
- Considering discriminatory preferences. If an employer believes that others would object to a person being hired due to their membership in a group protected by the Code, it is not allowed to take this into account. For example, it would be discriminatory for a university to reject a candidate because of their age due to a belief that students prefer younger instructors.
Unless providing accommodation would impose undue hardship on the employer, it would constitute discrimination to reject a candidate because they would need accommodation if hired. Undue hardship is a high threshold to meet and requires a case-by-case analysis.
Special Programs
Section 42 of the Code states that it is not discrimination for an employer to plan, advertise, adopt, and implement an employment equity program (i.e., a program that prefers or limits hiring to persons with a certain protected characteristic) so long as the program: (a) is aimed at improving equity for individuals or groups who are disadvantaged because of race, colour, ancestry, place of origin, physical or mental disability, sex, sexual orientation, or gender identity or expression; and (b) achieves or is reasonably likely to achieve that objective. These programs are referred to as “special programs”.
If a special program has been approved by the employer and/or the Human Rights Commissioner, it would not be discrimination for the employer to publish a job posting that targets candidates with a certain protected characteristic, to ask candidates whether they possess that protected characteristic, or to refuse to employ candidates because they do not possess that protected characteristic.
Questions?
If you have any questions or concerns about discrimination in the hiring process, the Human Rights Office provides confidential and impartial advice, support, referrals, and information to students, faculty, and staff on all issues related to human rights. Contact us or visit our Get Help page for additional resources.
SFU managers and supervisors can also review the hiring resources prepared by Human Resources, which are linked here. | http://www.sfu.ca/humanrights/guides-and-protocols/discrimination-in-the-hiring-process.html |
Employee background checks are an essential part of the hiring process for most companies. However, a poorly implemented or noncompliant background check process is time-consuming and inefficient, slows time-to-hire and delays onboarding, and could even lead to costly fines, lawsuits, or settlements. That’s where background check adjudication can help.
By standardizing the background check review process, background adjudication makes reviewing and assessing candidates’ background check results faster, fairer, more efficient, more consistent, and more compliant.
This article will explain what background check adjudication is and how it works, how a consistent adjudication process can benefit employers, and how to develop background check adjudication guidelines for your company.
Adjudication is the process of evaluating the results of a job candidate’s background check against your company’s employment screening policy to help filter out candidates who may not meet your hiring guidelines.
By identifying candidates with relevant offenses, adjudication allows you to quickly filter out job applicants whose specific criminal histories disqualify them from the job.
For example, suppose a background check for a delivery driver job applicant shows she was recently convicted of driving under the influence. Depending on your company’s policy, that may prohibit her from being hired for the driver position. Adjudication allows you to spot this and filter her out of the list of qualified candidates.
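As a rough sketch of how a rule like this can be expressed, the example below maps job roles to the offense categories a company's policy treats as job-relevant and disqualifying. Everything here (the role names, offense categories, and policy table) is an assumption invented for illustration and is not drawn from any real screening product or legal standard.

```python
# Hypothetical policy table: which offense categories a (fictional) company
# treats as job-relevant and potentially disqualifying for each role.
DISQUALIFYING_BY_ROLE = {
    "delivery_driver": {"dui", "reckless_driving"},
    "accountant": {"embezzlement", "fraud"},
}

def is_disqualifying(role: str, offense_category: str) -> bool:
    """Return True if the offense category is treated as job-relevant for the role."""
    return offense_category in DISQUALIFYING_BY_ROLE.get(role, set())

# A recent DUI flags a delivery-driver applicant under this policy,
# but would not, by itself, flag an applicant for an unrelated role.
print(is_disqualifying("delivery_driver", "dui"))  # True
print(is_disqualifying("accountant", "dui"))       # False
```

Under a policy like this, the same record is evaluated only against the criteria that matter for the specific position, which is the core of adjudication.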
Developing a process for adjudicating background checks has several benefits for employers.
Background check adjudication can streamline hiring by enabling companies to employ consistent standards to identify unsuitable job candidates and focus on qualified applicants. This saves time and money, especially in large organizations that do high-volume hiring.
Some industries are regulated, and these regulations may dictate that certain criminal offenses are disqualifying for certain positions. These industries may include childcare, healthcare, and elderly care, or any organization that serves vulnerable populations. Establishing a background check adjudication process can help these companies comply with industry regulations.
The process of adjudication can also help ensure that employers apply their company hiring guidelines to all applicants equally. Guidance issued by the Equal Employment Opportunity Commission (EEOC) requires that job applicants be screened consistently, with the same standards applied to every applicant for the same position.
By standardizing the way your company reviews background check results, adjudication helps to prevent hiring managers and other decision-makers from making one-off decisions that reflect their own individual biases, rather than company policy, which may reduce the likelihood of a discrimination claim.
Background adjudication is especially important for employers with locations in multiple states, as state background check laws vary in terms of what information can be reported on a background screening report. For example, fair hiring or “ban-the-box” laws in 36 states and over 150 cities and counties nationwide restrict employers from asking candidates about criminal history on a job application.
Ban-the-box and fair hiring laws may forbid employers from considering non-convictions or convictions not directly relevant to the job in question, or from using a conviction alone as a reason not to hire a candidate. Adjudication can help employers comply with these fair hiring rules. For example, employers can set their adjudication rules so that they do not consider convictions that are not relevant to the job.
Finally, background adjudication provides clear documentation of how decisions in the hiring process were made. Having this information on record can help defend your company against legal action.
Employers typically handle the adjudication process for background checks in one of two ways.
In the manual adjudication method, employers compare the results of background checks against their lists of hiring criteria. This often involves using a screening matrix or spreadsheet to sort candidates—disqualifying some, moving others along in the hiring process, and identifying those who need further investigation.
Manual background adjudication puts this process in the hands of employers, who, after all, are the best suited to decide which candidates meet their hiring standards. However, managing the entire process manually is time-consuming, eating into time better spent on higher-value activities. The lengthy process also delays onboarding, which could leave your business short-handed.
Manual adjudication also introduces a greater possibility of human error, compounded by states, counties and cities that may report the same criminal offense differently. Busy hiring managers may even forget to start the adverse action process for disqualified candidates.
Finally, unconscious bias may creep into the hiring process when manual background adjudication is used. At best, this could mean missing out on a potentially outstanding hire. At worst, it could mean a discriminatory hiring lawsuit.
Adjudication in the screening process is an important step, and while manual adjudication might seem like a good way to reduce hiring expenses, when you consider the labor and risks involved, it can actually be very costly.
Some third-party background screening providers offer adjudication as an additional service. A screening provider that automates much of the process can eliminate many of the challenges and weaknesses of manual adjudication.
Of course, you cannot completely automate your adjudication process, nor would you want to. Automated adjudication doesn’t free you from ultimate responsibility for whom you hire and reject.
Human oversight is still necessary to ensure accuracy and to assess the results that the automated process delivers. For example, some jurisdictions require that a specific, individualized assessment of the background check be completed as part of the adverse action process.
However, automated adjudication can eliminate the need for tedious manual review to clear or flag job applicants according to your adjudication guidelines. This gives your HR team more time to spend on the individualized reviews needed to establish relevance and context for an offense.
A good automated adjudication solution lets you customize filters to fit your company’s needs and comply with state and local laws and industry regulations. This means you can focus on the information that matters most to your business.
Automated solutions can use the status and adjudication rules you select to set up automated workflows, streamlining the hiring process. For instance, GoodHire’s automated adjudication solutions will automatically send out pre-adverse action notices and final adverse action notices to keep your company in compliance, where such automation is allowed by law.
Setting filters and automating workflows can also help eliminate hiring bias, reducing the possibility that preconceived notions or human prejudices will affect your hiring process.
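To make the automated pass concrete, here is a minimal sketch of how a clear/flag workflow might be structured, assuming invented statuses, finding fields, a simple recency filter, and print statements standing in for real notifications. It is not GoodHire's actual implementation or API; any flagged result still requires human review and legally compliant adverse action handling.

```python
from dataclasses import dataclass

@dataclass
class BackgroundCheckResult:
    candidate: str
    findings: list  # e.g. [{"category": "dui", "age_years": 1}] -- assumed shape

def adjudicate(result, disqualifying_categories, max_age_years=7):
    """Flag only findings that are both job-relevant and recent under the assumed policy."""
    relevant = [
        f for f in result.findings
        if f["category"] in disqualifying_categories and f["age_years"] <= max_age_years
    ]
    return ("flag", relevant) if relevant else ("clear", [])

def run_workflow(result, disqualifying_categories):
    status, relevant = adjudicate(result, disqualifying_categories)
    if status == "clear":
        print(f"{result.candidate}: cleared; continue the hiring process")
    else:
        # Automation only queues the paperwork; a human still performs the
        # individualized assessment and any legally required adverse action steps.
        print(f"{result.candidate}: flagged for individualized review: {relevant}")
        print(f"{result.candidate}: queue pre-adverse action notice (where allowed by law)")

run_workflow(
    BackgroundCheckResult("Applicant A", [{"category": "dui", "age_years": 1}]),
    disqualifying_categories={"dui", "reckless_driving"},
)
```

The design point is that automation narrows the pile to the results that need human judgment; it does not replace the individualized assessment or the adverse action process itself.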
The specifics of your adjudication process will vary depending on your industry, location, and the positions for which you’re hiring. It’s important to set up this process correctly in order to ensure you are following the relevant laws and industry guidance.
The following best practices will help you design background check adjudication guidelines tailored for your business. Be sure to work with your business’s legal counsel when determining background check policies.
First, develop policies for categorizing different offenses or findings of the background screening. Which findings will eliminate a candidate from consideration for a given role?
If your business is in a highly regulated industry, such as childcare or health care, this decision may be made for you by law. For example, state and/or federal laws may prohibit a home health agency from hiring a caregiver whose background check shows he has abused or neglected patients.
In other situations, you have more flexibility to set your own parameters, but must take into account federal, state, and local laws. List possible findings for all the key areas your background check covers, such as the national sex offender list, national and state criminal records search, and motor vehicle records (MVR) search. Your filters should also consider factors such as the age and severity of the offense.
Set up filters for specific jobs reflecting what background check findings are critical to the job and which can be filtered out.
Determine what adjudication result is needed for candidates to move to the next stage of consideration. For example, you might categorize applicants as meeting your hiring standards, needing further individualized review, or not meeting your standards. Also decide what action you will take for each outcome, such as moving cleared candidates forward, conducting an individualized assessment of flagged results, or beginning the adverse action process for disqualifying results (one possible scheme is sketched below).
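As one hedged illustration of such a scheme, the statuses and follow-up actions below are hypothetical examples of what a company policy might specify, not a prescribed or legally vetted set:

```python
# Hypothetical adjudication statuses mapped to follow-up actions.
ADJUDICATION_OUTCOMES = {
    "meets_standards": "Move the candidate to the next stage of consideration.",
    "needs_review": "Conduct an individualized assessment of relevance, age, and severity.",
    "does_not_meet_standards": "Begin the pre-adverse action / adverse action notice process.",
}

def next_action(status: str) -> str:
    """Look up the follow-up action for a given adjudication status."""
    return ADJUDICATION_OUTCOMES.get(status, "Escalate to HR for manual handling.")

print(next_action("needs_review"))  # Conduct an individualized assessment ...
```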
Be careful when making employment decisions based on offenses that may be more common among certain demographic groups. If you automatically filter out candidates with a specific type of criminal conviction, for instance, you could be disadvantaging people of a certain race, color, national origin, sex, or religion. These are protected classes under Title VII of the Civil Rights Act of 1964, and a blanket ban on certain types of records that disparately impacts candidates in a particular protected class could trigger lawsuits.
Reviewing job candidates’ background screening results for potentially disqualifying offenses is key to building a team you can trust while keeping your business in compliance with state, local, and federal hiring laws. But conducting background check adjudication manually can be labor-intensive, prone to errors, and subject to human bias that can unintentionally result in unfair hiring.
An automated adjudication solution that can be customized to your company’s employment screening policy saves time and ensures you are following your policy. The result: A hiring process that’s faster, easier, and fairer to all applicants while protecting your business from litigation.
Using an accredited consumer reporting agency (CRA) such as GoodHire for your employment screening program can provide you peace of mind, while also improving screening program performance and efficiency. GoodHire’s Advanced Decisioning features include filtering and automated adjudication to help ensure you are applying background screening rules consistently, and keep you in compliance with company policies and best practices.
The resources provided here are for educational purposes only and do not constitute legal advice. We advise you to consult your own counsel if you have legal questions related to your specific practices and compliance with applicable laws.
Hiring and Employment Contracts
Both federal and state laws govern what an employer can do during the process of interviewing and selecting a new employee. In general, employers must avoid illegal discrimination during the process, follow rules related to hiring immigrants, follow child labor laws, refrain from making promises they cannot keep, and respect the privacy rights of the employee.
Anti-Discrimination Laws
Important federal anti-discrimination laws that affect the hiring process include the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act of 1967, the Pregnancy Discrimination Act of 1978, the Immigration Reform and Control Act of 1986, and the Americans with Disabilities Act of 1990.
An employer cannot post a job advertisement that shows a preference for hiring on the basis of race, color, national origin, sex, disability, or genetic information. For example, it would be unlawful for an employer to post an ad stating “No blacks” or “No Middle Eastern candidates.” Similarly, an employer may not use such a preference when making a decision about whom to hire.
Discrimination may be inferred if an employer asks certain questions about a protected characteristic during an interview or in an application. For example, the Americans with Disabilities Act (ADA) prohibits an employer from requesting certain medical information or information about a disability during the hiring process. The focus must stay on whether you can do the job for which you applied, with or without a reasonable accommodation.
Privacy Laws
Prospective employees also have certain privacy rights. For example, the federal Fair Credit Reporting Act (FCRA) regulates the circumstances in which consumer credit reporting agencies may share the credit reports of consumers. Some states prohibit employers from making a hiring decision based on an applicant’s credit. If it is legal not to hire a prospective employee based on a credit report, the employer must inform you of that reason, give you a copy of the report, and notify you of rights under FCRA. Moreover, section 525 of the U.S. Bankruptcy Code prohibits discrimination on the basis of bankruptcy filing status.
Certain states also prohibit employers from making hiring decisions based on arrest or conviction, unless the criminal case substantially relates to the prospective employment. For example, if you were arrested for child abuse and applied to work at a daycare, the employer could reasonably deny you employment in a job that involved direct contact with children.
Each state has additional laws that must be followed in the hiring process. For example, in many states it is unlawful for a former employer to make disparaging untrue remarks to a potential employer when asked for a reference. A former employer who does this may be liable for defamation.
Written Employment Contracts
Written employment contracts are not required. However, many employers use them when hiring for a high-level or professional position. Most written employment contracts will describe the scope and duties of the job in addition to the salary and any other compensation or benefits.
In a written employment contract, there may also be a clause related to the job’s duration, your ability to compete with the employer during the job or upon termination, grounds for termination, a provision about trade secrets or client lists, an employer’s ownership of employee work product, and a method of dispute resolution related to the employment contract.
In general, written employment contracts are written to the benefit of the employer. You may be able to negotiate provisions of the contract if you are a highly skilled candidate. Your leverage may be limited depending on the employer’s evaluation of your unique abilities or market worth.
Certain provisions that are heavily slanted towards an employer may be found unconscionable or in violation of public policy, depending on the state. All employers that use written employment contracts hold a special obligation to deal fairly with you as an employee. This obligation is the “covenant of good faith and fair dealing.” An employer can be held responsible for breaching this duty. | https://www.justia.com/employment/hiring-employment-contracts/ |
Background checks: an invasion of privacy or a legal responsibility?
The news is filled with controversies surrounding falsified resumes and the plethora of damages it creates for companies. And yet, there’s still the question – is a background check an invasion of privacy?
It goes without saying that people seeking employment have basic rights. But the reality is, employers also have the right to know who they are hiring.
At the core of the debate is what’s actually private information. Most of what a background check turns up – be it a criminal conviction, the record of a civil court case, or even a college degree – is technically public record. So, the question really is, how do you balance the employer’s right to know with the candidate’s right to privacy? It’s a delicate balance between the legal and ethical factors that come into play.
3 reasons to conduct background screening:
1. Fraud
Statistics show that more than half of the resumes out there contain falsifications. The most common are dates of employment, job titles, qualifications and skills. Background screening is a reliable way of verifying claims made by job seekers during the hiring process.
2. Reputation and Financial Loss
Loss to a business can come in the form of intelligence theft, data loss, and physical theft, not to mention financial losses from paying recruitment fees, onboarding, and training, only to find out that the new hire is not competent enough to perform the role. The statistics aren’t great – research shows 43% of data breaches come from an insider threat. A brand isn’t built overnight. Some of the most iconic brands are decades in the making. To a rogue employee, however, your customers’ personal data is only a mouse click away.
3. Safety
Employers face legal responsibilities for the safety and welfare of their employees, customers, vendors and visitors. If an employer hires someone who harms another employee, the employer may face claims for negligent hiring if there was reasonable cause to believe that the employee had a history of violence.
Companies also have an ethical responsibility
Notwithstanding, companies have the responsibility to handle candidates’ private matters ethically. To conduct background screening legally and transparently, an employer must ensure the candidate is aware of the checks and the candidate must sign an authorisation form permitting the checks.
That being said, an employer’s rights to conduct a background check are not unlimited. There are laws and regulations that limit when and how employers can use background information in decision-making. Employers cannot use racial or ethnic background, political opinions, genetic history, age, gender, maternity status or sexual orientation to deny a job.
Companies also need to be consistent in how they conduct their background checks and practice restraint as to what information really impacts job performance. If a candidate of a certain race faces a criminal history check or an education verification, then all other candidates should be checked also. Variation from position to position is expected depending on the demands of each role, but there should not be variations between candidates vying for the same job.
How about Social Media?
Many companies are resorting to social media background checks in what is perceived as an opportunity to save time and money. In most cases, these checks are well-intentioned – no brand wants one of their employees sending out offensive posts that could damage their reputation – but intention can be ill-defined.
Some companies are trying to circumvent privacy settings by asking candidates for social media passwords or forcing candidates to log into their social media profiles during an interview. This is not advisable, since you move away from the realm of publicly available information into the murky world of private accounts. There’s also the risk that, with the information uncovered, the hiring manager won’t be able to make an unbiased decision, as there are no guidelines available defining what is deemed acceptable or not, and this could lead to employment discrimination.
Verification is also a concern. How sure can hiring managers be that they’ve found the correct social media profile? There might be many people with a similar name or profile.
There’s also no consistency. Many accounts are inactive, and others have tight privacy settings. Someone who is very active on social media could be at a disadvantage compared with another candidate who doesn’t have a profile.
Relying on social media for background checks is a risky path that gives rise to privacy concerns. Many companies are stumbling into a mess of legal and ethical implications. This can be avoided with more traditional background checks which are less legally or ethically treacherous.
The bottom line
Employers have valid rights to perform a background check when given the consent to do so. Employees are the most important investment a company makes – the wrong choice can cost irreplaceable time, money and safety. But practicing true balance goes a long way towards making the entire process as fair as possible – and that’s the goal we should all aim for. | http://rmi.com.sg/2018/12/10/background-checks-an-invasion-of-privacy-or-a-legal-reponsibility/ |
HR Advisory – Job Applicants and Social Media
If you’re ignoring social media, you’re denying yourself an increasingly common recruitment tactic. While there can be benefits to screening the internet for public information about a job candidate, be aware of the many risks associated with this new-age method.
How to use social media effectively for applicant screening:
- Create a social media screening policy
- Prepare screening questions for the position
- Conduct the screening
- Prepare and provide a report to the hiring manager
- Retain the documentation
How to help minimize risk through social media screenings:
- If you’re screening, screen everyone. Don’t just review one applicant or certain types of applicants.
- Remember a decision should be made based on skill and experience above everything else.
- Conduct in-person interviews first before social media screenings.
- Make sure the screener is removed from the hiring process to ensure unbiased research.
- If you use a third party to conduct screenings, be aware of laws, such as the Fair Credit Reporting Act, that may require you to get authorization, provide notices or make certain disclosures.
- Allow applicants to explain any information you’ve found online that you find damaging.
- Managers should be forbidden from considering protected classes when making an employment decision.
- Managers must keep notes to demonstrate a hiring decision was based on legitimate business reasons.
- Follow the laws relating to retaining records during the application process.
- Ensure your hiring manager (and anyone involved in the recruitment process) is aware of the law prohibiting employers from requesting that applicants disclose personal social media credentials.
If you have any questions, feel free to reach out at any time. | https://www.owendunn.com/resources/another-resource/ |
Dear Chair Dhillon:
We write to request information about the Equal Employment Opportunity Commission (EEOC)'s oversight authority for hiring technologies. As businesses begin to re-open according to guidelines for the novel coronavirus 2019 (COVID-19) pandemic, some companies will seek to hire staff more quickly as many qualified people apply for open positions. Under these conditions, employers are likely to turn to technology to manage and screen large numbers of applicants to support a physically distant hiring process. Under Title VII of the Civil Rights Act of 1964 ("Title VII"), the Commission is responsible for combatting discrimination in the U.S. workforce, including discrimination resulting from hiring and other employment technologies.
Hiring technologies include a range of tools used in the employee selection process to manage and screen candidates after they apply for a job. They include new modes of assessment, such as gamified assessments or video interviews that use machine-learning models to evaluate candidates, as well as other instruments, such as general intelligence or personality tests, coupled within modern applicant tracking systems.
While hiring technologies can sometimes reduce the role of individual hiring managers' biases, they can also reproduce and deepen systemic patterns of discrimination reflected in today's workforce data. Today, Black and Latino workers are experiencing significantly higher unemployment rates than their white counterparts. The unemployment gap between Black and white workers is the highest it's been in five years.
Combatting systemic discrimination takes deliberate and proactive work from vendors, employers, and the Commission. Job applicants alone cannot effectively learn about and challenge discriminatory hiring processes. As the Commission acknowledged in its 2016 systemic program review, "[h]iring or nonselection remains one of the most difficult issues for workers to challenge in a private action, as an applicant is unlikely to know about the effect of hiring tests or assessments, or have the resources to challenge them."
The Commission is responsible for ensuring that hiring technologies do not act as "built-in headwinds for minority groups." Effective oversight of hiring technologies requires proactively investigating and auditing their effects on protected classes, enforcing against discriminatory hiring assessments or processes, and providing guidance for employers on designing and auditing equitable hiring processes.
Today, far too little is known about the design, use, and effects of hiring technologies. Job applicants and employers depend on the Commission to conduct robust research and oversight of the industry and provide appropriate guidance. It is essential that these hiring processes advance equity in hiring, rather than erect artificial and discriminatory barriers to employment. Accordingly, we request information about the Commission's authority and capacity to conduct the necessary research and oversight to ensure equitable hiring throughout the economic recovery and beyond.
Please provide answers to the following questions, including any underlying documentation in support of the responses:
Has the Commission ever used its authority to investigate and/or enforce against discrimination related to the use of hiring technologies? If so, please discuss the nature and results of such investigation or enforcement activity.
Under Section 705(g)(5) of Title VII, the Commission has the authority to "make such technical studies as are appropriate to effectuate the purposes and policies of this subchapter and to make the results of such studies available to the public." Can the Commission use this or any other authority to study and investigate the development and design, use, and impacts of hiring technologies absent an individual charge of discrimination? Please explain why or why not.
Could the Commission, for example, request access to hiring assessment tools, algorithms, and applicant data from employers or hiring assessment vendors and conduct tests to determine whether the assessment tools may produce disparate impacts? Please explain why or why not.
If the Commission were to conduct a study as described in question (1)(a), could the Commission publish or summarize its findings in a public report? Please explain why or why not.
What, if any, additional authority and resources would the Commission need to proactively study and investigate hiring assessment technologies?
The Commission periodically issues guidance and regulations, incorporating input from public meetings, discussion, and comments. The Commission has held several meetings on the implications of data and digital technologies on equal employment opportunity, including a meeting on October 13, 2016, on the "use of big data" in equal employment opportunity.
Has the Commission followed up on these meetings by providing any guidance, releasing publications, or conducting additional research on the use of data and technology in hiring? If so, please explain and provide documentation of such follow-up.
Does the Commission have plans to conduct any additional follow-up or release additional guidance or publications on the use of data and technology in hiring?
Thank you for your time and consideration.
Sincerely, | https://justfacts.votesmart.org/public-statement/1494674/letter-to-the-hon-janet-dhillon-chair-of-the-equal-employment-opportunity-commission-bennet-colleagues-call-on-eeoc-to-clarify-authority-to-investigate-bias-in-ai-driven-hiring-technologies |
There is much, much more to doing a background check than just using Google. Individuals or companies that are conducting such online background checks could actually be violating B.C.’s Personal Information and Protection of Privacy Act, the B.C. Security Services Act, and the B.C. Human Rights Code.
"Recognize that any information collected about individuals is personal information or employee personal information and is subject to privacy laws, whether or not the information is publicly available online or whether it is online but subject to limited access as a result of privacy settings or other restrictions.” OIPC May 2017
Our digital background checks are designed to help employers screen prospective employees, volunteers and candidates during the hiring process, thus reducing potential hiring risks. Our comprehensive online research mitigates hiring risks by collecting available and relevant personal information about a subject via open sources located on the Internet. This information can then be used to validate a candidate’s education and employment history provided in a resume, or even to help validate answers that a candidate provides during an initial interview.
When conducting our Digital Background Checks, we ensure that we comply with the “Guidelines for Social Media Background Checks” published by the Office of the Information and Privacy Commissioner for British Columbia in May 2017.
What we do is more than just a basic Google search: our team uses their skills as Advanced Open Source Intelligence Police Investigators to locate relevant information about hiring candidates that may be “publicly” found in both the light and deep web, something that a plain Google search alone can’t do.
In the last month, employers have been top of mind for the courts, including the Supreme Court. The legislatures have been no different, focused on bringing mandates such as increased minimum wage and the nuances of the FCRA into law.
But which cases do you need to know about? How will the outcomes affect your business? Five decisions, involving FCRA and EEOC legislation as well as employee privacy, are need-to-know for HR professionals and employers.
We’ll keep you updated as these issues progress.
FCRA
Employers can check applicants’ LinkedIn references without violating the FCRA
The legal viability of LinkedIn was under scrutiny in the courts, but it has been decided that employers are free to continue using LinkedIn to check applicants’ references with or without their knowledge. LinkedIn does not qualify as a consumer reporting agency (CRA), and as such it has avoided many of the reporting requirements that status would imply.
Employee Privacy
New Connecticut law prohibits employer access to employees’ personal online accounts
Email, social media, and online retail accounts are now protected by law. In the past, employers were able to ask for applicants’ account information and passwords as a part of the screening process. Now, in Connecticut, employee privacy has been codified, and social media screening that accesses non-public information is no longer legal.
DOL seeks information about employees’ use of smartphones
At this point, no laws or regulations are proposed by the DOL; however, they are investigating how employees use electronics, specifically smartphones with email capabilities, outside of working hours.
EEOC
EEOC, court flip flops reveal challenges to employers facing accommodation requests
The court has recently decided several cases that seem to be contradictory in the matter of employee accommodation. In the past, charts or “cheat sheets” may have helped employers decide how to handle accommodation requests, but the recent court decisions make it necessary to evaluate each case individually. The Americans with Disabilities Act (ADA) is a complicated proposition, and implementing it requires creativity and flexibility from HR professionals and employers.
Wait, I thought we couldn’t ask about religion in hiring? The impact of the Supreme Court’s ruling in EEOC v. Abercrombie & Fitch
In the Supreme Court Decision, Justice Scalia writes, “An employer may not make an applicant’s religious practice, confirmed or otherwise, a factor in employment decisions.”
The decision was made June 1, and the court sided with the EEOC against Abercrombie & Fitch. Much has been written about this case, but the bottom line is that religion and the practices that come with it cannot be a part of the hiring decision. | https://kressinc.com/fcra-eeoc-and-employee-privacy-a-legal-update/ |
What is the scope of current employment law in the U.S.?
Employment law covers all rights and obligations within the employer-employee relationship — between employers and current employees, job applicants, or former employees. Because of the complexity of employment relationships and the wide variety of situations that can arise, employment law involves legal issues as diverse as discrimination, wrongful termination, wages and taxation, and workplace safety. Many of these issues are governed by applicable federal and state law. But, where the employment relationship is based on a valid contract entered into by the employer and the employee, state contract law alone may dictate the rights and duties of the parties.
What is the principle of employee rights in the workplace?
All employees have basic rights in the workplace — including the right to privacy, fair compensation, and freedom from discrimination. A job applicant also has certain rights even prior to being hired as an employee. Those rights include the right to be free from discrimination based on age, gender, race, national origin, or religion during the hiring process.
Important employee rights include:
- Right to privacy (may be limited where e-mail and Internet use is concerned);
- Right to be free from discrimination and harassment of all types;
- Right to a safe workplace free of dangerous conditions, toxic substances, and other potential safety hazards;
- Right to be free from retaliation for filing a claim or complaint against an employer (these are sometimes called “whistleblower” rights);
- Right to fair wages for work performed.
What is the principle of employer responsibility in the workplace?
Employers have an obligation to follow federal and state employment and labor laws — including those pertaining to discrimination, fair pay, employee privacy, and safety in the workplace. The employer’s legal obligations do not only pertain to hired employees, but extend to job applicants as well. For example, a prospective employer cannot ask a job applicant certain family-related questions during the hiring process.
What are the Federal regulations related to employment relationships?
Following is a quick summary of key federal laws related to employment.
Title VII of the Civil Rights Act of 1964
- Applies only to employers with 15 or more employees.
- Prohibits employers from discriminating in the hiring process based on race, color, religion, sex, or national origin.
Americans With Disabilities Act (ADA)
- Defines a disability as a physical or mental impairment that substantially limits one or more major life activities.
- Prohibits discrimination against a person with a qualified disability.
- Provides that if an individual with a disability can perform essential functions with or without reasonable accommodation, that person cannot be discriminated against on the basis of their disability.
Age Discrimination in Employment Act
- Prevents employers from giving preferential treatment to younger workers to the detriment of older workers.
- Only applies to workers 40 years of age and older, and to workplaces with 20 or more employees.
- Does not prevent an employer from favoring older employees over younger employees.
Fair Labor Standards Act
- Regulates the duration of work days and the breaks an employer must provide.
- Governs applicable salary and overtime requirements set out by the federal government.
Family and Medical Leave Act
- Provides that employers must allow employees to take up to a 12-week leave of absence for qualified medical purposes.
- Stipulates that to qualify for the leave, the employee must have worked for the employer for 12 months and for 1,250 hours in the 12 months preceding the leave.
- Preserves qualified employees’ positions for the duration of the leave.
Where can I get legal help with an employment law issue?
Employers have a variety of legal obligations in the workplace, established under both federal and state law. If you and/or your business are faced with a potential legal dispute with an employee, or if you need assistance with any employment law issue, it may be in your best interests to talk to an experienced employment law attorney who will explain your options and protect your legal rights. | https://www.audaxhr.com/faq/employment-law-101/ |
Message from the Editor:
Welcome to another edition of ‘Inside Background Screening’ our new newsletter. Our goal is to bring to you cutting edge news and information about what is happening in the background screening world to help keep you informed and to position you to make the best possible hiring decisions.
We hope you enjoy ‘Inside Background Screening’ and that you will share your interest and thoughts with us.
Lorenzo Pugliano
CEO
[email protected]
EMPLOYMENT SCREENING NEWS
Another Busy Year for Employment Purposed Background Checks: What Happened in 2021?
With businesses remaining the target in Fair Credit Reporting Act (FCRA) class action lawsuits, employers should evaluate the background check process to ensure compliance with the FCRA, similar state fair credit reporting statutes and substantive employment laws. In addition, employers should consider a privileged review of their background screening practices to ensure compliance with ban-the-box and other laws impacting background screening. In Illinois, for example, an amendment to the Illinois Human Rights Act makes it more difficult for employers to reject applicants or terminate employees based on their conviction history, and Louisiana’s “Fair Chance” law prohibits employers from considering an arrest record or a charge that did not result in a conviction if the information was “received in the course of a background check.”
Federal Contractor Obligations Under Fair Chance Act
Federal contractors now must comply with the federal Fair Chance Act (FCA), which prohibits contractors from inquiring about a job applicant’s criminal background in certain cases in the initial stages of the application process. The FCA covers civilian agency contracts and defense contracts. It is not clearly defined how to determine whether a position is to perform work “related to” work under the federal contract, but once a contractor has made that determination, it must next determine whether the position falls into any of the three categories of positions exempted from the FCA.
New BLS Data Show Major Hazards Causing Occupational Fatalities in 2020
There were 4,764 fatal work injuries recorded in the United States in 2020, a 10.7% decrease from 5,333 in 2019. Of course, the number of fatalities is an absolute figure; while the working population continues to grow, the fatality rates continue to decline.
The largest workplace killer, in absolute terms, was “Transportation Incidents,” with 1,778 fatalities in 2020. Intentional workplace violence such as homicides and suicides resulted in 651 fatalities, and exposure to harmful substances or environments led to 672 worker fatalities in 2020, the highest figure since the series began in 2011, according to the BLS. Within this category, unintentional overdose from non-medical use of drugs accounted for 57.7 percent of fatalities (388 deaths), up from 48.8 percent in 2019. While neither workplace violence nor overdosing from drugs and alcohol are regulated by OSHA standards, employers should develop programs to address these very real hazards.
DRUG SCREENING ISSUES
Marijuana Laws Impacting Employers Spread Like a Weed in 2021: A Year in Review
Marijuana and drug testing policies continue to need review into the new year after a year of significant changes across the country. Connecticut legalized recreational marijuana use by adults 21 years and older, prohibiting employers from taking certain actions in the absence of clear policies and in New Mexico, recreational marijuana was legalized, but the law does not provide employment protections and expressly affords several protections to employers. Employers in New Jersey are largely prohibited from rejecting a job applicant who tests positive for marijuana and in New York, employers are prohibited from taking any action against someone for using recreational marijuana when not working. Philadelphia employers are now prohibited from requiring prospective employees to undergo testing for the presence of marijuana as a condition of employment, and Virginia has addressed both recreational marijuana and cannabis oil.
Change in the Wind: Time for Employers to Review Their 2022 Workplace Drug Testing Polices
Currently, 18 states and D.C. have fully legalized marijuana for recreational purposes, including Connecticut, New Mexico, New Jersey, New York, and Virginia, all of which legalized the use of marijuana for recreational purposes in the last year. In addition, 36 states have legalized marijuana use for medicinal purposes.
Some of these states prohibit employers from taking adverse employment actions against employees for legal off-duty marijuana use, while other states are considering creating or amending marijuana legalization laws to either include employment protections or expand the coverage of existing laws.
In the wake of these laws, many employers are considering removing marijuana from the panel of drugs tested for in their employment policies, at least in the absence of reasonable suspicion that the employee is using or impaired by marijuana on the job. It is important to note in this regard that marijuana can be detected in an individual’s system up to 30 days after use, so a positive marijuana test does not necessarily mean that the individual currently is impaired.
LEGAL ISSUES
Amazon, Whole Foods Can Be Sued By Convicted Murderer Rejected For Delivery Job
In a Wednesday night decision, U.S. District Judge Valerie Caproni said Henry Franklin, a convicted murderer, could pursue a proposed class action after being turned down for a grocery delivery job at Cornucopia Logistics, which serves Amazon and Whole Foods.
Amazon determined after a background check that Franklin had lied on his April 2019 job application by answering “no” when asked if he had a criminal record.
New York law bars employers from rejecting job applicants based on their criminal histories unless the crimes relate directly to the jobs sought, or hirings would pose an unreasonable risk to the public. Without ruling on the merits, Caproni said the defendants failed to show that either exception applied, adding that Franklin “has adequately alleged that he is rehabilitated and no longer poses a threat to the public.” She also said she was “sympathetic to defendants’ likely position that they do not want a convicted murderer delivering groceries to their customers’ homes.”
DATA PROTECTION & PRIVACY
Best Practices for the Virginia Consumer Data Protection Act
There are plenty of trends and guidance when it comes to consumer data privacy, and it is critical for employers to become familiar with those that affect their business. The Virginia Consumer Data Protection Act (VCDA) Working Group on the Joint Commission on Technology and Science released a report on best practices and recommendations, identifying 167 points of emphasis around the VCDA. The Federal Trade Commission (FTC) issued a new enforcement policy statement warning companies against deploying illegal dark patterns that trick or trap consumers into subscription services and the director of the Consumer Financial Protection Bureau (CFPB) released an advisory opinion to address false identity matching. The topic is also making headlines. Washington has set a new record for the number of data breaches and ransomware attacks and in New York, Senate Bill S2628 was signed for an Act that requires employers who engage in employee electronic monitoring to provide notice to employees.
Four More Consumer Data Privacy Bills Introduced in US
The topic of consumer data privacy bills is trending, and lawmakers introduced new bills in Florida, Washington, Indiana, and the District of Columbia. Similar to the Colorado and Virginia laws, Washington state’s Washington Foundational Data Privacy Act (HB 1850) contains an annual registration requirement. Indiana’s bill appears to borrow concepts from the CPRA and Colorado/Virginia models, while the District of Columbia’s bill is based on the Uniform Personal Data Protection Act drafted by the ULC.
BIOMETRICS
Beware of Hidden Pitfalls: Biometric Privacy Guidance for California Employers
California employers who operate under the assumption that there are no applicable legal requirements that must be satisfied when using biometrics in their day-to-day operations are mistaken and California Labor Code § 1051 demonstrates otherwise. The California Consumer Privacy Act of 2018 (CCPA) and its soon-to-be successor, the California Privacy Rights Act of 2020 (CPRA), can result in criminal penalties for noncompliance. Labor Code § 1051 bars employers that require employees or job applicants to furnish their fingerprints from disclosing that fingerprint biometric data to any third party. Employers in the state should first ensure that their biometrics service providers and vendors are completely precluded from accessing any fingerprint data collected by the employer through the service provider/vendor’s technology and maintain robust policies and protocols to prevent inadvertent disclosures of employee fingerprint data to any third parties. Employers also must maintain robust security measures to safeguard employee fingerprint data.
Practical Guidance for Minimizing FTC Liability Exposure When Using Facial Biometrics
Employers who continue to operate under the assumption that there is no need to maintain any type of biometric privacy compliance program when using facial recognition software are extremely vulnerable to liability exposure by the Federal Trade Commission (FTC). Companies should take the steps now, even if they are not governed by any biometric privacy regulation, to build a privacy compliance program. The FTC is becoming increasingly aggressive when it comes to policing the misuse of facial recognition. The FTC guidance, “Facing the Facts: Best Practices for Common Uses of Facial Recognition Technologies,” draws upon three core privacy and security principles: privacy-by-design, simplified consumer choice, and transparency.
MOTOR VEHICLE RECORDS
The Top 4 FMCSA Violations of 2021
Businesses should do themselves a favor going into the new year by always being ready for an audit by the Department of Transportation. The results of the past year’s violation data indicate that of every 100 carriers who get audited, fewer than 5 pass without a violation. The top 4 violations of 2021 include: 1. Allowing a driver to operate with a suspended/revoked CDL; 2. and 3. Failing to implement an alcohol and/or drug testing program (or a random testing program); and 4. Allowing a driver with more than one CDL to drive a CMV. Other areas of noncompliance include using an unqualified driver, employing a driver who was disqualified from holding a CDL, and not keeping inquiries into the driver’s employment record in the driver qualification file.
E-VERIFY & IMMIGRATION STATUS
What’s in a Name? Resolving Form I-9 Document Discrepancies
While the idea of writing one’s last name on a form seems like an easy task, complications on an I-9 form may arise when names appear differently on various official documents. Rules relating to last name include: entering the full “legal” last name; including both names when an employee has two last names or a hyphenated last name; including the name in the last name field and “unknown” in the first name field when the employee has only one name; avoiding writing periods; and excluding name suffixes. Employees are instructed to include their full “legal” first name, as well as both names when two names exist. The middle initial is defined as “the first letter of your second given name, or the first letter of your middle name, if any.” The “other last names used” field should be considered in light of its former label, “maiden name,” which was changed to avoid the possibility of discrimination and to protect the privacy of transgender and protected individuals.
Completing the Form I-9 in 2022? Here’s what you need to know.
With roughly 11 million job openings and 7 million people looking for work in the US, human resource professionals can look forward to another frenetic hiring season in 2022, filled with the typical (and predictable) onboarding challenges and demands along with the added complication of a global pandemic that refuses to go away.
And when it comes to hiring challenges and conundrums, none is more acute than completing the error-prone and time sensitive Form I-9 employment eligibility verification process. During the past three years, the government has implemented a significant number of new I-9 and E-Verify related policies and procedures, due in large part to COVID-19 and its varying effects on both the hiring process and the underlying documents needed to prove identity and work authorization.
At the same time, we’ve also witnessed several new immigration-related policies from the Biden administration that in some cases, make meaningful changes to how employers verify work authorization of their newly hired employees.
If you’re new to HR or would simply like a refresher, read below for my “top 10” things to know for I-9 compliance in 2022! | https://nsshire.com/february-newsletter-2022/ |
Social media has come to play an increasingly important role in how businesses operate. Because social media sites allow users to share information, ideas, personal messages, and other content faster than ever before, employers have sought to harness the power of social media to remain relevant in the global marketplace. In addition, a majority of employers report using social media to screen potential candidates during the hiring process.
Not content with acquiring just publicly available social media information, some employers are even requiring applicants to provide personal login information for Facebook and other social media sites. In some cases, employers for sensitive positions seek to insulate themselves from later complaints that they overlooked a red flag. In other cases, the employers are simply zealous investigators. The response from state and federal legislators, however, has been to propose new laws that would definitively ban the practice.
Recent Reports of Employers Requesting Social Media Passwords
In March 2012, the Associated Press reported a story about a New York City statistician being asked for his Facebook username and password during an interview because the employer wanted to access his private profile.1 This story, along with a number of others like it, has sparked public outcry from privacy groups including the American Civil Liberties Union claiming that such requests are an invasion of privacy. These groups worry that although the statistician chose to withdraw his application because he did not want to work for a company that would seek such personal information, other prospective job candidates who confront the same request may not be able to afford to refuse.
Facebook has issued a statement emphasizing that its terms of service forbid “anyone from soliciting the login information or accessing an account belonging to someone else.”2 Additionally, Facebook’s Chief Privacy Officer has said that the practice of asking an applicant for his or her login information “undermines the privacy expectations and the security of both the user and the user’s friends. It also potentially exposes the employer who seeks this access to unanticipated legal liability.”3 The US Department of Justice has stated, however, that although it considers entering a social networking site in violation of the terms of service to be a federal crime, it would not prosecute such violations.4
Legislative Response
Reports of employers requesting social media passwords from job applicants have also drawn the attention of state and federal legislators. Notably, on March 26, 2012, US Senators Charles Schumer and Richard Blumenthal asked the US Department of Justice and the US Equal Employment Opportunity Commission (EEOC) to investigate whether the practice violates federal law while they draft legislation “that would fill in any gaps in federal law that allows employers to require personal login information from prospective employees to be considered for a job.” Although a similar provision was recently voted down as part of an amendment to the Federal Communications Commission reform package in the US House of Representatives, Senator Blumenthal’s office is currently drafting a bill to present to the US Senate. As the Senators emphasized in their letter to the EEOC, one of their primary concerns is that employers, by obtaining social media login information from applicants, could access private and protected personal information “under the guise of a background check [that] may simply be a pretext for discrimination.”
In April 2012, Maryland became the first state to pass a bill, the User Name and Password Privacy and Protection Act,5 prohibiting employers from asking employees or job applicants for social media login information. If the bill is signed into law as expected, it will take effect on October 1, 2012.6 Once in effect, Maryland employers will be barred from requesting or requiring that an employee or job applicant disclose login information for “any personal account or service” accessed through “computers, telephones, personal digital assistants, and other similar devices.”
Although targeted at social media, the bill’s prohibitions also include personal e-mail accounts, credit card accounts, and online banking accounts, among others. Additionally, the bill prohibits employers from taking or threatening any form of adverse action based on an employee’s or applicant’s refusal to provide a user name or password to a personal account accessed through a communications device. Notably, however, the bill does not authorize applicants or employees to sue employers who violate the Act, nor does it provide for civil or criminal penalties. However, an employee who is terminated in violation of the Act could presumably have a claim for wrongful discharge in violation of public policy. Furthermore, the bill contains specific exceptions which permit employers to require employees to disclose log-in credentials “for accessing nonpersonal accounts or services that provide access to the employer’s internal computer or information systems,” and while performing an investigation to ensure compliance with financial and securities laws.
Although no other state has passed a bill banning employers from asking for social media login information, similar legislation has been proposed, or is currently being drafted, in several states including California, Illinois, Michigan, Minnesota, New Jersey, New York, and Washington.7 Like their federal counterparts, the state legislators proposing these bills argue that asking employees or job applicants for social media passwords is a back-door attempt to learn about protected personal information and some argue that penalties are needed to discourage the practice. For example, the proposed Michigan bill provides for civil and criminal penalties against the employer and also permits an aggrieved party to bring a civil action to recover damages of at least $1,000.00 and reasonable attorneys’ fees and costs.8 The proposed New York bill provides for similar relief.9
Other Legal Considerations in Using Social Media to Review Applicants
Employers that use social media to screen applicants also need to be aware of the legal risks associated with the practice, whether or not the employer requests and uses the login information to view private profiles. One such risk is a potential claim that a decision not to hire an applicant was unlawful discrimination or retaliation for activity that is protected by law. Existing federal laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, and the Age Discrimination in Employment Act, prohibit employers from basing employment decisions on factors such as age, race, national origin, religion, and disability. Many state laws protect these classes, as well as marital status, sexual orientation, and gender identity, and some provide statutory or common law privacy protections and protection for lawful off-duty activities.
Finally, both state and federal laws protect employees from a refusal to hire due to protected complaints or lawsuits, workers’ compensation claims, arrests, and whistleblower activity. Because this type of information is often mixed in with other information readily available on social media sites, using these sites to screen candidates poses a risk that the employer will obtain information about protected class status or protected activity. If the protected information reaches the decision maker, directly or indirectly, an employee may use that fact to challenge a hiring decision in a subsequent state or federal law action.
Employers should also be cognizant that using social media to screen applicants may also implicate the Stored Communications Act, which generally prohibits intentional access to electronic information without authorization. While it remains an open question, an employer that accesses an applicant’s social media profile with the individual’s personal username and password may be doing so without authorization. For more information on the Stored Communications Act and other legal issues associated with social media, please see the second edition of Mayer Brown’s The Social Media Revolution: A Legal Handbook.10
Given the recent national focus and proposed legislation seeking to prohibit employers from asking job applicants for social media login information during the screening process, this is a rapidly developing area to which employers should pay close attention. Although Maryland is the first state to pass a bill prohibiting employers from requesting social media login information, other states appear to be close behind. Additionally, employers should be aware—even if the practice is not expressly outlawed in their jurisdiction—that accessing a public or private social media profile page carries the risk of rejected applicants claiming they were rejected for an illegitimate reason such as their race, age, or marital status. | https://www.lexology.com/library/detail.aspx?g=12f9f636-d679-49fc-b47f-be61d510e954 |
Beyond the usual criminal background check, several other items can appear on a background check. These include education verifications, employment history, recent places of residence, and social media checks.
Criminal records
Obtaining a criminal records check is an essential part of the hiring process. It helps employers find the best candidates and ensure the safety of their employees and customers. It is also important to understand what a criminal background check includes before undergoing one. Criminal history checks can uncover several essential pieces of information, including arrests and pending charges. However, it’s important to remember that the information contained in a background search is governed by local, state, and federal laws. In addition to criminal records, employers who conduct a background check can also access other types of information. These include motor vehicle records, which provide an employer with a candidate’s driving history, and credit checks, which reveal a candidate’s spending habits and financial responsibility. An employer that hires a candidate without properly accounting for a relevant criminal history could face negligent hiring claims, so employers are required to follow pre-employment background check regulations. These rules vary by state, but the federal Fair Credit Reporting Act generally bars reporting of bankruptcies older than ten years and of accounts placed for collection older than seven years, and some states prohibit the reporting of felony convictions more than seven years old. In some jurisdictions, defendants can ask for their records to be sealed, and some jurisdictions permit the expungement of juvenile records.
Employment history
Employers may ask for information on the applicant’s employment history during the hiring process. This is a legitimate step taken to ensure they hire trustworthy candidates, and lying about an employment history is never a good idea: it undermines trust. Prospective employers may also inquire about an applicant’s education, financial history, and social media usage, and whether the applicant has a criminal record. The EEOC (Equal Employment Opportunity Commission) offers guidance on how to handle these types of inquiries, and the agency can investigate discrimination based on sex, race, age, and genetic information. Keep in mind, however, that these inquiries can be subjective and do not always tell employers what they really need to know. A work history report will detail the applicant’s employment history, including supervisors’ names and start and end dates. It will also show whether the applicant has been promoted and whether the applicant performed well at previous jobs. This is especially important to employers who hire individuals to work with vulnerable populations.
Education verifications
During the hiring process, most employers look at several factors, and education is one of them. Education verification ensures the candidate is the right person for the job and helps employers make better hiring decisions. Some positions require higher-level education than others: some jobs require a graduate or doctoral degree, while others only require an entry-level degree, and a higher-level degree can help a candidate earn a higher salary. Education verification can also help employers identify false claims made by candidates. For example, a candidate may claim to have earned a degree at a university in a different country than the one they attended, which can lead to inaccurate verification results. Education verification can also be time-consuming: if a candidate has only recently earned a degree, it can take weeks for the school to post the record, and older documents may not be accessible electronically. A background check service can help employers get a comprehensive education verification quickly, although using such a service costs the employer extra money. Even so, spending more on a background check is better than hiring an unqualified employee.
Recent places of residence
Whether you are checking out a new candidate or vetting new tenants, getting the right person is essential, and having a background check done on a prospective employee, including where they have recently lived, will give you peace of mind. There are several benefits to running background checks on your employees, from ensuring they are a good fit for your organization to ensuring they are not a security risk. Address-history searches typically involve little cost, and the results are often near-instantaneous. A background check is one of the best ways to ensure that you are hiring the best people for the job, and it helps ensure that the good ones stick around.
Social media checks
Adding a social media check to a background check can help determine whether the candidate’s interests match your company’s goals. But it’s essential to do it right, as social media can carry legal risks; the Fair Credit Reporting Act (FCRA) requires that companies performing background checks follow specific guidelines. A social media check can be carried out on a candidate before or after a job interview, but it should be only one of many checks you conduct, and you will be better off doing it as part of a complete background screening process. A social media check can reveal red flags, but it can also surface information such as political leanings, religion, or erratic behavior, so it must be handled in a way that complies with applicable discrimination laws. Unlike a traditional background check, a social media check involves a real person looking, in real time, at the person behind the account, which makes it more likely to reveal the candidate’s true personality.
This trend only accelerated during the pandemic, yet many businesses do not know enough about these tools and their potential flaws, raising the risk that they may be failing to comply with their existing and evolving legal obligations. The technical assistance document recently published by the U.S. Equal Employment Opportunity Commission (EEOC), and the New York City ordinance on AI in hiring that will become effective Jan. 1, 2023, are some of the early signs that lawmakers are increasingly paying attention to the use of these tools in employment. With diversity, equity and inclusion top of mind for many businesses, employees and investors, it is critical for employers to understand the tools they are using. “From a DEI perspective, it’s imperative that we consider the role AI plays in recruiting and retention,” said Armstrong Teasdale’s Vice President, Diversity, Equity and Inclusion Sonji Young. “This is not a concept that is unfamiliar, or going away, and as organizations look to grow, the management of candidates and the pipeline must be carefully and appropriately vetted.” Moreover, employers need to be aware of regulatory and legislative moves in these areas, so they can take steps to prevent AI from introducing bias into hiring decisions.
What is an AI hiring tool?
Generally, AI refers to “the capability of a machine to imitate intelligent human behavior,” such as decision-making. In the employment context, federal and state authorities have been defining the term broadly as systems such as machine learning, computer vision, intelligent decision support, and other computational processes used to assist or replace human decision making in the hiring process.
Scholars have cautioned that machine learning and other algorithmic tools are predicated on a fundamental flaw: AI learns from pre-existing behavior, which itself may be faulty. AI systems trained on biased data from existing workplaces may be perpetuating the same imbalances or creating new ones, and may be doing so in violation of applicable law, by recreating employee populations with insufficient numbers of women, people of color, those with disabilities, and other marginalized groups.
As Armstrong Teasdale Chief Human Resources Officer Julie Paul has observed, “AI cannot be thoughtful about considering candidates who might not fall within bright-line criteria, yet may still be qualified. Those candidates will never be seen, because they will be screened out by the platform, and this creates not only missed opportunities, but liability for employers and hiring managers.”
State and local lawmakers and federal regulators are currently wading into this area to remind employers about their existing legal obligations regarding fair hiring, and increasingly to impose new obligations specific to the technologies themselves.
The EEOC’s Technical Assistance Document
Last fall, the EEOC launched the Algorithmic Fairness Initiative to ensure that employers using AI in employment decisions comply with federal civil rights laws that the agency enforces. One result of this initiative is the EEOC’s May 12, 2022 Technical Assistance Document (TAD), which addresses how Americans with Disabilities Act (ADA) requirements may apply to the use of AI in employment matters. The TAD notes that while vendors creating AI tools may vet them for race, ethnicity and gender bias, these techniques may not address employers’ obligations not to discriminate against individuals with disabilities. The EEOC cautions that “[i]f an employer or vendor were to try to reduce disability bias in the way” they do for other protected categories, this “would not mean that the algorithmic decision-making tool could never screen out an individual with a disability” because “[e]ach disability is unique.”
The TAD also provides recommendations on how employers can comply with the ADA, and addresses applicants who believe their rights may have been violated. In a noteworthy move, the EEOC lists—but does not mandate—various “promising practices” that employers could adopt to combat disability bias, such as asking the vendor of the algorithmic decision-making tool about its development, including whether the tool is attentive to applicants with disabilities.
Vetting for bias on the basis of disability promises to be a complex process, and one in which vendors may not be prepared to invest. While it remains unclear what weight, if any, these recommendations will be accorded by the courts or even the EEOC in the future, they provide key insights into the agency’s current views on employers’ expected conduct around AI use.
New Law for New York City Employers
New York City has taken a more proactive approach: starting Jan. 1, 2023, every business with employees in the city will be prohibited from using any computational processes that substantially assist or replace discretionary employment decision making (which the ordinance refers to as automated employment decision tools (AEDTs)) to screen employees or candidates for employment or promotion, unless the tool has undergone an independent bias audit no more than one year prior to its use and the employer has posted the results online. An acceptable independent “bias audit” includes testing of the AEDTs to assess potential disparate impact on persons based on race, ethnicity, or sex. The law does not specify who qualifies as an “independent auditor,” but presumably it would not include an in-house expert or the vendor who created the assessment. Notably, the statute imposes penalties of $500 to $1,500 per day that the tool is in use in violation of the law.
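By way of illustration only, the short Python sketch below shows one common way disparate impact is summarized: comparing each group’s selection rate to that of the most-selected group. It is not the audit methodology mandated by the ordinance or its proposed rules, and the group labels and numbers are invented for the example.

# Illustrative sketch only: a generic selection-rate / impact-ratio calculation.
# It is NOT the independent bias audit required by the NYC ordinance; the group
# names and figures below are hypothetical.
from collections import Counter

def impact_ratios(records):
    """records: iterable of (group, was_selected) pairs."""
    totals, hits = Counter(), Counter()
    for group, selected in records:
        totals[group] += 1
        if selected:
            hits[group] += 1
    rates = {g: hits[g] / totals[g] for g in totals}
    top = max(rates.values()) or 1.0  # guard against all-zero selection rates
    # Impact ratio: each group's selection rate relative to the most-selected group.
    return {g: (rate, rate / top) for g, rate in rates.items()}

sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)
for group, (rate, ratio) in sorted(impact_ratios(sample).items()):
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")

Under the familiar four-fifths rule of thumb used in adverse-impact analysis, an impact ratio below 0.8 is often treated as a signal that warrants closer review, although the ordinance and its implementing rules set their own requirements.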
Given the roughly 200,000 businesses operating in New York City, this law is poised to have a significant impact, yet despite the availability of recently published proposed regulations, its broad scope leaves many open questions. It also remains unclear whether long-standing computer-based analyses derived from traditional testing validation strategies are covered by the law, or whether passive evaluation tools, such as recommendation engines used by employment firms, could fall within the scope of the law.
Looking Ahead
Businesses will no doubt continue to feel pressure to use AI and other tools to process employment applications efficiently, and other states and localities are likely to issue their own laws and regulations. In this complicated and evolving landscape, employers should proceed with caution to avoid potentially violating both existing anti-discrimination obligations and new rules targeted at these tools. | https://www.armstrongteasdale.com/thought-leadership/artificial-intelligence-in-hiring-a-double-edged-sword/ |
Preparing for the hiring process
Taking the time to carefully plan the hiring process is important and ensures that you hire an employee with the right mix of skills and characteristics for the job.
Planning the hiring process
Most employers recognise the fact that employees are their greatest asset, and the right recruitment and induction processes are vital in ensuring that a new employee becomes effective in the shortest possible time. A mistake during the planning can be costly and could damage the future employment relationship.
You should plan to make sure:
- you have a clear idea of all the costs of hiring someone;
- you follow a clear, consistent employment process;
- you have identified the real requirements and skills needed for the job and have clearly communicated these to all job applicants;
- the privacy and confidentiality of applicants are maintained;
- advertising, selection and hiring decisions are made fairly and not on unlawful grounds;
- communications with applicants are clear, with no outstanding areas of uncertainty; and
- there is an induction process giving the employee a fair chance of reaching the expected standard of performance.
Written By: Surayya Taranum, Ph.D.
I was about to finish my postdoc soon, my project was nearly complete.
I had submitted my research manuscript for publication; it was accepted after some revisions.
I was going to have a great closure to my academic career, something from which a lot of PhDs are deprived.
I was ready to move on in my career, and explore new, exciting territories.
In short, I was seriously considering transition into industry.
I did a lot of research to find out what options were available to me in the industry.
I figured out what industry roles interested me the most, and prepared my resume to target those positions.
I was ready.
At least, that is what I thought.
To make my transition efforts successful, I attended industry events and networked with industry professionals.
I got several informational interviews.
People were generous with their time and knowledge, and willing to share information with me.
I even got a few referrals. This was exciting!
I prepared thoroughly for all sorts of interview questions.
I thought I was going to breeze through because I had done everything perfectly and made a great impression.
I had no doubt that I would get the position of my choice, in the company of my choice.
Until I came across an article that talked about HR recruitment trends.
I realized that my transition efforts would not be successful if I did not educate myself on the hiring trends in industry and align my job search and interview preparation accordingly.
So I set out to learn and understand the latest hiring trends and how I could use them to my advantage.
Why Hiring Trends Matter For Your PhD-Level Job Search
PhDs who cultivate an awareness of industry hiring trends will find it easier to create an effective job search strategy.
An effective job search strategy is one that includes the right kind and the right amount of preparations.
For example, Jobvite reports that 83% of recruiters consider culture fit an important factor in making hiring decisions.
So this is one of the things you should be including in your job search preparations.
PhDs can use their knowledge of hiring trends to ask the right questions during informational interviews.
They can narrow down the focus of their preparation by leveraging this knowledge, to areas that matter the most to employers.
They can use it to demonstrate their commercial acumen and commitment to an industry career during networking, in interviews, and through their online presence.
According to a CareerBuilder survey, 70% of employers use social media to screen job candidates during the hiring process.
Ultimately, being well-informed of hiring trends in industry will not only help in getting hired in industry, but also in spotting career opportunities after transition.
How Savvy PhDs Integrate Hiring Trends Into Their Successful Industry Job Search
In academia, PhDs are used to working on their own, on independent projects, often purely for the sake of learning.
Industry, however, is market-oriented and only projects that will lead to revenue generation are considered.
This means that anyone considering a career in industry must learn to educate themselves and align their career plans with current and future trends.
Knowledge of hiring trends is extremely important; they indicate the direction in which most recruitment efforts in that company or industry will align.
PhDs ignorant of hiring trends will face more challenges in finding a great job in industry.
They will find it harder to face the interview, or to convince the company that they have the skills necessary to do the job if hired.
PhDs working toward industry transition should consider these 5 industry hiring trends while crafting their job search strategy…
1. The rise of the machines – AI and data-driven assessment and hiring tools in recruitment.
Hiring new talent is a major decision for a company and employers want to be sure that they are recruiting the right candidate.
Interviewing trends are changing to accommodate new technology and workplace needs.
Diversity and inclusion are now a top priority for recruitment in industry. Employers are using assessment tools to overcome recruitment bias.
The increasing use of Artificial Intelligence (AI) and data-driven tools in HR analytics means that employers now have a new tool at their disposal for evaluating job applicants.
AI is used to assist employers in screening and shortlisting potential job candidates with the desired skills and experience.
Employers are also increasingly using cognitive assessment tools to evaluate potential hires and choose the best fit for the company.
These tools allow screening of job candidates based on not only experience, skills and aptitude but also motivation and behavioral factors.
New tools used in the interview process include assessments conducted in virtual reality (VR), on-the-job auditions, video-based assessment, psychometric tests and casual meetings outside the workplace.
2. Hiring for potential is the new normal.
1. According to the Bureau of Labor Statistics, the US has over 7.5 million unfilled jobs. This tight labour market has made companies reconsider their hiring criteria.
Companies do not want to miss out on talented hires who do not conform to the traditional academic and career routes.
Instead, employers are increasingly willing to provide competency-based training to develop potential talent for the company.
Basing hiring decisions on the potential career trajectory of the job candidate, rather than on their educational qualifications or experience level, is the new normal.
PhDs serious about an industry career must be aware of, and prepared for, fierce competition for industry roles.
PhDs must be willing, and able, to demonstrate high value in the job market.
They must build on their rigorous research training and work toward upskilling to meet the needs of the job market.
PhDs with a history of developing new skills are top candidates for any employer in industry.
Transferable skills are key to getting hired in industry.
According to a report by LinkedIn, the top transferable skills valued by employers include creativity, persuasion skills, teamwork, adaptability and time management.
PhDs who are able to demonstrate their transferable skills are top candidates for any industry role.
3. Videos are here to stay.
Videos are now the employers’ top choice for communicating their brand to potential hires and for screening job candidates.
LinkedIn research shows that over 75% of job seekers research the company’s employer brand before applying.
Job candidates are increasingly selective about the roles and companies they join.
Companies are increasingly investing in employer branding to attract the best candidates and retain their best employees.
Employer branding refers to a company’s reputation and popularity as an employer. It also shows its employee value proposition.
Companies with a poor employer brand struggle to attract and retain talented employees.
Videos are great as tools to communicate the employer brand and company culture.
More and more companies are leveraging videos in their recruitment branding strategy to communicate their employee value proposition to talented job seekers.
A company’s videos are a glimpse into its organizational culture.
Savvy PhDs can leverage this information in their resume and interview preparation.
PhDs can use information gathered from company videos and other sources to develop discussions during networking and informational interviews.
Informational interviews are a core component of a job search strategy.
PhDs who are able to demonstrate knowledge of the company in informational interviews improve their chances of generating referrals and getting hired.
4. Social recruiting is big.
Social recruiting refers to the use of social media platforms for recruiting.
Employers are using various social media channels including Facebook, LinkedIn and Twitter, as well as blogs and platforms like Glassdoor to attract new talent.
More and more companies are proactively reaching out to high-calibre candidates via social media to vet them, and build a relationship with them.
In fact, social recruiting is one of the top 10 recruiting trends.
Previously, companies hired talent locally, but with increased remote working options and a limited local talent pool for skilled jobs, recruiters are reaching out to international candidates to fill vacancies.
PhDs can leverage social media to make their career transition into industry, and land the industry role of their choice.
The social media footprint of candidates provides a wealth of information about their character and whether they are a good fit for the role and company.
According to a CareerBuilder survey, 70% of employers use social media to screen job candidates during the hiring process.
PhDs able to convey their personal brand through a polished, industry-oriented LinkedIn profile and social media networking improve their chances of getting hired.
Building a personal brand will highlight their key skills, demonstrate their business acumen and attract recruiters.
5. It is a candidate-driven job market.
This is music to the ears of any highly-skilled job seeker.
Research shows that 90% of recruiters believe that the labor market is candidate-centred and is likely to remain so in the future.
Job candidates have no hesitation in turning down job offers that they find unsuitable or less exciting.
Many companies are now competing not only for recruitment but also for retention of talent.
Great candidates can easily gain multiple job offers by being strategic in their approach.
Recruitment focus is shifting from what the employer wants to what the candidate wants.
So the outlook for PhDs seeking to transition into an industry career is…..great!
This is the right time to make the switch.
This is the best time for transitioning into industry.
All PhDs need is an excellent job search strategy that they execute perfectly.
Because, although the market is candidate-driven, employers are still careful in their recruitment.
Companies want to be certain that the candidate they hire has not only the right technical skills but also the soft skills required for the job.
They want to be sure that the candidate will be committed to the job and company if hired.
So what can PhDs do to take advantage of this candidate-driven job market and be successful in getting hired?
PhDs keen to transition into industry should take these steps in their job search strategy:
1. Be committed to industry transition, and ditch the academic mindset.
2. Develop their business acumen to understand business dynamics and demonstrate value to employers.
3. Develop their soft skills, as teamwork, negotiation, time management and communication are essential, and emotional intelligence highly valued in industry.
4. Create the job search strategy that will get them hired even in a competitive job market.
Understanding the latest trends and developments in hiring is essential to getting hired. It is how you can prepare yourself to shine in interviews. Don’t leave this to chance; do your research. A few of the trends you should be aware of: the rise of the machines (AI and data-driven assessment and hiring tools in recruitment), hiring for potential as the new normal, the staying power of video, the rise of social recruiting, and a candidate-driven job market. These trends and others are influencing how hiring managers make decisions. The more you know, the higher your chances of getting hired.
To learn more about the Top 5 Hiring Trends Savvy PhDs Leverage To Get Hired In Industry, including instant access to our exclusive training videos, case studies, industry insider documents, transition plan, and private online network, get on the wait list for the Cheeky Scientist Association. | https://cheekyscientist.com/hiring-trends-that-help-you-get-hired/ |
"No award or honour, no matter my admiration for the person for whom it was named, means so much to me that I would forfeit the right to follow the dictates of my own conscience."
J.K. Rowling is handing back a prestigious humanitarian award after a disagreement with its namesake's daughter.
The Harry Potter author announced on her website on Thursday that she was returning the Robert F. Kennedy Human Rights Ripple of Hope honor to the organization's president, Kerry Kennedy, after the late senator's daughter criticized her over what she called "deeply troubling transphobic tweets and statements."
The award, given to those who show "commitment to social change", was presented to Rowling in December for the work of her children's charity, Lumos. She called it at the time "one of the highest honors I've ever been given".
But following a series of controversies surrounding the author's stance on gender, Kennedy lashed out at her own honoree.
"I have spoken with J.K. Rowling to express my profound disappointment that she has chosen to use her remarkable gifts to create a narrative that diminishes the identity of trans and nonbinary people," she wrote in a statement, listing numerous examples that have led to the Brit being branded as a TERF (Trans-Exclusionary Radical Feminist) by her critics.
"From her own words, I take Rowling’s position to be that the sex one is assigned at birth is the primary and determinative factor of one’s gender, regardless of one’s gender identity—a position that I categorically reject," Kennedy wrote. "The science is clear and conclusive: Sex is not binary."
While she stopped short of revoking the award, she didn't have to — as Rowling handed it back three weeks later.
"Kerry Kennedy, President of Robert F Kennedy Human Rights, recently felt it necessary to publish a statement denouncing my views on RFKHR’s website. The statement incorrectly implied that I was transphobic, and that I am responsible for harm to trans people," she wrote in a statement on her website.
"As a longstanding donor to LGBT charities and a supporter of trans people’s right to live free of persecution, I absolutely refute the accusation that I hate trans people or wish them ill, or that standing up for the rights of women is wrong, discriminatory, or incites harm or violence to the trans community."
She said she had received thousands of private emails of support from people from both within and without the trans community, who have expressed fear of voicing an opinion because of what they felt was a toxicity surrounding the discussion.
"RFKHR has stated that there is no conflict between the current radical trans rights movement and the rights of women. The thousands of women who've got in touch with me disagree, and, like me, believe this clash of rights can only be resolved if more nuance is permitted in the debate," she wrote.
"In solidarity with those who have contacted me but who are struggling to make their voices heard, and because of the very serious conflict of views between myself and RFKHR, I feel I have no option but to return the Ripple of Hope Award bestowed upon me last year."
"I am deeply saddened that RFKHR has felt compelled to adopt this stance, but no award or honour, no matter my admiration for the person for whom it was named, means so much to me that I would forfeit the right to follow the dictates of my own conscience."
#IStandWithJKRowling was trending soon afterwards — although the hashtag was used by people who both did and didn't.
The BBC's first adaptation of JK Rowling's series of Strike novels was a ratings hit last weekend, with two more adaptations on the way. But according to Rowling, it's only the beginning of what… In an interview with the BBC, Rowling explained that she had warned actor Tom Burke, who stars as private investigator Strike in the adaptation of her books, that he could be playing the character. I love this show almost as much as the books. As most folks know, Robert Galbraith is the pseudonym of JK Rowling. The show follows the books fairly well, and my only complaint is that each book didn't have more episodes.
The BBC's new adaptations of JK Rowling's Cormoran Strike novels haven't been met with much disdain from hardcore fans, however, despite a smattering of alterations made in the transition from… The Cormoran Strike books are a crime fiction series written by JK Rowling, but published under the pseudonym Robert Galbraith. Prior to this novel, J.K. Rowling had released The Cuckoo's Calling…
The first three books in the Strike series have been adapted for television, produced by Brontë Film and Television. J.K. Rowling's original intention in writing as Robert Galbraith was for the books to be judged on their own merit. Born Joanne Rowling on 31 July 1965 (age 54) in Yate, Gloucestershire, England; occupation: children's author; nationality: British; genre: fantasy; best-known work: the Harry Potter series; major awards: the Nestlé Smarties Book Prize…
About J.K. Rowling: Joanne Rowling, also known as J.K. Rowling, is the author of the Harry Potter series, one of the most successful book series in the history of humankind. The reason she writes as J.K. Rowling is… J.K. Rowling has 244 books on Goodreads with 29,549,244 ratings; her most popular book is Harry Potter and the Sorcerer's Stone (Harry Potter, #1). J.K. Rowling is the author of the much-loved series of seven Harry Potter novels, originally published between 1997 and 2007. Along with the three companion books written for charity, the series has sold over 500 million copies… List of all J.K. Rowling books in order: a complete printable listing of all J.K. Rowling books, including the newest J.K. Rowling book. JK Rowling was born Joanne Rowling in the town of Yate, Gloucestershire, in 1965…
Here is the list of all the books written by JK Rowling, including the Harry Potter books in chronological order and her latest novel. 12. The Cuckoo’s Calling: the first book of the Cormoran Strike series. JK Rowling is one of the most well-known authors of our time, but did you know she also writes under the pseudonym of Robert Galbraith? But why does she write under this name, and what is the story behind the pseudonym?
2019/12/27 · Cormoran Strike Book 5, by J.K. Rowling writing as Robert Galbraith, will arrive in Fall 2020. Here's what we know about the novel's release date. It’s impressive that it is already finished. J.K. Rowling is of course the infamous English author who wrote the Harry Potter series of books (I still prefer Enid Blyton!). Probably the most famous author in the world, Rowling was the first author to become a billionaire. Following the success of the TV series Strike, JK Rowling’s latest Robert Galbraith novel is more eagerly anticipated than ever. Read an exclusive extract here.
— J.K. Rowling (@jk_rowling), November 23, 2018. If you need reminding, Al is the only Rokeby sibling that Strike has much contact with; we first see him in The Silkworm, where he eagerly helps out Strike in the last stages of the… Looking for books by J.K. Rowling? See all books authored by J.K. Rowling, including Harry Potter and the Philosopher's Stone and Harry Potter and the Chamber of Secrets, and more… J.K. Rowling is best known…
J.K. Rowling Books Checklist: reading order of the Harry Potter series, the Cormoran Strike series, the Harry Potter companion books, and a list of all J.K. Rowling books (over 20 books). Cormoran Strike book 4 release date: JK Rowling drops HUGE bombshell on Lethal White. JK Rowling has issued a big update about the status of her…
UPDATED (9/16):
A spokesperson for J.K. Rowling has denied speculation that the embattled author’s male pseudonym, Robert Galbraith, was inspired by a famous conversion therapist.
On Friday, Rowling released Troubled Blood, the latest installment in her series of Cormoran Strike novels, under the pen name Robert Galbraith. The 55-year-old writer wrote four previous novels under the alias, which was originally intended to distinguish her adult-oriented fare from the Harry Potter series. Although Rowling did not originally disclose that she was the woman behind the nom de plume, a computer program reportedly unmasked the author’s true identity.
The particular choice of pseudonym, however, aroused suspicions earlier this year after Rowling penned a series of transphobic tweets, which were later followed by a 3,000-word op-ed attacking the trans rights movement. The name is awfully close to that of Robert Galbraith Heath, a conversion therapist who pioneered the since-discredited use of shock treatments to “cure” homosexuality.
But after Troubled Blood came under fire earlier this week for a transphobic subplot in which a serial killer hunts his victims while dressed in women’s clothing, Rowling denied that the alias is a reference to “ex-gay” therapy. Rowling “wasn't aware of Robert Galbraith Heath” when selecting the name, a representative said.
“Any assertion that there is a connection is unfounded and untrue,” the unnamed spokesperson added in a statement to Newsweek. | https://www.them.us/story/jk-rowlings-pen-name-also-name-of-anti-lgbtq-conversion-therapist |
“Neither party has made false accusations for financial gains. There was never an intent of physical or emotional harm,” it continued, finishing by saying Heard will donate the financial gains from the divorce to charity.
For the record, this was our FULL joint statement.To pick&choose certain lines & quote them out of context, is not right.Women, stay strong. pic.twitter.com/W7Tt6A3ROj
— Amber Heard (@realamberheard) December 8, 2017
While Heard did not directly reference the recent outcry over Depp’s casting as the titular character in the upcoming “Fantastic Beasts” sequel, “Fantastic Beasts and Where to Find Them: Grindelwald’s Crimes,” her statement comes after J.K. Rowling posted a defense of Depp’s casting on her official website.
“Based on our understanding of the circumstances, the filmmakers and I are not only comfortable sticking with our original casting, but genuinely happy to have Johnny playing a major character in the movies,” Rowling said Thursday.
Warner Bros. also issued a statement, saying: “Based on the circumstances and the information available to us, we, along with the filmmakers, continue to support the decision to proceed with Johnny Depp in the role of Grindelwald in this and future films.”
After the first cast photo was released for the sequel, fans online were urging the studio to recast Depp, similar to Kevin Spacey’s replacement in “All the Money in the World” by Christopher Plummer. | https://www.thewrap.com/amber-heard-posts-tweet-in-wake-of-johnny-depps-fantastic-beasts-casting-backlash/ |
J.K. Rowling is the author of the record-breaking, multi-award-winning Harry Potter novels. Loved by fans around the world, the series has sold over 450 million copies, been translated into 78 languages, and been made into 8 blockbuster films. She has written three companion volumes in aid of charity, including Quidditch Through the Ages and Fantastic Beasts… JK Rowling Email: To contact J.K. Rowling via email, use the following email address: [email protected]. Scholastic was JK Rowling’s first publisher in the US, after she had been rejected in the UK by the first twelve publishers she approached. Now that Robert Galbraith’s true identity is widely known, J.K. Rowling continues to write the crime series under the Galbraith pseudonym to keep it distinct from her other writing and so people will know what to expect from a Cormoran Strike novel. The Robert Galbraith website can be found here. The final confrontation with Voldemort is imminent, a great battle is at hand, and Harry, with courage, will do what must be done. Never have the questions been so many, and never as in this book do we get the satisfaction of answers. Reaching the last page, you will want to reread everything from the beginning, to close the circle and delay…
26/06/2019 · It’s been a long time coming, but finally we have a definitive answer: JK Rowling is a TERF. There have been multiple instances wherein the infamous Harry Potter author demonstrated solidarity with radical feminists who have waged a vicious smear campaign against transgender women. Once, she… J.K. Rowling (verified account). Reply to @jk_rowling: "Use Google's inbox, woman. It's so much more practical and quicker for answering emails." Reply to @jk_rowling: "this is why Twitter is so much better, you can just like tweets."
JK Rowling Net Worth 2019: From Rags To Riches. She has lived a rags-to-riches life, facing poverty until the release of her first novel in the series, Harry Potter and the Philosopher's Stone, in 1997. 19/05/2006 · JK Rowling does not have a public email address, so you are better off sending her a letter at this address: J.K. Rowling, c/o Bloomsbury Publishing Plc, 38 Soho Square. The Volant Charitable Trust was set up by J.K. Rowling in 2000 as a grant-making trust to support Scottish charities, groups and projects, both national and community-based, which help alleviate social deprivation, with a particular focus on women, children and young people at risk. J.K. Rowling Facts: 1. Try, Try Again. Why is it that all the greats are rejected at first? J.K. Rowling's first Harry Potter manuscript was rejected by a whopping 12 publishers. 13/11/2016 · BIOGRAPHY OF J.K. ROWLING: writer and screenwriter. Her father is a mechanical engineer at Rolls-Royce, while her mother is half Scottish and half French. She has a younger sister, Diana. Joanne recounts that she has always been a daydreamer and began writing stories as early as age six, to entertain her family.
17/04/2019 · Harry Potter author interview with J.K. Rowling. For those who enjoy my videos, please donate $1 to garland3688@ on PayPal so that I can continue to be able to afford medical bills and obtain rare videos. God bless! Also, if anyone can get hold of other rare J.K. Rowling pre-2001 videos that are not available online, please… J. K. Rowling. Salani, 2013. 3 reviews. To mark Harry Potter's 15th anniversary, a box set of the entire series is offered, comprising seven volumes: Harry Potter and the Philosopher's Stone, Harry Potter and the Chamber of Secrets, Harry Potter and the Prisoner of Azkaban, Harry Potter and the Goblet of Fire, Harry… 'There was something irresistible to me about his name, and the idea that such a brilliant woman might be a distant relative of the buffoonish McGonagall.' - J.K. Rowling. Pottermore Presents is a collection of J.K. Rowling's writing from the Pottermore archives: short reads originally featured on Pottermore, with some exclusive new additions. 21 Massive Things J.K. Rowling Has Revealed About "Harry Potter" On Twitter: we've all been pronouncing Voldemort's name wrong. Posted on July 31, 2017, by Jen Abidor, BuzzFeed. J.K. Rowling (@jk_rowling): "20 years ago today a world that I had lived in alone was suddenly open to others. It's been wonderful. Thank you."
She's a BIG advocate of writing letters rather than emails. Here's the address to which you can send your fan mail. NOTE: she won't autograph your copies of her novels, and she won't answer questions related to her novels or specific topics… She is the creator of probably the most famous, and certainly the best-loved, character in contemporary fiction. She is also the author of her own escape from an existence on the brink of poverty, with no job and few prospects. On the one hand there is J.K. Rowling, who wrote, and continues to write, the Harry Potter novels, a literary phenomenon… Now an internationally renowned author of the best-selling Harry Potter series, J. K. Rowling wrote her first book, Harry Potter and the Sorcerer's Stone, while her young daughter napped. After being turned down several times, Rowling finally found a publisher. In the years since, her writing has earned her the Hugo Award, the Bram Stoker Award…
JK Rowling: an ebook written by Cari Meister. Read this book using the Google Play Books app on your PC, Android, or iOS devices; download it for offline reading, and highlight, bookmark or take notes while you read JK Rowling. 15/09/2011 · J.K. Rowling at the Harvard Commencement; read the transcript of her speech: bit.ly/1zeUPfA. This annotated sketch by J.K. Rowling shows the layout of Hogwarts School of Witchcraft and Wizardry, complete with the giant squid that lives in the lake. In an accompanying note to her editor, J.K. Rowling stated, 'This is the layout as I've always imagined it'. From the Sorting Hat to wand woods, the Marauder's Map and more: discover the writing and content released by J.K. Rowling, now available on…
Rowling: ‘I had no intention of killing Lupin’
Entertainment Weekly has released a new article on the upcoming Harry Potter and the Deathly Hallows – Part 2 DVD/Blu-ray. It reveals a bit more of the conversation between actor Daniel Radclife and author J.K. Rowling.
It is clear this talk will provide some more backstory and reasoning that fans have been eager for. After talking about killing Ron, Rowling now touches on the fates of Hagrid and Professor Lupin.
“Rowling tells Radcliffe that the image of Hagrid cradling ‘dead’ Harry – a bookend moment to the beginning of the series, when Hagrid brought infant Harry to the Dursleys – stuck with her the entire time she wrote the books and she never let it go. If she had, Rowling says Hagrid would have been a ‘natural’ target for elimination. ‘That image kept him safe,’ she says.”
“On the DVD, Rowling shares with Radcliffe that when she created Lupin’s character, she planned for him to survive the events of the finale. While the author has said as much in other interviews, here, she elaborates, explaining that she changed her mind when she realized that her last Harry Potter story was really about war, and that ‘one of the most horrifying things about war is how it leaves children fatherless and motherless.’ The most powerful way she could dramatize that idea, she says, was to kill a set of parents that were dear to readers. ‘I had no intention of killing [Lupin],’ says Rowling. “But then it dawned on me he had to die.”
EW also hints at much more! For all we have on the upcoming Blu-ray/DVD release, head over to our Movies section.
Thanks to Hypable for the tip! | http://www.mugglenet.com/2011/11/rowling-i-had-no-intention-of-killing-lupin/ |
Reactions To J.K. Rowling’s Johnny Depp Statement Make Clear Where Her Fans Draw The Line
Harry Potter has long maintained a mass of fervent, devoted followers, but reactions to J.K. Rowling's Johnny Depp statement show that even a fandom that faithful has its boundaries, and now, they've near-unanimously been crossed. After an extended period of silence, Rowling finally responded to urges to address Depp's casting in Fantastic Beasts: The Crimes of Grindelwald, for which he'll reprise his role as the titular dark wizard after making a cameo in last year's Fantastic Beasts and Where To Find Them.
The decision to keep Depp onboard has been widely controversial, as the first Fantastic Beasts arrived in theaters only months after his ex-wife Amber Heard accused him of domestic abuse during their 2016 divorce. Depp ended up agreeing to a $7 million settlement, and in a joint statement released to TMZ, both actors claimed that Heard planned to donate the money to charity and that "there was never an intent of physical or emotional harm."
Nonetheless, fans have been skeptical about Depp's involvement, prompting the film's team to publicly address the backlash. Fantastic Beasts director David Yates called the matter a "dead issue" during an interview with Entertainment Weekly late last month, and on Thursday, Rowling also weighed in. "I'm saying what I can about the Grindelwald casting issue here," she tweeted along with a link to a longer statement on her website. (Rowling's rep told Bustle this is the only comment she will be making on the matter.) Rowling wrote that the allegations surrounding Depp "deeply concerned" her, and that while they considered recasting, they ultimately decided to keep Depp in the part. She continued:
Viewers, however, were outraged by Rowling's response, and soon took to Twitter to air their disdain.
Some Compared Her To Lena Dunham
Lena Dunham and JK Rowling racing to see who can release the most problematic, tone-deaf and faux-feminist statement of 2017 https://t.co/OBvM9HFOF0— (@louisstaples) #
JK Rowling joins Lena Dunham and Kate Winslet in the "I believe women except for when I don't feel like it" dustbin.— (@lizzylaurie) #
Dunham, an outspoken feminist, also came under fire last month after supporting Girls writer Murray Miller when he was accused of sexual assault. (Murray has denied the claims, but Dunham later apologized for coming to his defense.)
These People Called For A Boycott
dont care if you love harry potter since you were a child, jk rowling and the filmmakers of fantastic beasts not only support an abuser on the cast, but ARE HAPPY with him there do to give your money to this film dont watch on the movie theater thats the only way to affect them— (@gvnegirl) #
JK Rowling is cancelled for me. Good luck supporting domestic abusers, as for me I'm going to be very happy not supporting that movie.— (@winterromanoff) #
That's one way to protest.
While Others Pointed Out The Impact Of Rowling's Words
@kuaku_yushi @jk_rowling I think the worst part is that if you check the #JohnnyDeppIsMyGrindelwald tag so many people have interpreted JKR's statement as some kind of "proof" that the accuser is a liar. This incredibly irresponsible attitude is exactly what deters victims from reporting abuse. #MeToo— (@cafwee) #
@caseimz333 @jk_rowling He's an abuser. And what's his punishment? Being part of a big movie, in a big caracter, because they keep protecting him. That's why the fans don't want him as Grindewald, we don't want abusers being protected anymore.— (@izascaramuza) #
This is about so much more than one movie.
Some Referenced Her Work
there will be a time when we must choose between what is easy and what is right" - dumbledore, GoF jk rowling needs to go back and read her own views cause she's obviously forgotten...or she didn't believe in it in the first place— (@dracomallfoys) #
dear @jk_rowling, I'm glad you have created a new Patronus, the hipopocrita.— (@oivoldemort) #
Added another Twitter user: "jk rowling legacy [sic] is literally about a boy who has been emotionally and physically abused and the danger in looking the other way bc the truth is inconvenient."
Or Outright Reclaimed It For Their Own
jk rowling is canceled, harry potter belongs to the fans now and it's not hers anymore— (@ria_tee) #
This is a full-on revolution.
And Many Unsubscribed From The Fandom
jk rowling has worked so systematically and methodically to destroy my love of her creation. it's fascinating.— (@kendrajames_) #
Me about JK Rowling— (@chriisevans) #
For some, this was the final straw.
But Mostly, Fans Were Just Flat-Out Disappointed
@jk_rowling Absolutely unacceptable. You cannot claim to support women & keep an alleged domestic abuser in such a prominent role in one of the biggest franchises in Hollywood. To give him a continued voice only perpetuates & normalizes the problem within the industry. I'm so disappointed.— (@katwquinn) #
This is an intensely disappointing word salad from JK Rowling re Johnny Depp. It manages to say nothing & yet give Depp a pass in the most evasive, passive way possible. Ugh. https://t.co/bZ5Sy9NfqH— (@moryan) #
Rowling is generally quite beloved, so this came as a major blow.
And At Least One Person Damned The Entire Year
I can't believe JK Rowling is cancelled. 2017 is really a garbage year.— (@martyschmarty) #
Is it 2018, yet?
Of course, there were some people in Rowling's court. "To all the ignorant people that keep [attacking] Jk Rowling after her statement ... GET OVER IT! [IT'S] OBVIOUS SHE KNOWS MORE THAN YOU THINK ABOUT THE CASE," tweeted one user. And another wrote: "JK Rowling clearly implies in her statement that she has inside info about allegations against Johnny but you still are too proud to admit that you are in the wrong side."
But by and large, the message is clear: Rowling has landed on the bad side of her fandom, and from the sounds of it, the Harry Potter franchise might not be strong enough to withstand it. | https://www.bustle.com/p/reactions-to-jk-rowlings-johnny-depp-statement-make-clear-where-her-fans-draw-the-line-7507809 |
I wrote up our Scotland holiday recently (Part 1, Part 2, Part 3), but here’s a little bonus post for the HP fangirls out there like myself.
I started writing this post in the bar of Edinburgh’s famous Balmoral Hotel, where J.K. Rowling lived while she wrote the final Harry Potter book.
As I sat there with my laptop, propped up by a tartan-plaid pillow, sipping a very overpriced cocktail, the staff and other patrons were probably thinking, “Wow, she could be the next J.K. Rowling, writing the next big thing!”
Or… maybe not. Maybe she’s just writing a blog that only her parents read. Let’s give the poor girl some complimentary bar snacks.
Anyhow, in addition to the Balmoral Hotel, there are quite a few Harry Potter-related sights in Scotland, and M and I saw several of them on our trip:
Glencoe
The third movie, Harry Potter and the Prisoner of Azkaban, filmed the scenes outside Hagrid’s hut on location here. It’s stunningly gorgeous, and we had lunch here in the hikers’ inn/pub after our hike.
Jacobite Steam Train
The scenes of the Hogwarts Express with the train steaming its way north to Hogwarts along dramatic scenery and across the 21-arched Glenfinnan viaduct were filmed in Scotland. We didn’t actually go see this ourselves, as it’s a bit farther north than where we were, but you can actually take a ride on a steam train along the route (the train doesn’t look like the red Hogwarts Express, though; you can see that on the Warner Brothers Studio Tour.)
Edinburgh
Edinburgh is where J.K. Rowling primarily wrote the books, and we saw more than a few people walking around wearing their HP fan attire, from Chudley Cannons t-shirts to full-on wizard robes.
The Elephant House Cafe
The self-proclaimed “birthplace of Harry Potter,” this is the coffee shop where J.K. Rowling spent time writing the first book, back when she was penniless. Lots of fans flock here and there are now photos of her sitting there (posed, after she became famous) up in the restaurant, and supposedly there’s a lot of HP-themed fan graffiti on the walls of the loos (I didn’t actually go in to confirm this).
Grey Friars Kirkyard
This is definitely one of the spookiest graveyards I’ve been to, and I can imagine Ms. Rowling wandering around here and drawing inspiration from the place. In fact, there’s a Thomas Riddell buried here. She’s confirmed that there could be a subconscious connection there, but hasn’t said outright that’s where she got the name for the most dastardly wizard of all time. (There’s also a McGonagall buried in here.)
Victoria Street
This curved, whimsical street was the inspiration for Diagon Alley—there’s even a joke shop. It now has a shop selling HP merch, called Diagon House, and a queue out front to get in.
J.K. Rowling’s Hand Prints
The City Chambers building along the Royal Mile features the bronze hand prints of winners of the prestigious Edinburgh Award, which Rowling won in 2008.
George Heriot’s School
This turreted, castle-like 17th-century school is said to be the inspiration for Hogwarts.
Balmoral Hotel
Finally, the Balmoral Hotel, where she lived while she wrote the last book. The suite she inhabited has been renamed in her honour, and now bears an owl-shaped door knocker and includes the writing desk she used during her stay. And costs somewhere around £1,000 a night. A £15 cocktail doesn’t seem quite so bad now. | https://walksbetweenthecommons.com/2017/10/17/harry-potter-sights-in-scotland/ |
Joanne Rowling, CH, OBE, FRSL, FRCPE who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote ...
www.jkrowling.com
www.jkrowling.com/about
Joanne Rowling was born on 31st July 1965 at Yate General Hospital just outside Bristol, and grew up in Gloucestershire in England and in Chepstow, Gwent, ...
twitter.com/jk_rowling?lang=en
9387 tweets • 483 photos/videos • 13.5M followers. "From @nytimes | Where Brexit Hurts: The Nurses and Doctors Leaving London https://t.co/cN5O30QRxO"
www.biography.com/people/jk-rowling-40998
Nov 15, 2017 ... J.K. Rowling is the creator of the Harry Potter fantasy series, one of the most popular book and film franchises in history. Learn about her story ...
www.britannica.com/biography/J-K-Rowling
J.K. Rowling, in full Joanne Kathleen Rowling (born July 31, 1965, Yate, near Bristol, England), British author, creator of the popular and critically acclaimed ...
www.imdb.com/name/nm0746830
J.K. Rowling, Writer: Harry Potter and the Deathly Hallows: Part 2. Joanne Rowling was born in Yate, near Bristol, a few miles south of a town called Dursley ...
www.biographyonline.net/writers/j_k_rowling.html
J.K Rowling was born in Chipping Sodury, July 31st 1965. Her childhood was generally happy, although she does remember getting teased because of her ...
www.telegraph.co.uk/culture/books/booknews/9564894/JK-Rowling-10-facts-about-the-writer.html
Sep 27, 2012 ... You may be a Harry Potter fanatic, but how much do you know about the author JK Rowling herself? Here are 10 facts about the author of the ... | https://www.ask.com/web?qsrc=3048&o=41647999&oo=41647999&l=dir&gc=1&q=Joanne+Kathleen+Rowling |
This puzzle is a Nurikabe. It is a little tougher than my previous one, which was designed for newer solvers, and it has a particularly squiggly solution that I found pleasing enough to post - hence the name! I hope you enjoy.
Rules of a Nurikabe (copied from my previous puzzle):
This is a Nurikabe puzzle. The goal is to paint some cells black so that the resulting grid satisfies the rules of Nurikabe:
- Numbered cells are white. (Think of them as "islands.")
- White cells are divided into regions, all of which contain exactly one number. The number indicates how many white cells there are in that region.
- Regions of white cells cannot be adjacent to one another, but they can touch at a corner.
- Black cells must all be orthogonally connected. (Think of them as "oceans.")
- There are no groups of black "ocean" cells that form a 2×2 square anywhere in the grid.
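For solvers who like to machine-check a candidate answer, here is a rough sketch (my addition, not part of the original post) of the two global constraints, ocean connectivity and the no-2x2 rule, for a grid given as a list of equal-length strings in which '#' marks black cells. The island-numbering rules would need their own check.

```python
# Rough sketch: check the two global Nurikabe constraints on a candidate grid.
# grid is a list of equal-length strings; '#' marks black ("ocean") cells.

def ocean_is_connected(grid):
    """All black cells must form one orthogonally connected group."""
    rows, cols = len(grid), len(grid[0])
    black = {(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == '#'}
    if not black:
        return True
    stack, seen = [next(iter(black))], set()
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (nr, nc) in black and (nr, nc) not in seen:
                stack.append((nr, nc))
    return seen == black

def no_2x2_ocean(grid):
    """No 2x2 square may consist entirely of black cells."""
    rows, cols = len(grid), len(grid[0])
    return not any(
        grid[r][c] == grid[r][c + 1] == grid[r + 1][c] == grid[r + 1][c + 1] == '#'
        for r in range(rows - 1) for c in range(cols - 1)
    )
```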
Now, here is the puzzle:
And here is a puzz.link solver for your solving convenience. | https://puzzling.stackexchange.com/questions/106600/nurikabe-the-twisty-corridors |
Rules:
Paint some cells black to create a continuous wall.
Number(s) in a cell indicate the length(s) of the wall in its surrounding cells.
If there is more than one number in a cell, there must be at least one white cell between the wall parts.
The wall cannot form any 2x2 blocks anywhere.
Example with solution: | http://forum.ukpuzzles.org/viewtopic.php?f=10&t=219&p=2627&sid=093d4fd116ecff274edaec556690e2c0 |
This is a Statue Park puzzle (originally constructed for the 2019 24-Hour Puzzle Championship, as part of a Tarot card themed set -- no prizes for guessing which rank this puzzle was).
Rules of Statue Park:
- Shade some cells of the grid to form the given set of pieces. Pieces may be rotated or reflected.
- Pieces cannot be orthogonally adjacent (though they can touch at a corner).
- All unshaded cells must be (orthogonally) connected.
- Any cells with black circles must be shaded; any cells with white circles must be unshaded. | https://puzzling.stackexchange.com/questions/84104/statue-park-five |
TAPA is an acronym for Turkish Art Paint, and was developed by Serkan Yürekli for the 2007 Diogenes Internet Puzzle Solvers Club.
Puzzle and Goal
An unsolved puzzle consists of a rectangular grid of cells, some of which contain one or more numbered clues.
The goal is to color some cells to satisfy the clues.
Rules
The solved grid must satisfy the following conditions:
All black cells form a single orthogonally contiguous area.
Clues represent the number of black cells in adjacent cells. For instance, a clue of 2 1 means that there are two black cells together, then some white cells, then a single black cell, then some more white cells, in the eight (or five) adjacent cells.
Black cells cannot completely cover any 2x2 area.
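To make the clue rule concrete, here is a rough sketch (my addition, not part of the original page) of how a single clue might be checked for an interior cell, whose eight neighbouring cells form a closed ring; edge and corner cells would need a non-wrapping variant.

```python
# Rough sketch: check one Tapa clue for an interior cell.
# ring : shading of the eight neighbouring cells in clockwise order
#        (True = black, False = white); assumes an interior cell, so the
#        neighbours wrap around into a closed ring.
# clue : the numbers written in the cell, e.g. [2, 1].
from collections import Counter

def clue_satisfied(ring, clue):
    if not any(ring):                  # no black neighbours at all
        return clue == [0]
    if all(ring):                      # one unbroken run all the way around
        return sorted(clue) == [len(ring)]
    # rotate the ring so it starts on a white cell, then measure black runs
    start = ring.index(False)
    rotated = ring[start:] + ring[:start]
    runs, current = [], 0
    for shaded in rotated:
        if shaded:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return Counter(runs) == Counter(clue)

# Example: a "2 1" clue with a run of two black cells and a separate single black cell
print(clue_satisfied([True, True, False, False, True, False, False, False], [2, 1]))  # True
```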
Variants
Janko.at provides several variations, including using hexagons, having clues that represent mathematical differences, and so on.
Playable Online
BrainBashers
Rätsel, Puzzles und anderer Denksport (Brainteasers, Puzzles, and other Mindgames)
Original content ©Paul Hartzer unless otherwise noted. | http://curiouscheetah.com/Museum/Puzzle/Tapa
Masyu is a rarely seen logic puzzle from Japan that is great fun and is sure to have you scratching your head!
The instructions are as follows: the grid contains white and grey circles. You must create a single continuous loop that passes through every circled cell, and through some other cells as the rules require. The loop passes through the center of each cell it visits and must leave each cell by a different side from the one it entered. Cells containing circles have special significance:
White circles denote that the loop must go straight through that cell without turning; however, in the next and/or previous cell the loop must turn (all turns are 90 degrees). Grey circles, in contrast, denote that the loop must turn inside that cell, but the loop must travel through both the previous and next cells without turning.
4 Basic Types of Tissues: Epithelial, Connective, Muscle, Nervous
Tissue: a collection of cells that perform a common function.
What is Epithelial tissue? Cells that form a tight, continuous network.
Where is Epithelial tissue found? Surfaces of the body and organs; lining of body cavities.
Function of Epithelial tissue: protection, absorption, secretion.
Epithelial cell shapes (3): squamous, cuboidal, columnar.
Epithelial cell layering (3): simple (1 layer), stratified (multiple layers), pseudostratified (falsely stratified).
What is Connective tissue? Cells widely separated from each other in a matrix.
Function of Connective tissue: form & function, binding, support.
2 matrix regions: ground substance (liquid, gel, gum); fibers (non-elastic and elastic).
Fibrous Connective Tissue (2): loose (takes up space, wrapped around organs); dense (tendons and ligaments).
Supportive Connective Tissue (2): cartilage, bone.
Fluid Connective Tissue: blood.
3 blood cells: erythrocytes (red blood cells), leukocytes (white blood cells), platelets (form clots).
What is the most abundant tissue in most animals? Muscle tissue.
What is the muscle cell called? Muscle fiber.
Function of Muscle tissue: contraction.
Skeletal Muscle Tissue (3): striated, multiple nuclei, voluntary.
What is the most abundant tissue in chordates? Skeletal muscle tissue.
Cardiac Muscle Tissue: striated, 1-2 nuclei, involuntary.
Smooth Muscle Tissue (3): no striations, 1 nucleus, involuntary.
What does Smooth muscle tissue line? Organs and cavities (stomach, intestines, uterus, bladder).
Function of Nervous tissue: conducts impulses from one region to another.
2 basic Nervous Tissue cell types: neurons (main part), neuroglia (all other parts).
Importance of Neurons: structural and functional unit of the nervous system.
Importance of Neuroglia: insulate and support neurons.
Card Set: Histology. Description: Explanation of histology, the study of tissues. | https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=307502
The Game of Life is a cellular automaton model invented by John Conway. Each cell in the grid represents an organism in a simple ecosystem. The rules for survival and procreation are simple: a live cell with two or three live neighbors survives to the next generation; a live cell with fewer than two or more than three live neighbors dies (from isolation or overcrowding); and a dead cell with exactly three live neighbors comes to life.
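As a rough illustration of these rules (a sketch of my own, not the code behind this page), one generation step on a finite rectangular grid might look like this:

```python
# Minimal sketch of one Game of Life generation on a rectangular grid.
# grid is a list of lists of 0 (dead/white) and 1 (alive/black).
def next_generation(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count the live cells among the (up to) eight neighbors
            live = sum(
                grid[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c)
            )
            if grid[r][c] == 1:
                new[r][c] = 1 if live in (2, 3) else 0   # survival
            else:
                new[r][c] = 1 if live == 3 else 0        # birth
    return new
```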
Setting the initial state of the grid: Click on individual cells in the grid to toggle their values (from dead/white to alive/black). You can also randomize or clear the grid contents:
Running the simulation: You can click to see one generation at a time, or start the continuous evolution process: | http://www.dave-reed.com/life.html |
Upon homozygosis from a/α to a/a or α/α, Candida albicans must still switch from the 'white' to 'opaque' phenotype to mate. It was, therefore, surprising to discover that pheromone selectively upregulated mating-associated genes in mating-incompetent white cells without causing G1 arrest or shmoo formation. White cells, like opaque cells, possess pheromone receptors, although their distribution and redistribution upon pheromone treatment differ between the two cell types. In speculating about the possible role of the white cell pheromone response, it is hypothesized that in overlapping white a/a and α/α populations in nature, rare opaque cells, through the release of pheromone, signal majority white cells of opposite mating type to form a biofilm that facilitates mating. In support of this hypothesis, it is demonstrated that pheromone induces cohesiveness between white cells, minority opaque cells increase two-fold the thickness of majority white cell biofilms, and majority white cell biofilms facilitate minority opaque cell chemotropism. These results reveal a novel form of communication between switch phenotypes, analogous to the inductive events during embryogenesis in higher eukaryotes. | https://experts.unthsc.edu/en/publications/opaque-cells-signal-white-cells-to-form-biofilms-in-candida-albic |
Colonies on Sabouraud dextrose agar at 25°C are white, undulate, dull, and radially furrowed. Colony size is 10 mm after 10 days incubation.
Microscopic morphology
On cornmeal following 72 hours incubation at 25°C, it produces primarily true hyphae that disarticulate into rectangular arthroconidia measuring approximately 3-4 x 4-8 µm. Yeast cells are seen sparingly. Appressoria are absent.
Special notes
This isolate is urease positive and grows on media containing cycloheximide. The type strain was isolated from human skin. This species is most frequently associated with superficial mycoses, although it has been noted as an etiological agent in nosocomial fungemia. Trichosporon asteroides may be distinguished from T. asahii by its primarily filamentous morphology and from T. inkin by its inability to form appressoria. Medically relevant Trichosporon species may also be identified based on sequences of the internal transcribed spacer regions.
This puzzle is a new Sudoku variant that I stumbled into while experimenting with multiple grids. If further experimentation yields interesting and fun interactions, then you may see more of these. Hope you enjoy!
Rules:
This puzzle consists of two 4x4 Sudoku grids (thick bordered regions) and two external cells.
Place the digits 1 to 9 in cells so that a unique set of four digits appears in each of the two 4x4 Sudoku grids, and that set of digits appears in every row, column, and 2x2 box of its respective grid (eight digits total appear within the two 4x4 grids). The remaining digit appears in both external cells.
Here's a possible configuration:
Digits along arrows must sum to the digit in the corresponding circle.
Cells separated by a white dot must contain consecutive digits. All dots may or may not be given.
You can solve online at Penpa+
Solution code: Row 1 spanning left to right across both 4x4s (8 digits) followed by Row 4 (another 8 digits)
on 2 December 2021, 01:50, by filuta
Beautiful little puzzle, thanks for sharing it. | https://logic-masters.de/Raetselportal/Raetsel/zeigen.php?id=0008F6 |
Purpose: To isolate progenitor cells from rabbit corneal epithelial cells (CEC) in serum- and feeder layer-free culture conditions and to compare the self-renewal capacity of corneal epithelial progenitor cells obtained from the central and limbal regions of the cornea.
Methods: Tissue samples of New Zealand white rabbit corneas were dissected from the limbal and central regions to obtain CEC for sphere-forming culture, in which the cells formed spheres in serum-free medium containing growth factors. The number of primary and secondary sphere colonies and the size of the primary spheres were compared between the limbal and central regions. To promote differentiation, isolated sphere colonies were plated in dishes coated with poly-L-lysine (PLL)/laminin. The expression of epithelial, neural, and mesenchymal mRNAs was examined in the sphere colonies and their progeny by immunocytochemistry and/or the reverse transcription–polymerase chain reaction (RT–PCR). Adherent differentiated cells from the sphere colonies were also examined morphologically.
Results: Primary spheres were isolated from both the limbal and central regions of the cornea. The rate of primary sphere formation by CEC from the limbal region (55.6±10.6/10,000 cells) was significantly higher than that by cells from the central cornea (43.1±7.2/10,000 cells, p=0.0028), but there was no significant difference in the size of primary spheres derived from both regions. The self-renewal capacity of cells from the limbal region was higher than that of cells from the central region, as evidenced by the significantly higher secondary sphere formation rate of limbal cells (38.7±8.5/10,000 cells) in comparison with that for central cells (31.3±5.7/10,000 cells, p=0.013). The primary sphere colonies expressed bromodeoxyuridine (BrdU), a 63-kDa protein (p63), p75 neurotrophin receptor (p75NTR), and nestin, whereas their progeny expressed cytokeratin 3, cytokeratin 12, vimentin, α-smooth muscle actin, microtubule-associated protein 2, and neuron-specific enolase on immunocytochemical analysis. These markers were confirmed by RT–PCR.
Conclusions: Our findings indicate that limbal CEC contain more progenitor cells with a stronger self-renewal capacity than cells from the central region. These progenitor cells differentiate into the epithelial lineage, and can also produce neuronal protein.
The corneal epithelium (CE) is a nonkeratinized epithelium composed of multiple layers of cells with self-renewal capacity. Corneal epithelial stem cells are thought to be localized in the basal cell layer of the limbus and are believed to correspond to transient amplifying cells and terminally differentiated cells in the central CE [1-4]. Wound healing at the central CE occurs via centripetal migration and growth of stem cells from the periphery [5-9]. Over the past few years, a considerable number of workers have undertaken a molecular and histological analysis of corneal epithelial stem cells [10-16]. Kruse and Tseng demonstrated that limbal stem cells could be differentiated into transient amplifying cells. Du et al. reconstructed the rabbit corneal surface using cultured human limbal cells on amniotic membrane. Yoshida et al. demonstrated that neural crest-derived, multipotent stem cells exist in the adult cornea. However, little attention has been paid to attempting the selective isolation of stem cells or progenitor cells from the CE. Although corneal stem cells expressing neuronal markers were recently isolated from rodents, almost all of the differentiated cells derived from rodent corneal spheres show characteristics similar to those of fibroblast-like cells. Thus, the separation of true epithelial spheres that retain the potential to differentiate into the corneal epithelial lineage has not been achieved yet because of the difficulty in isolating corneal epithelial cells (CEC) from mouse or rat stroma without contamination by fibroblasts. Because rabbit and human corneas are far larger than those of rats or mice, these may be more suitable for the isolation and characterization of corneal epithelial stem or progenitor cells.
In this study, we achieved the first isolation of adult progenitor cells from rabbit CE in serum- and feeder layer-free culture conditions. We also investigated the difference in self-renewal capacity between CEC from the central and limbal regions and whether the isolated cells could differentiate into multiple lineages.
All animals were treated in accordance with the ARVO Statement on the Use of Animals in Ophthalmic and Vision Research and with the protocols approved by the Committee for Animal Research at the University of Tokyo Graduate School of Medicine. Rabbits were obtained from Saitama Experimental Animals Inc., Japan.
Primary cultures were established from 12-week-old male New Zealand white rabbits weighing an average of 2.4 kg. The basal medium for culture was DMEM/F12 (1:1; Sigma-Aldrich, St. Louis, MO) supplemented with 2% B27 (Invitrogen, San Diego, CA), 20 ng/ml of epidermal growth factor (EGF; Sigma-Aldrich), and 20 ng/ml of basic fibroblast growth factor (bFGF; Sigma-Aldrich), as described previously . Anesthesia was induced by intramuscular injection of ketamine hydrochloride (60 mg/kg; Sankyo, Tokyo, Japan) and xylazine hydrochloride (10 mg/kg; Bayer, Leverkusen, Germany). After disinfection and sterile draping of the orbital region, a surgical blade was used to carefully dissect small tissue pieces from the limbal region. These tissue pieces measured approximately 1 mm×2 mm and were 100 μm thick with intact epithelium. To compare the sphere formation rate between the limbal cornea and central cornea, a sample of epithelium was excised from the central 6.0-mm region of the cornea. Each tissue sample was washed three times with sterile saline, and then immersed for 5 min in saline containing 10% povidone-iodine (Meiji, Tokyo, Japan) and 50 µg/ml gentamicin (Sigma-Aldrich). After further rinsing with saline, the limbal epithelial tissues were cut into small pieces and incubated in basal medium containing 0.02% collagenase (Sigma-Aldrich) overnight at 37 °C. After washing with Ca2+- Mg2+-free phosphate-buffered saline (PBS; Sigma-Aldrich), the tissue pieces were incubated in 0.2% EDTA at 37 °C for 5 min and were dissociated into single cells by trituration with a fire-polished Pasteur pipette. After centrifugation at 800× g for 5 min, the cells were resuspended in basal medium. Isolated limbal epithelial cells were counted with a Coulter counter and their viability was confirmed to be >90% by trypan blue staining (Wako Pure Chemical Industries, Osaka, Japan).
Primary culture was done according to the neurosphere assay . Basal medium containing methylcellulose gel matrix (1.5%; Wako Pure Chemical Industries) was employed to prevent reaggregation of the cells, as described previously . Plating was done for floating culture at a density of 10.0 viable cells/μl (40,000 cells/well) in uncoated 60-mm culture dishes (BD Biosciences, San Jose, CA). Under these conditions, reaggregation did not occur and most (or all) of the sphere colonies were derived from single cells [24,25]. Culture was done in a humidified incubator with an atmosphere of 5% CO2.
After seven days, cell clusters (i.e., sphere colonies) were detected. The number of primary spheres was counted after 7 days of culture. To distinguish growing spheres from dying cell clusters, only clusters with a diameter >50 μm were counted. For passaging, primary spheres (day 7) were treated with 0.5% EDTA and dissociated into single cells, which were plated in 24-well culture plates at a density of 10.0 cells/μl. Then culture was done for a further 7 days in basal medium containing methylcellulose gel matrix. To measure the diameter of the sphere colonies, cultures were observed under an inverted phase-contrast microscope (Nikon ELWD 0.3, Tokyo, Japan) with a 10× objective lens, and the images were analyzed by employing the NIH image program developed at the USA National Institutes of Health and available on the Internet.
To assess the multipotentiality of isolated sphere colonies, individual primary spheres (day 7) were transferred to 13-mm glass coverslips coated with 50 μg/ml poly-L-lysine (PLL; Sigma-Aldrich) and 10 μg/ml fibronectin (BD Biosciences, Billerica, MA) in separate wells, as described previously. To promote differentiation, 1% fetal bovine serum (FBS) was added to the basal medium to form differentiation medium and culture was continued for another 7 days.
Expression of BrdU in the sphere colonies was determined by immunocytochemistry. The 7-day primary spheres were incubated overnight with 10 μM/ml BrdU (Sigma-Aldrich). After fixing in methanol (Wako Pure Chemical Industries) at 4 °C for 5 min and treatment with 2 M HCl (Wako Pure Chemical Industries) in PBS at room temperature for 60 min, the cells from sphere colonies were stained with FITC-conjugated anti-BrdU antibody at room temperature for 60 min in the dark. After washing with PBS, fluorescence was observed under a fluorescence microscope (model BH2-RFL-T3 and BX50; Olympus, Tokyo, Japan).
Immunocytochemical analysis was performed on 7-day spheres and their progeny after 7 days of adherent culture on glass coverslips. Cells were fixed with 4% paraformaldehyde (Wako Pure Chemical Industries) in PBS for 10 min. After washing in PBS, the cells were incubated for 30 min with 4% BSA (BSA; Sigma-Aldrich) in PBS containing 0.3% Triton X-100 (BSA/PBST; Rohm & Haas, Philadelphia, PA) to block nonspecific binding. Then the cells were incubated for 2 h at room temperature with specific primary antibodies diluted in BSA/PBST. The following antibodies were used: mouse anti-cytokeratin 3 monoclonal antibody (1:2,000; AE-5; Progen Biotechnik GMBH, Heidelberg, Germany), goat anti-cytokeratin 12 polyclonal antibody (1:2,000; L-20; Santa Cruz Biotech, Santa Cruz, CA), mouse monoclonal anti-p63 antibody (1:400; Imgenex, San Diego, CA), mouse monoclonal anti-nestin antibody (1:400; BD Biosciences), mouse monoclonal anti-nerve growth factor (NGF) receptor p75NTR antibody (1:400; DAKO, Kyoto, Japan), Cy3-conjugated mouse anti-α-smooth muscle actin (SMA) mAb (1:400; Sigma-Aldrich), mouse monoclonal anti-nestin antibody (1:400; BD Biosciences), mouse monoclonal anti- microtubule-associated protein 2 antibody (MAP2, 1:400; Chemicon, Temecula, CA), mouse monoclonal anti- neuron specific enolase antibody (NSE, 1:400; DAKO), and mouse monoclonal anti-BrdU/fluorescein antibody (1:100; Roche Diagnostics, Basel, Switzerland). As a negative control, mouse IgG (1:1,000, Sigma-Aldrich) was used instead of the primary antibody. After incubation with these primary antibodies, cells were incubated for 30 min with fluorescence-labeled anti-mouse IgG or anti-goat IgG (Alexa Fluor, 1:2,000; Molecular Probes, Eugene, OR) as the secondary antibody. After washing with PBS, fluorescence was observed under a fluorescence microscope.
Total RNA was isolated from primary sphere colonies, the adherent progeny of sphere colonies, and rabbit corneal epithelial cells before culture with a kit (Isogen; Nippon Gene, Tokyo, Japan) according to the manufacturer’s instructions, after which RT–PCR was done to investigate the expression of nestin, keratin-3, and glyceraldehyde-3-phosphate dehydrogenase (G3PDH) as a housekeeping gene. Then the isolated total RNA was treated with RNase-free DNase I (Stratagene, La Jolla, CA) for 30 min, and cDNA was formed by using SuperScript II (Invitrogen) as the reverse transcriptase. T12VN primer (at a concentration of 25 ng/µl) was used to make the 1st strand. As the negative control, RT–PCR was performed in the absence of reverse transcriptase. The PCR buffer contained 1.5 mM MgCl2 with 0.2 mM of each dNTP (Applied Biosystems, Foster City, CA), 0.2 mM of each primer, and 25 units/l of AmpliTaq Gold (Applied Biosystems). After an initial 9 min of denaturation at 95 °C, amplification was performed for 30 cycles (30 s at 94 °C, 30 s at 60 °C, and 45 s at 72 °C), followed by a final 7 min of elongation using a thermal cycler (iCycler; Bio-Rad Laboratories, Hercules, CA). The oligonucleotide primers for RT–PCR were based on the sequences of p63, nestin, keratin-3, keratin-12, and G3PDH. The primer pairs and product sizes are shown in Table 1. Products were separated on 1% agarose gel and then visualized by staining with ethidium bromide.
Student’s unpaired t-test was used to compare mean values. All analyses were done with the Stat View statistical software package (Abacus Concepts, Berkeley, CA) and p<0.05 was considered to indicate significance.
We adapted the neurosphere-forming assay that was originally devised to enrich neural stem cells and other progenitors [20,21,24,25,27-32] for isolation of adult stem cells from rabbit corneal limbal and central epithelium (Figure 1A). CEC were disaggregated into single cells and plated in uncoated wells with basal medium containing methylcellulose gel matrix to prevent reaggregation at a density of 10 viable cells/μl, as described elsewhere [24,25]. Under these conditions, sphere colonies are derived from proliferation and are not formed by reaggregation of dissociated cells [24,25]. Almost complete disaggregation into single cells was confirmed by counting the percentage of single cells, double cells, and triple cells, which demonstrated that more than 99% of the cells were single. Primary spheres were isolated from CEC derived from both the limbal and central regions. Photographs of representative spheres obtained from the limbal and central regions are shown in Figure 1B,C. When the number of sphere colonies obtained from the CEC was counted, there was a significantly greater number of spheres (55.6±10.6, mean±standard deviation) obtained from the limbal region than from the central region (43.1±7.2) per 10,000 plated cells (Figure 2A). There were no statistically significant differences in the size of primary spheres from the two regions after 3, 5, and 7 days (Figure 2B).
To evaluate the self-renewal capacity of CEC, primary spheres were passaged under the same culture conditions as CEC. Secondary spheres were generated from dissociated primary spheres obtained from the limbal or central CEC. Photographs of representative secondary spheres are shown in Figure 3A. The number of secondary spheres per 10,000 cells was significantly higher for primary spheres from the limbal region than from the central region (38.7±8.5 versus 31.3±5.7, respectively, p=0.013, unpaired t-test; Figure 3B).
Nuclear protein p63 was recently proposed as a progenitor cell marker that can be used to identify epidermal stem cells. Nestin is expressed by immature neural progenitor cells in multipotential sphere colonies derived from the brain, skin, inner ear, retina, corneal stroma, and endothelium [28-32]. To examine the potential of sphere colonies, primary spheres were immunostained for p63 and nestin as stem/progenitor cell markers. Additionally, p75 neurotrophin receptor (p75NTR), the nerve growth factor receptor, was used as a marker of epidermal basal progenitor cells. Most cells in the spheres were immunoreactive for p63, p75NTR, nestin, and BrdU (Figure 4A,B). The expression of p63 and nestin was also confirmed by the detection of mRNA (Figure 4C). Spheres derived from both the limbal and central regions showed the same patterns of immunostaining and mRNA expression.
To investigate whether the progeny of the spheres possessed the characteristics of mesenchymal or neural lineage cells, single spheres (day 10 of culture) were transferred onto poly L lysine/fibronectin-coated glass coverslips in 24-well plates, and then were cultured in differentiation medium containing 1% fetal bovine serum (FBS). After 7 days, many cells migrated out from the spheres. A small population of these migrating cells was p63- and nestin-positive, while most of the cells were positive for cytokeratin 3, a differentiated epithelial cell marker. The cells were also positive for MAP2 and NSE, indicating that the cells showed neuronal differentiation under these conditions (Figure 4D). Thus, culture of CEC generated p63- and nestin-positive progenitor cells that gave rise to epidermal and neuronal cells, suggesting the bipotency of these progenitor cells.
Considerable attention has been directed toward understanding the role of stem cells or progenitor cells in corneal development. Kruse and Tseng demonstrated the differentiation of isolated corneal stem cells to transient amplifying cells. Yoshida et al. [19,21] isolated corneal precursors from the mouse cornea by sphere-forming assay. They found that the phenotype of mouse keratocytes can be maintained in vitro for more than 12 passages by the serum-free sphere culturing technique and that neural crest-derived, multipotent stem cells exist in the adult cornea . However, there have been few reports about the harvesting of stem cells or progenitor cells from the cornea by sphere-forming culture, and the only research concerning the isolation of tissue-specific stem cells from the corneal limbal epithelium by this method has been performed in rodents . In our previous study, CEC isolated from the human corneal limbus did not form spheres in floating culture and became adherent to uncoated culture dishes . This suggests that rabbit CEC may be fundamentally different from human cells. In addition, we have previously reported on the isolation of corneal progenitor cells from the corneal stroma and endothelium by sphere-forming culture [28-32]. We found that the cells isolated from adult rabbit corneal epithelium were directed toward the ectodermal lineages and expressed markers for epidermal, and neural cells. To our knowledge, this is the first report about differentiation of progenitor cells isolated from the limbal region or the central cornea into epidermal and neural cells by floating culture.
In this study, we isolated progenitor cells from rabbit CEC by the sphere-forming culture method established by Toma et al. Spheres derived from rabbit CEC had a strong proliferative capacity as shown by BrdU staining and were positive for the epidermal progenitor cell marker p63, the epidermal basal progenitor cell marker p75NTR, and the neuron stem cell marker nestin. Self-renewing potential was indicated by the ability of the progeny of individual spheres to generate secondary spheres. Regarding the self-renewal capacity of these cells, we could not exclude the possibility that these progenitor cells did not have a uniform renewal capacity, with all of the growth coming from a rare subset of cells in the spheres. Additional studies will be required to determine whether or not these progenitor cells are adult stem cells.
It is also noteworthy that the spheres were positive for CEC markers (cytokeratins 3 and 12), and a neuronal marker (MAP2), indicating that epithelial and neuronal differentiation occurred in the spheres under serum-free culture conditions. Moreover, the progeny of the spheres also expressed epithelial and neuronal markers. The neurotrophin receptor p75NTR is most highly expressed in the progenitor cells of the corneal limbal epithelium . Grueterich et al. reported that p75NTR was localized in the suprabasal limbal epithelium and entire corneal epithelium, but was not detected in the limbal basal epithelium, suggesting that p75NTR can be considered a differentiated progenitor marker of corneal epithelium. These findings indicate that the spheres contained lineage-uncommitted bipotent progenitor cells.
Various studies have already been conducted on corneal epithelial stem cells by diverse methods [10-16,39]. A common limitation of studies on these cells has been the stem cell isolation procedure. Zhao et al. offered a model for characterizing the neural potential of corneal stem cells by the sphere-forming assay. However, they failed to show that CEC stem cells can differentiate into the authentic epithelial lineage. In this study, we proposed the first successful isolation procedure for progenitor cells that mainly generated CECs positive for cytokeratins 3 and 12. Thus, precursors obtained from the corneal epithelium may be more appropriate than multipotential stem cells for tissue regeneration or cell transplantation, because such precursors should efficiently differentiate to produce their tissue of origin.
We demonstrated that the number of spheres derived from the limbal epithelium was significantly higher than that obtained from the central corneal epithelium, while there was no significant difference in the size of spheres from the limbal and central areas. The expression of differentiation markers such as a specific corneal marker (cytokeratin 3) was markedly higher in spheres derived from the central cornea compared with spheres from the limbal cornea (data not shown), indicating that cells composing spheres derived from the central corneal epithelium were more prone to undergo differentiation. These findings imply that limbal epithelium contains more stem or progenitor cells than the central epithelium and that spheres derived from limbal CEC proliferate while maintaining a more immature state in comparison with CEC from the central cornea. Conversely, even the transient amplifying cells or progenitor cells of the rabbit central cornea may have a finite proliferative potential and may differentiate into the epithelial and neuronal lineages. The present findings obtained by sphere-forming culture are consistent with the concept that the corneal limbus is rich in stem cells [7-9].
In summary, we isolated adult progenitor cells from rabbit corneal epithelium by sphere-forming culture and compared the relative abundance and self-renewal capacity of CEC progenitor cells obtained from the central and limbal regions of the cornea. Our findings demonstrated that both limbal and central CEC contain a significant number of progenitor cells, although the limbal region of the rabbit cornea has a higher density of progenitor cells with a stronger self-renewal capacity than the central region. Our future studies will focus on the transplantation of autologous isolated sphere-forming progenitors. Proliferating sphere-forming cells derived from CEC may be useful for the treatment of corneal diseases associated with limbal stem cell deficiency. | http://www.molvis.org/molvis/v16/a185/
Keil is a Go-like game for two players, Black and White, played on the spaces of an initially empty hexhex board (or, equivalently, on the intersections of a hexagonal grid of triangles). It preserves crosscuts and ko thanks to the idea of coupling cells, which reduces the natural connectivity of the board. Otherwise, the rules are the same as in Go. In particular, the concept of domain performs the same functions as the concepts of group, liberty and territory in Go.
Definitions
Two adjacent cells are said to be coupled if there is a third cell adjacent to both such that the three cells together contain 0 to 3 stones of one color only.
A black domain is a set of mutually coupled cells containing one or more black stones, one or more empty cells and no white stones. Likewise, with colors reversed, for white domains.
Play
Black plays first, then turns alternate. On your turn, you must pass or place a stone of your color on an empty cell. After a placement, remove all enemy stones which don’t belong to any domains. After all removals, the stone you placed must be part of at least one domain, and the current board position must be different from the board positions at the end of all your previous turns. Otherwise, your placement was illegal.
The game ends when both players pass in succession. The player with the higher score in the final position wins. A player’s score is the number of cells belonging to their own domains only, plus a komi in the case of White.
Finished game. Black won by 0.5, with 67 points to White’s 66.5. Komi was 6.5 points. | https://littlegolem.net/jsp/forum/topic2.jsp?forum=20&topic=449 |
A new US study has offered insight as to why night shift workers are at an increased risk of developing certain types of cancer.
Conducted at Washington State University, and published in the Journal of Pineal Research, the study involved a controlled laboratory experiment that used healthy volunteers who were on simulated night shift or day shift schedules.
The researchers found that night shifts seem to disrupt the natural 24-hour rhythms in the activity of certain cancer-related genes.
This makes night shift workers more vulnerable to damage to their DNA while at the same time causing the body’s DNA repair mechanisms to be mistimed to deal with that damage.
“There has been mounting evidence that cancer is more prevalent in night shift workers, which led the World Health Organization’s International Agency for Research on Cancer to classify night shift work as a probable carcinogenic,” said co-corresponding author Shobhan Gaddameedhi, an associate professor at North Carolina State University.
“However, it has been unclear why night shift work elevates cancer risk, which our study sought to address.”
The study was a simulated shift-work experiment involving 14 young adults; half of them followed a night shift schedule for three days, and the other half a day shift schedule.
Analyses of white blood cells taken from blood samples showed that the rhythms of many of the cancer-related genes were different in the night shift condition compared to the day shift condition.
Notably, genes related to DNA repair that showed distinct rhythms in the day shift condition lost their rhythmicity in the night shift condition.
What’s more, after the researchers exposed isolated white blood cells to ionizing radiation at two different times of day, cells that were radiated in the evening showed increased DNA damage in the night shift condition as compared to the day shift condition.
This meant that white blood cells from night shift participants were more vulnerable to external damage from radiation, a known risk factor for DNA damage and cancer.
“Nightshift workers face considerable health disparities, ranging from increased risks of metabolic and cardiovascular disease to mental health disorders and cancer,” said co-senior author Hans Van Dongen, a professor, and director of the WSU Sleep and Performance Research Center.
“It is high time that we find diagnosis and treatment solutions for this underserved group of essential workers so that the medical community can address their unique health challenges.”
The researchers say the next step is to conduct the same experiment with real-world shift workers to determine whether the DNA damage builds up over time – increasing the risk further.
The work is expected to eventually be used to develop prevention strategies and drugs that could address the mistiming of DNA repair processes. | https://myosh.com/blog/2021/06/17/why-does-night-shift-work-increase-cancer-risk/
Leukocytes in a child's urine
From birth and throughout life, a person regularly undergoes laboratory testing; it is considered the fastest and most effective method of diagnosis. Both adults and children have their urine, blood, and stool tested. All of these samples can contain white blood cells (leukocytes), cells produced by the bone marrow. They perform an important protective function, especially in a child's body. Naturally, many parents worry when they hear that their child has leukocytes in the urine.
In this article, we will look at why leukocytes appear in children's urine and what diseases this may indicate.
The role of leukocytes in the children's body
As already mentioned, leukocytes have a protective function. In the presence of inflammatory processes or the development of a serious pathology, an excessive concentration of white blood cells can be detected in the urine. These blood cells are necessary for the normal functioning of the immune system and the fight against pathogens and bacteria.
Moreover, leukocytes protect the children's organism not only from internal pathogens, but also from pathogenic external agents.
The number of white blood cells in the urine can indicate the state of the child's immune defenses.
How to pass a urine test for white blood cell count?
Parents should be aware that failure to follow the collection rules can give an inaccurate picture. Exposure to heat (for example, a hot bath) or physical exercise on the eve of sample collection can cause the white blood cell level to rise sharply, which can lead the attending specialist astray.
Every mother should know that the urine analysis of the child should be taken according to certain rules:
- The child's genital area should be thoroughly washed, without using hygiene products.
- Urine can be collected with a special urine collection bag: fix it around the external genitals and wait for the child to urinate.
- Alternatively, the child can be laid on a diaper or a special oilcloth; collect the urine in a special container.
- To stimulate urination in babies under 1 year of age, gently stroke along the line of the spine; for older children, turning on a tap usually helps, as the sound of running water encourages urination.
- If the entire volume of urine was collected for analysis, the attending specialist should be told.
- The sample must be submitted to a proper laboratory.
What is the rate of white blood cells in the urine of a child?
After the test, parents eagerly await the results and often worry. The leukocyte content of a child's urine is determined by examining the sample under a microscope, and the count is reported for the examined (visible) portion of the sample.
Normally, tests should show the following white blood cell count:
- in the girl's urine - no more than 10 units;
- in the boy's urine - no more than 7 units.
A leukocyte count of zero is also considered normal. If the norm is exceeded, an inflammatory or infectious process in the child's body is suspected, and the child needs a full examination.
What are the evidence of increased white blood cells?
If the amount of leukocytes in the urine of a child is increased, this may indicate the development of various inflammatory and infectious diseases. These include:
- pyelonephritis, an infectious disease of the kidneys; in children the inflammatory process often begins in the bladder;
- inflammation of the mucous membranes of the genital organs;
- cystitis (most often girls are susceptible to this disease);
- infectious diseases of the urinary system;
- the presence of stones (in children these usually appear in the kidneys or bladder);
- manifestations of an allergic reaction;
- non-compliance with the rules of hygiene;
- intertrigo.
The main symptoms that indicate an increased concentration of leukocytes in the children's urine
Medical practice records thousands of cases of leukocyturia, that is, an increased number of leukocytes in a child's urine. Many parents wonder how to recognize this condition and when to seek help from a specialist.
If a child has a lot of white blood cells in the urine, then this may be accompanied by the following symptoms:
- difficulty urinating (the child may experience cramps, pain);
- frequent urge to urinate;
- uncharacteristic dark color;
- the presence of precipitation or impurities in the urine;
- urine turbidity;
- unpleasant (sometimes harsh) smell;
- possible temperature rise;
- periodic chill sensation.
Babies cannot tell you about these symptoms, but parents can notice some of them themselves, for example an unusual color or smell. In that case, a urine sample should immediately be taken to a medical institution for analysis.
How to cure leukocyturia in a child?
A course of treatment is prescribed only by a specialist, and only after a full examination of the child and identification of the reasons for the increased leukocyte level. As a rule, doctors prescribe antibiotics. After several days of treatment, the child must repeat the urine test.
Giving antibiotics to infants is highly undesirable. In such cases, specialists look for alternative treatments and may suggest anti-inflammatory drugs or traditional remedies.
In rare cases, an increase in leukocytes in a child's urine is directly connected with a congenital pathology of the urinary system in which urine stagnates in the urinary tract. This condition is almost asymptomatic and can be detected only by ultrasound. Once the diagnosis is confirmed, the doctor prescribes comprehensive treatment.
Leukocytes play a very important role in both children and adults. If even the slightest symptoms appear, have a urine test done. Remember: the sooner doctors identify a pathology or a developing disease, the sooner treatment can begin and recovery can follow. | https://womeninahomeoffice.com/health/in-the-urine-of-the-child-leukocytes.html
Short Title: Linking Predator Behavior and Resource Distributions
Start Date: 2019-09-01
End Date: 2023-08-31
Description/Abstract
This research project will use specially designed autonomous underwater vehicles (AUVs) to investigate interactions between Adelie and Gentoo penguins (the predators) and their primary food source, Antarctic krill (prey). While it has long been known that penguins feed on krill, details about how they search for food and target individual prey items are less well understood. Krill aggregate in large swarms, and the size or the depth of these swarms may influence the feeding behavior of penguins. Similarly, penguin feeding behaviors may differ based on characteristics of the environment, krill swarms, and the presence of other prey and predator species. This project will use specialized smart AUVs to simultaneously collect high-resolution observations of penguins, their prey, and environmental conditions. Data will shed light on strategies used by penguins to improve foraging success during the critical summer chick-rearing period. This will improve predictions of how penguin populations may respond to changing environmental conditions in the rapidly warming Western Antarctic Peninsula region. Greater understanding of how individual behaviors shape food web structure can also inform conservation and management efforts in other marine ecosystems. This project has a robust public education and outreach plan linked with the Birch and Monterey Bay Aquariums.
Previous studies have shown that sub-mesoscale variability (1-10 km) in Antarctic krill densities and structure impact the foraging behavior of air-breathing predators. However, there is little understanding of how krill aggregation characteristics are linked to abundance on fine spatial scales, how these patterns are influenced by the habitat, or how prey characteristics influences the foraging behavior of predators. These data gaps remain because it is extremely challenging to collect detailed data on predators and prey simultaneously at the scale of an individual krill patch and single foraging event. Building on previously successful efforts, this project will integrate echosounders into autonomous underwater vehicles (AUVs), so that oceanographic variables and multi-frequency acoustic scattering from both prey and penguins can be collected simultaneously. This will allow for quantification of the environment at the scale of individual foraging events made by penguins during the critical 50+ day chick-rearing period. Work will be centered near Palmer Station, where long-term studies have provided significant insight into predator and prey population trends. The new data to be collected by this project will test hypotheses about how penguin prey selection and foraging behaviors are influenced by physical and biological features of their ocean habitat at extremely fine scale. By addressing the dynamic relationship between individual penguins, their prey, and habitat at the scale of individual foraging events, this study will begin to reveal the important processes regulating resource availability and identify what makes this region a profitable foraging habitat and breeding location.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
| https://www.usap-dc.org/view/project/p0010347
Natural disturbance is a key determinant of ecosystem structure and function. Disturbances can create novel resource patches and modify habitat structure, thereby inducing spatial heterogeneity in the trade-off between food acquisition and predator avoidance by prey. We evaluated how canopy gap dynamics in eastern Canadian old-growth boreal forest alter the spatial distribution of food and cover for snowshoe hares (Lepus americanus) and how hares responded to these spatial patterns. We 1st compared browse availability within canopy gaps and the surrounding forest. We then examined fine-scale habitat selection, movement patterns, and foraging decisions by hares during winter. Perception of risk within canopy gaps was assessed using foraging experiments. We found that browse availability was 4 times higher within gaps than under forest cover. Although hares acquired most of their browse from gaps, their use of space during winter was influenced by a greater perception of predation risk within gaps. Hares selectively used areas of higher canopy closure suggesting that they restricted their use of gaps to foraging activities. Furthermore, hares biased their movements away from gaps or increased their speed of travel in areas of relatively low cover. Hares consumed experimental browse stems more intensively under forest cover than in canopy gaps, indicating a trade-off between food and safety. When foraging within canopy gaps, hares also were less likely to use both experimental and natural food patches located far away from cover. Our study demonstrates how gap dynamics in old-growth stands can structure the fine-scale spatial organization of a key prey species of the boreal forest by creating spatial heterogeneity in their landscapes of fear and food. Spatial variation in browse use in response to predation risk may in turn influence patterns of sapling growth and survival within canopy gaps. Gap dynamics therefore may be a fundamental process structuring predator–prey interactions in old-growth boreal forests.
Natural disturbances that vary in size, severity, and frequency play a fundamental role in structuring aquatic and terrestrial ecosystems by creating heterogeneity at multiple spatial and temporal scales (Pickett and White 1985; Sousa 1984). Habitat disturbance can affect animal distribution by altering the composition and structure of vegetation that provide food and cover, and many animals benefit from disturbances that create productive conditions associated with areas undergoing regeneration (Sousa 1984). Although infrequent broadscale disturbances such as forest fires and tropical storms can influence patterns of species occurrence at the landscape scale (Fisher and Wilkinson 2005; Willig et al. 2007), frequent microhabitat disturbances such as tree-fall gaps, blowouts, and wave action create fine-scale heterogeneity that also plays an important role in determining species distribution (Bouget and Duelli 2004; Cramer and Willig 2005; Paine and Levin 1981).
Habitat heterogeneity can have a profound influence on trophic interactions. For example, heterogeneity can promote the persistence of predator–prey populations by reducing predator foraging efficiency, by creating spatial refuges for prey, or by creating locally asynchronous population dynamics (Hastings 1977; Holt and Hassell 1993; Huffaker 1958). Recent investigations have shown that the functional response of both herbivores and carnivores to food availability can depend on the spatial distribution of these resources (Hobbs et al. 2003; Pitt and Ritchie 2002). Resource heterogeneity therefore can influence the functional link among trophic levels. For herbivores, variation in the spatial arrangement of plants can affect the rate at which they encounter food patches, thereby influencing their rate of energy intake and dietary choice (Fortin et al. 2002; Hobbs et al. 2003). To increase their intake rate in heterogeneous environments herbivores should concentrate on aggregations of food patches to reduce travel time between patches (Nonaka and Holme 2007), but the most profitable food patches often are also the most risky (Brown and Kotler 2004).
Fear of predation is a major force influencing movement and foraging decisions of prey (Lima and Dill 1990), and disturbances that increase food resources also can remove habitat structure that provides protection against predators. Given that predators may be more efficient at detecting and capturing prey in certain habitats (Rohner and Krebs 1996), prey often rely on habitat structure as a cue for risk (Brown and Kotler 2004). For example, they may trade off food for safety by foraging less intensively in open habitats or with increasing distance from protective cover (Hochman and Kotler 2007). During locomotion prey also may attempt to mitigate risk by moving in areas of greater cover (Fortin et al. 2005; Lagos et al. 1995), or by adjusting their speed to quickly traverse areas where they would be more conspicuous to predators (Vasquez et al. 2002). Slight variations in habitat structure can result in relatively large changes in the perception of risk (van der Merwe and Brown 2008). Therefore, microhabitat disturbances should shape prey distribution by continually changing the landscapes of food and fear (Laundré et al. 2001) around which prey species structure their home ranges.
Canopy gap dynamics in old-growth forests provide an interesting system in which to evaluate how fine-scale disturbances influence the distribution of resources, prey, and their interaction in the presence of predation risk. Old-growth boreal forests are characterized by high structural heterogeneity due to fine-scale canopy disturbances such as windthrow, insect outbreaks, disease, and tree senescence (McCarthy 2001). Because canopy closure in mature boreal forest generally limits the availability of food resources for browsing herbivores (Fisher and Wilkinson 2005), the establishment of early successional plants and the release of advanced regeneration within canopy gaps could create resource-rich patches within a matrix of low food availability. Gap disturbances also decrease the cover on which such herbivores rely for protection from predators. Predation risk should influence how far and intensively herbivores are willing to forage within canopy gaps. Foraging and movement behaviors of herbivores can reveal how balancing food acquisition and predator avoidance lead to their spatial distribution in forests structured by gap dynamics. | https://complete.bioone.org/journals/journal-of-mammalogy/volume-91/issue-3/09-MAMM-A-289.1/Fine-scale-disturbances-shape-space-use-patterns-of-a-boreal/10.1644/09-MAMM-A-289.1.full |
Novel stimuli are ubiquitous. Few studies have examined mixed-species group reactions to novelty, although the complex social relationships that exist can affect species’ behavior. Additionally, studies rarely consider possible changes in communication. However, for social species, changes in communication, including rates, latencies, or note-types within a call, could potentially be correlated with behavioral traits. As such, this research aimed to address whether vocal behavior is correlated with mixed-species’ reactions to novel objects. I first tested the effect of various novel stimuli on the foraging and calling behavior of Carolina chickadees, Poecile carolinensis, and tufted titmice, Baeolophus bicolor. Chickadees and titmice both had longer latencies to forage in the presence of novel stimuli. Chickadees also modified their vocal behavior, having shorter latencies to call and using more ‘D’ notes in their calls in the presence of novel stimuli compared to titmice. Chickadees and titmice reacted to the novel stimuli similarly to how I would expect them to react to a predator. Therefore, a second experiment was conducted directly comparing chickadee and titmouse reactions to a novel (Mega Bloks®) stimulus and a predator (Cooper’s hawk) stimulus. Chickadees and titmice had an intermediate latency to forage in the presence of a novel stimulus compared to control and predator contexts. Again, chickadees had shorter calling latencies across contexts compared to titmice. As a final experiment, using semi-naturalistic aviaries, I tested whether chickadee flock size and the presence or absence of titmice influenced reactions to novel and predator stimuli. Chickadees called more in smaller chickadee flocks compared to larger chickadee flocks, and also when titmice were absent compared to when they were present. These results were stronger in predator contexts compared to novel contexts. This suggests that conspecific flock size influences calling behavior, such that smaller flocks, which may experience higher stress levels and may be required to exhibit more anti-predatory behavior, call more than larger flocks. Taken together, this work has important implications for the complexity of social relationships in mixed-species groups, the social roles species play within the group, and how group size influences vocal behavior and reactions to various degrees of threat.
Recommended Citation
Browning, Sheri Ann, "Mixed-species Flock Members’ Reactions to Novel and Predator Stimuli. " PhD diss., University of Tennessee, 2015. | https://trace.tennessee.edu/utk_graddiss/3327/ |
(i.e., searching time).
These alterations improve______________ by making it difficult for the predator to find or consume the prey.
Example:
Euplotes ciliates sense cues from Lembadion predators. Euplotes respond by growing larger. This is energetically costly.
Lembadion can also respond by growing larger, but they are poorly suited to eat small prey.
prey fitness
Changes in morphology__________ and life history________ are often relatively slow responses.
(e.g., body shape)
(e.g., time to sexual maturity)
The most rapid phenotypic responses are typically
behavioral traits.
If spatial variation is not common,
a single phenotype will be favored.
Many times plasticity increases
individual fitness
A phenotype
that is well-suited to one environment may be poorly suited to other environments.
Environment
can turn certain genes on or off, which causes different phenotypes to develop.
The extent of the space affected by an event is usually related to an event's
duration in time.
Example:
The spatial dimensions of atmospheric and marine phenomena are related to their duration.
Variations in topography and geology are generated at a slower pace than aquatic and atmospheric variations.
Events can be rare, but have large effects
Some variation occurs in regular intervals
(e.g., tsunamis).
(e.g., forest fires).
In general, the more extreme events occur less frequently.
Weather:
the variation in temperature and precipitation over periods of hours or days.
Some temporal variation in the environment is _________; some is _________
predictable (e.g., alternation of day and night);
unpredictable (e.g., weather).
Climate:
the typical atmospheric conditions that occur through the year, measured over many years.
Large-scale spatial variation VS.
small spatial scale
Large - climate, land topography, and soil type
Small - plant structure and animal behavior
All phenotypes result from
genes interacting with environments.
the ability of a single genotype to produce multiple phenotypes.
Phenotypic plasticity.
allows organisms to achieve homeostasis if environmental conditions vary
Many types of traits are plastic, such as
behavior, growth, development, and reproduction.
When environmental variation results in phenotypic trade-offs, natural selection will favor the evolution of
phenotypic plasticity.
Example:
Gray tree frog tadpoles produce a phenotype that allows fast escape when predators are present and fast growth when predators are absent.
For an organism to alter its phenotype in an adaptive way, it must first be able to sense its
environmental conditions.
The best cues are those that offer the most reliable information about the environment.
Example:
How does an organism sense the level of food in its habitat?
Detecting the presence of competitors may offer a poor cue because number of competitors may not matter if resources are abundant.
A better cue may be the amount of food an individual can acquire each day.
Resource availability may determine
phenotypic response.
Many species alter their ________________, ______________, and ______________ in response to the presence of predators.
growth, body shape, and behavior
Plants have the ability to respond to the presence of
Example:
When Virginia pepperweed is eaten by herbivores, the plant develops leaf hairs (i.e., trichomes) and glucosinolate compounds that make the leaves difficult to consume. Induced leaves attract fewer herbivores.
herbivores.
Hermaphrodites:
Example:
The hermaphroditic common pond snail delays egg-laying if mates are unavailable. Self-fertilizing snails lay fewer eggs.
individuals that produce both male and female gametes; individuals are able to fertilize their eggs with their own sperm (i.e., they are self-compatible).
Many organisms can adjust their_____________ to maintain activity across different environmental temperatures.
Example:
Isozymes in goldfish allow cold-acclimated fish to swim fast at low temperatures and warm-acclimated fish to swim fast at high temperatures. Fish swim poorly at temperatures to which they are not acclimated.
physiology
Many animals respond to temperature by moving to________________
Example:
The desert iguana regulates its body temperature by basking on rocks, seeking shade, or burrowing in the ground.
microhabitats.
Microhabitats: locations within a habitat that differ in environmental conditions from the rest of the habitats.
Dormancy:
a condition in which organisms dramatically reduce their metabolic processes
Four types of dormancy are: diapause, hibernation, torpor, and aestivation.
Example:
Insects facing drought conditions enter diapause by dehydrating themselves. Some form an impermeable outer layer to prevent further desiccation.
Example:
During winter, chipmunks slow breathing and heart rates and reduce body temperature to close to 0°C.
Example:
The West Indian hummingbird loses much of the heat it generates to cold temperatures. To save energy, the bird enters torpor when it is resting at night.
Aestivation: the shutting down of metabolic processes during the summer in response to hot or dry conditions. Well-known examples include snails, desert tortoises, and crocodiles.
Some terrestrial animals survive cold weather on land by producing______________ that control the formation of ice crystals.
antifreeze chemicals
Many______________ species can freeze solid underground in a state that requires little metabolic activity.
amphibian
Foraging
is a plastic behavior because different feeding strategies represent different behavioral phenotypes.
Central place foraging:
foraging behavior in which acquired food is brought to a central place (e.g., a nest with young birds).
Risk-sensitive foraging:
Example:
Creek chub feed on tubifex worms, but locations with worms also contain more predators.
Research has found that past a certain threshold of resource abundance, creek chub will risk feeding in an area with predators.
Below that threshold, creek chub avoid areas with predators.
foraging behavior that is influenced by the presence of predators.
Correlation:
a statistical description of how one variable changes in relation to another variable.
Handling time:
the amount of time that a predator takes to consume a captured prey.
| https://quizlet.com/272473693/ch-4-ecology-flash-cards/
I'm interested in predator biology and nutritional ecology, both separately and combined. My lab has several current research areas. First, we are using nutritional ecology to study the mechanisms through which predators influence food webs and ecosystems. The goal of this work is to understand the dietary requirements of predators, how those requirements influence their foraging behavior, and then the consequences of this foraging behavior for other members of communities (e.g., prey, plants, nutrient cycling). A second research area in the lab is the study of how nutrition influences the behavior and life history of carnivores, including growth, aging, sexual selection and foraging. These studies of nutrition use a quantitative diet framework developed by my colleagues Prof. Steve Simpson and Prof. David Raubenheimer. In addition to quantifying the diet requirements of carnivores and how they interact with nutrient availability in the field, my lab is interested in the evolution of diet and how dietary requirements differ between carnivores and herbivores. Finally, research in the lab also examines applied questions related to nutrition and predators including: how diet influences invasion success of ants, how urbanization affects spiders, and how studies of diet regulation can be used to improve diets of endangered species in captivity.
Selected Publications
- Barnes, C.L., D. Hawlena, M. McCue and S.M. Wilder. 2019. Consequences of prey exoskeleton content for predator feeding and digestion. Oecologia 190: 1-9.
- Wilder, S.M., C.L. Barnes and D. Hawlena. 2019. Predicting predator nutrient intake from prey body contents. Frontiers in Ecology and Evolution 7: 42.
- Wilder, S.M. and P.D. Jeyasingh. 2016. Merging elemental and macronutrient approaches for a comprehensive study of energy and nutrient flows. Journal of Animal Ecology 85: 1427-1430.
- Wilder, S.M., D. Raubenheimer and S.J. Simpson. 2016. Moving beyond body condition indices as an estimate of fitness in ecological and evolutionary studies. Functional Ecology 30: 108-115.
- Simpson, S.J., F. Clissold, M. Lihoreau, F. Ponton, S.M. Wilder and D. Raubenheimer. 2015. Recent advances in the integrative nutrition of arthropods. Annual Review of Entomology 60: 293-311.
- Wilder, S.M., M. Norris, R.W. Lee, D. Raubenheimer and S.J. Simpson. 2013. Arthropod food webs become increasingly lipid-limited at higher trophic levels. Ecology Letters 16: 895-902.
- Wilder, S. M., D. A. Holway, A. V. Suarez, E. G. LeBrun and M. D. Eubanks. 2011. Intercontinental differences in resource use reveal the importance of mutualisms for fire ant invasions. Proceedings of the National Academy of Sciences USA 108: 20639-20644. | https://integrativebiology.okstate.edu/people/faculty/325-shawn-wilder |
Shallow areas of drawdown reservoirs are often devoid of adequate fish habitat due to degradation associated with unnatural and relatively invariable cycles of exposure and flooding. One method of enhancing fish habitat in these areas is to sow exposed shorelines with agricultural plants to provide structure once flooded. It remains unclear if some plants may be more suitable than others to provide effective fish habitat. To determine the fish habitat potential of various crops, we performed a replicated tank experiment evaluating the selection of agricultural plants by prey and predator fishes with and without the presence of the other. We submerged diverse treatments of potted plants in outdoor mesocosms stocked with prey and/or predator fish and monitored selection of plant species, stem density, and stem height over 0.5-h trials. Prey fish selected the densest vegetation, and selection was accentuated when a predator was present. Predators selected the second highest stem density and were more active when prey were present. Prey schooling was increased by predation risk, suggesting that cover was insufficient to outweigh the advantages of increased group size. Our data indicate that the perception of cover quality is reciprocally context dependent on predator–prey interactions for both predator and prey. Applications of the two most selected plant treatments in this study could enhance structural habitat for both predator and prey fishes in reservoirs, adding to their already reliable functionality as supplemental forage crops for terrestrial wildlife.
Introduction
Man-made reservoirs provide invaluable services to societies globally, but require severe environmental alterations. In particular, flood control reservoirs experience large annual disturbances as water level fluctuations seasonally expose and inundate an interval of shoreline elevations known as the regulated zone (Miranda 2017). Because these reservoirs were created to catch flood waters, their water cycle has been altered to temporally mismatch natural flood cycles, which suppresses establishment of aquatic and wetland plants in the regulated zone during the plant growing season (Beard 1973; Bayley 1995; Baldwin et al. 2001; Greet et al. 2013). Cover is a key habitat component for most fish species, but access to submerged structural cover is often inadequate in the regulated zone of drawdown reservoirs. Growth of upland vegetation during drawdowns temporarily provides submerged structure once flooded before it degrades, but drawdowns occur in late autumn and winter, mostly outside the growing season. The resulting barren mudflats are typically featureless, as they are smoothed out by wave action, erosion, and sedimentation (Miranda 2017). Littoral zones in lotic systems often possess heterogeneous habitat of varying architectural complexity that can serve as spawning habitat for adult fish and critical nursery habitat for juvenile fish (Winfield 2004). In drawdown reservoirs, the culmination of factors that homogenize the lake bottom can limit species diversity (Hatcher et al. 2019) and the productivity of fish assemblages by reducing reproductive success (Hassler 1970; Sutela et al. 2002; Zohary and Ostrovsky 2011) and juvenile recruitment (Heman et al. 1969; Ploskey 1983). The negative consequences of periodic drawdowns on aging reservoirs lead many natural resource managers to habitat enhancement to promote fish communities (Strange et al. 1982; Ratcliff et al. 2009; Norris et al. 2020).
One option for structural habitat enhancement during drawdowns is sowing shorelines with fast-growing cool season agricultural plants that can reach maturity after drawdown and before flooding (Norris et al. 2020). Upon inundation, plantings could provide complex habitat to local biota. Applications of cereal barley Hordeum vulgare, fescue Festuca sp., sudangrass Sorghum bicolor var. sudanese, sorghum S. bicolor sudangrass hybrid, and rye Secale cereale have been used for nutrient additions to the water column, turbidity reductions, and improved structural refuge for juvenile black bass Micropterus spp. (Hulsey 1959; Strange et al. 1982; Ratcliff et al. 2009). These studies reported increased juvenile densities in seeded areas compared with unseeded areas. However, the few plant species that were tested were not compared with one another in terms of benefit to fish. Additionally, the plantings of Strange et al. (1982) and Ratcliff et al. (2009) were intended to serve as juvenile fish refuge, but observations of adult piscine predator activity were limited. More information is needed on how fish use agricultural plants and if plants affect predator–prey interactions to understand what effects plantings may have on habitat enhancement.
Understanding how different agricultural plants affect predator–prey interactions could inform decisions about planting. Plants that conceal prey fish and exclude large predatory fishes could be used as refugia and potentially boost recruitment of structure-oriented juvenile fishes. Other plants that can be accessed by predators and prey and that facilitate moderate levels of predation could reduce predation enough to sustain prey fish populations while improving growth of predators by increased forage abundance (Sass et al. 2006). Plant-specific variations in growth and architecture could influence prey capture success and possibly explain differences in plant selection by fish. Perhaps one of the easiest quantified and widely used whole-plant architecture metrics is stem density, which alters the behavior and feeding activities of fishes (Savino and Stein 1982; Crowder and Cooper 1982; Gotceitas and Colgan 1987, 1990). Stem density describes plant structures on a horizontal axis, and when used in combination with a plant's height, most of the plant architecture is described. Studying the influence of stem density and height on plant selection by common reservoir fishes could provide insight on why certain plant species are selected and how they may influence predator–prey dynamics and may inform environmental management programs.
We performed a controlled mesocosm experiment to investigate how various agricultural plants mediate interactions between a predator, adult Largemouth Bass M. salmoides, and prey, juvenile Bluegills Lepomis macrochirus. We selected these species because they are ubiquitous in reservoirs throughout North America, Largemouth Bass prey on Bluegills, and Bluegills use vegetation to reduce predation risk (Savino and Stein 1982; Mittelbach 1981; Gotceitas and Colgan 1987; Werner and Hall 1988). Specifically, our objectives were to 1) observe behavioral patterns of predator and prey in the presence and absence of one another in submerged agricultural plants, and 2) determine the selection of agricultural plants by Largemouth Bass and Bluegills in the presence and absence of one another. We hypothesized that selection of plant characteristics would differ between fish species, and that both behavior and selection would differ for both species in the presence and absence of one another. We predicted that Bluegills would select the highest stem densities and heights because of greater concealment from predators offered by the denser foliage, and that Largemouth Bass would select intermediate stem densities and heights for ease of prey capture. We also predicted that prey and predator would select a wider variety of plant species and plant characteristics when separated, but selected varieties would be narrower when the two species were in the presence of one another.
Methods
Experimental plants
We selected seven cultivars (i.e., varieties of domesticated agricultural plant species) that could be planted during autumn when reservoir bottoms are typically exposed, that tolerate low-quality untreated soils of reservoir substrates, and that could be readily available for purchase in bulk from seed companies (Table 1). There were two clover species (family Fabaceae, legumes) and four grass species (family Poaceae). These species have diverse architectures, differing in compactness and height. Moreover, they require minimal seedbed preparation for planting, reach maturity prior to submergence from reservoir filling in the spring (Coppola et al. 2019), and have been used successfully in reservoir-regulated zones (Norris et al. 2020). Thus, the seven plant cultivars along with an unseeded treatment served as our eight plant treatments.
Plant cultivation
We filled plastic nursery pots (n = 42, diameter = 15 cm, height = 12 cm) with commercial topsoil and hand-sown seeds in the upper 1 cm of soil. We used cultivar germination and purity ratios to determine seeding rates to ensure that each pot received the correct percentage and quantity of pure live seed (Harper 2008). Plants grew outdoors beneath a hoop frame structure outfitted with herbivore-excluding netting. We applied fertilizer to pots every 2 weeks (6:2:1, N:P:K ratio). At the beginning of the mesocosm experiment, we recorded stem density (number of stems) and maximum height (cm) for all pots. We used six pots filled with soil from the same source as the plants as an unseeded treatment with no vegetation meant to resemble barren conditions of reservoir mudflats. We used commercial topsoil rather than soils from the regulated zone to standardize substrates and eliminate confounding factors (e.g., highly variable macronutrient and pH levels of reservoir substrates; see Norris et al. 2020) that could influence growth.
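The pure-live-seed (PLS) adjustment referred to above is a simple calculation: the germination and purity fractions are multiplied to give the proportion of a bulk seed lot that is pure live seed, and the bulk amount to sow is the target PLS amount divided by that proportion. A minimal R sketch with hypothetical values (the actual rates and units used in this study are not reproduced):

  germination  <- 0.90                    # proportion of seed expected to germinate (hypothetical)
  purity       <- 0.95                    # proportion of pure seed in the bulk lot (hypothetical)
  target_pls   <- 30                      # desired amount of pure live seed (units illustrative)
  pls_fraction <- germination * purity    # fraction of bulk seed that is pure live seed (0.855 here)
  bulk_rate    <- target_pls / pls_fraction
  bulk_rate                               # about 35.1 units of bulk seed to deliver 30 units of PLS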
Predator and prey fish
We used hatchery-raised Bluegills (n = 260; mean total length [TL] = 72 mm ± 6 mm SD) as the prey species and wild Largemouth Bass (n = 30; mean TL = 254 mm ± 33 mm SD) collected from a nearby pond as the predator species. All fish originated from sources within Oktibbeha County, Mississippi. We selected this size range of Bluegills because they depend on submerged vegetation as refuge in natural settings (Werner and Hall 1988). We selected the Largemouth Bass size range to mirror previous studies of predator–prey behavior mediated by habitat (Savino and Stein 1982; Gotceitas and Colgan 1987, 1990; McCartt et al. 1997). We housed prey and predator fish separately in two outdoor flow-through tanks (6,400 L) for 3 weeks before the first trial. Bluegills consumed commercially prepared pellet feed until satiation 3 d per week. We fed Largemouth Bass live, locally captured Bluegills at approximately 2% body weight 3 d per week (Barrows and Hardy 2001) and starved them between 24 and 72 h before trials. Water temperatures in the holding tanks averaged 22°C (±3°C SD) and dissolved oxygen (DO) averaged 7.4 ppm (±0.8 ppm SD); these values were not significantly different among tanks (two-sample t-tests, temperature [t = −0.4, df = 32, P = 0.66] and DO [t = 1.2, df = 28, P = 0.24]; Data S1, Supplemental Material).
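A minimal R sketch of the holding-tank comparison; the data frame, column names, and values below are simulated placeholders rather than the measurements in Data S1:

  set.seed(1)
  holding <- data.frame(
    tank = rep(c("prey", "predator"), each = 17),
    temp = rnorm(34, mean = 22, sd = 3),     # placeholder temperatures (degrees C)
    do   = rnorm(34, mean = 7.4, sd = 0.8)   # placeholder dissolved oxygen (ppm)
  )
  t.test(temp ~ tank, data = holding)        # Welch two-sample t-test by default
  t.test(do ~ tank, data = holding)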
Experimental arenas
We used three circular flow-through fiberglass tanks (2.44 m diameter, 1.37 m height) as the experimental arenas that were in the same location as the holding tanks. Circular wood platforms fit to the circumference of the tanks raised the bottom of the tanks by 21 cm to make the vegetation–soil interface even with the bottom of experimental arenas. Foam padding filled any gaps between the tank walls and platforms to prevent fish from entering space below the platforms. We cut two concentric rings of eight equally spaced holes into the platforms to hold the pots so that their tops were flush with the platform (Figure 1). The exterior and interior rings were 15 cm and 56 cm from the tank wall. We selected this arrangement to determine the effect of the tank wall on habitat selection of fish because other studies identified an affinity of Bluegills for tank walls (Savino and Stein 1982; Moody et al. 1983; Gotceitas and Colgan 1987, 1990; DeVries 1990). We fixed plastic rods suspending two cameras across the top of each arena, each recording one half of the arena during all experimental trials. Review of video footage following trials facilitated fish behavior observations. Cotton sheets stretched over the camera arrangements covered the arenas to reduce glare and disturbances to fish while still allowing sunlight to penetrate and illuminate the tanks. Water drawn from a well filled tanks and maintained a depth of 76 cm (55 cm above platform). We held the flow of new water into tanks and aeration constant when trials were not taking place. We prepared plants for inundation by covering exposed soil with a layer of gravel to prevent suspension and by fixing bricks to the bottom of pots to reduce buoyancy. These conditions differ from those experienced in a reservoir mudflat. However, the purpose of this experiment was not to simulate reservoir conditions, but to create a controlled, easily observed environment with the experimental plants all equally available to fish for selection.
Experimental design
We first randomly assigned two replications of each plant treatment to each of the three tanks, and they remained in their respective tanks for the duration of the study (N = 6 per cultivar). Plant submergence began 2 d before the first trial, and plants remained underwater for 10 d total until the last trial. Before each trial, we rearranged plant configurations by first randomly assigning one pot of each plant treatment to the interior and exterior ring of platform holes, then we randomized the plant treatment order within rings. We replicated pot and plant treatment arrangements for all three tanks for each set of trials. During each rearrangement, we gently moved plants underwater to not damage plants or interrupt submergence. If rearrangement caused suspension of soil, then we removed coarse material using a fine mesh skimmer and added new water to tanks until there were no visible traces of soil. To assess if submergence altered plant architecture, we recorded stem density and maximum height at the end of the 10-d submergence period by measuring both variables in each pot while plants were still submerged.
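The between-trial rearrangement amounts to two randomizations per tank: which pot of each treatment goes to which ring, and the order of treatments within each ring. A minimal R sketch under those assumptions (treatment labels are placeholders; the second ryegrass cultivar is not named in the text and is labeled generically here):

  treatments <- c("arrowleaf clover", "balansa clover", "Marshall ryegrass",
                  "annual ryegrass (2nd cultivar)", "oat", "wheat", "triticale", "unseeded")
  set.seed(7)                                    # reproducible example only
  # Randomize treatment order within each ring (positions 1-8 exterior, 9-16 interior).
  layout <- data.frame(position  = 1:16,
                       ring      = rep(c("exterior", "interior"), each = 8),
                       treatment = c(sample(treatments), sample(treatments)))
  layout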
Each arena housed one or two 0.5-h trials per day for 8 d. We initiated trials in the three arenas within 0.25 h of one another so that they were simultaneous. However, camera failure resulted in fewer tanks being observed at once during some sets of trials. We randomly assigned three ecological conditions, prey only (PY), predator only (PD), and prey and predator (PP), to the arenas for each set of trials. We began PY trials by releasing 10 Bluegills into the center of each tank for a 0.5-h acclimation period. Following acclimation, we counted fish in all regions of the tanks (see below) at 5-min intervals (Savino and Stein 1989) for 0.5 h, resulting in six observations. The PD trials followed the same protocol as the PY trials except using a single Largemouth Bass. We adapted the PP trials from Chick and McIvor (1997), and they consisted of releasing 10 Bluegills into the center of the tank and a Largemouth Bass in a permeable 62.5 L (60 cm × 41 cm × 34 cm) container placed in the tank for a 0.5-h acclimation. The container separated predator and prey while allowing both to acclimate to tank conditions. We released the predator following acclimation and counted prey and predator locations at 5-min intervals for 0.5 h. We collected fish following trials by using a backpack electrofishing unit to immobilize fish and removed them using a dipnet to minimally disturb plants. We measured TL of fish before each trial and did not use the same fish in more than one trial. In total, our sample size was 12 PY, 15 PD, and 9 PP trials.
Fish behavior
We counted the number of fish demonstrating different types of behaviors during each of the six sampling intervals previously described. For Largemouth Bass, behaviors were searching (moving), following (orienting toward prey), or inactive (motionless; Savino and Stein 1982). Behaviors of Bluegills were schooled (aggregating together while moving), shoaled (aggregating together but stationary), or dispersed (moving or stationary but not within the immediate vicinity of a conspecific; Pitcher 1986).
Selection predictors
For each of the six observations per trial, we used four predictors (i.e., tank region, plant treatment, plant stem density, and plant maximum height) to spatially divide arenas into different zones based on different features (Figure 1; Table 2). All predictors tested were categorical, and their classes each described independent zones of the tank. We summed observations that fell within zones. To describe the effect of tank wall on selection, the tank region variable delineated concentric zones that differed in their distance from the walls. The plant treatment variable described the cultivar each fish observation was near, and we described all other locations within the tank not adjacent to plants like the tank region variable. The stem density and maximum height variables each described the density of stems and the maximum height of plants that observations were near, and we grouped all other locations within the tank into a no vegetation category. We discretized these two numeric growth metrics into ordinal predictors by binning pots into four categories each, according to the magnitude of their maximum height and stem density, with a fifth no vegetation category (Table 2).
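A minimal R sketch of that binning step; the stem-density break points and the lowest height bins are assumptions (only the 16–20 cm and 21–35 cm height categories are named in the Results), and the example pots are placeholders:

  pots <- data.frame(stems = c(2, 40, 120, 300), height_cm = c(8, 14, 18, 30))  # placeholder pots
  pots$stem_cat <- cut(pots$stems, breaks = c(-Inf, 10, 50, 150, Inf),
                       labels = c("low", "medium", "high", "very high"))
  pots$height_cat <- cut(pots$height_cm, breaks = c(0, 10, 15, 20, 35),
                         labels = c("1-10 cm", "11-15 cm", "16-20 cm", "21-35 cm"))
  # Observations away from any pot would be assigned a fifth "no vegetation" level
  # for both predictors when merged with the fish counts.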
Statistical analyses
Plant architecture before and after submergence.
When submerged, the test plants reportedly may continue to grow for a few days and eventually decay at different rates (Coppola et al. 2019). Because of this, it was important to determine the extent to which submergence affected plants. To monitor the effect of submergence on plants, we compared their heights and stem densities among treatments before and after submergence using repeated measures analysis of variance models followed by Tukey's least significant difference tests for pairwise comparisons. The categorical independent variables were cultivar, time (i.e., before or after submergence), and their interaction, and we included a within-subject error term that specified the individual pot that we repeatedly measured. We log10 transformed stem densities to satisfy the homogeneity of variance assumption (we added one to stem densities because two arrowleaf clover pots had zero stems at the end of the experiment; see Discussion). To reduce type-1 error rates of the pairwise comparisons, we adjusted α to 0.007 using a Bonferroni correction.
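A minimal R sketch of the repeated-measures model for stem density, using simulated placeholder data (column names and cultivar labels are assumptions, not those of Data S2):

  set.seed(42)
  pots <- data.frame(pot_id   = factor(1:42),
                     cultivar = factor(rep(paste0("cv", 1:7), each = 6)))
  d <- expand.grid(pot_id = pots$pot_id,
                   time   = factor(c("pre", "post"), levels = c("pre", "post")))
  d$cultivar  <- pots$cultivar[match(d$pot_id, pots$pot_id)]
  d$stems     <- rpois(nrow(d), lambda = 30)      # placeholder stem counts
  d$log_stems <- log10(d$stems + 1)               # +1 because two pots ended with zero stems

  fit <- aov(log_stems ~ cultivar * time + Error(pot_id), data = d)
  summary(fit)   # cultivar tested on 6,35 df; time and cultivar:time in the within-pot stratum
  # Pairwise treatment comparisons would then be judged against the
  # Bonferroni-adjusted alpha of 0.05/7, roughly 0.007.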
Behavior of fish.
For each trial, we summed frequencies of each behavior to have one observation for each behavior per trial. We compared the behavior of fish in different ecological conditions (i.e., PD, PY, and PP) using generalized linear models (GLMs). We used counts of fish as the interval response variable and behavior as a grouping categorical independent variable along with ecological condition and their interaction. We used Poisson distributions when dispersion parameters were <2; otherwise, we used quasi-Poisson models and defined the mean-variance relationships as the variance being equal to the product of the mean and dispersion parameters (Zuur et al. 2009). We did not include tank as a random or fixed effect because preliminary analysis indicated that tank did not have a significant effect. There was no predation or mortality in the trials.
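A minimal R sketch of one of these behavior models, with simulated placeholder counts (variable names are assumptions):

  set.seed(3)
  beh <- data.frame(
    count     = rpois(54, lambda = 4),     # placeholder: fish counts summed per trial
    behavior  = factor(rep(c("schooled", "shoaled", "dispersed"), times = 18)),
    condition = factor(rep(c("PY", "PP"), each = 27))
  )
  fit_pois <- glm(count ~ behavior * condition, family = poisson, data = beh)
  # Estimate the dispersion parameter; values of 2 or more would trigger a quasi-Poisson refit.
  disp <- sum(residuals(fit_pois, type = "pearson")^2) / fit_pois$df.residual
  fit  <- if (disp < 2) fit_pois else update(fit_pois, family = quasipoisson)
  anova(fit, test = if (disp < 2) "Chisq" else "F")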
Use, availability, and selection.
We tested for selection or avoidance of the predictor categories by comparing observed to expected proportions (Neu et al. 1974). We integrated counts of fish within trials by summing counts in each predictor category, resulting in one observation for each predictor category per trial. We then summed observations across trials for each predictor category and ecological condition. We determined proportional use by dividing the integrated fish counts of the predictor categories by the total fish counts of the predictor in each ecological condition. We defined expected use of each predictor category as the two-dimensional area of each category available to fish divided by the total area of all categories available (Figure 1). We used chi-square goodness-of-fit tests to examine the null hypothesis that use and availability proportions did not differ. We tested species separately in the presence and absence of one another. Due to low Largemouth Bass use values that resulted from total counts of a single fish in tanks (as opposed to 10 fish in Bluegill trials), we simulated Largemouth Bass P values via permutational tests with fixed margins (Patefield 1981).
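A minimal R sketch of the use-versus-availability test for one species in one ecological condition; the counts and area proportions are placeholders, and note that chisq.test's built-in simulation resamples from a multinomial rather than using the fixed-margins algorithm cited above:

  use_counts   <- c(interior = 120, exterior = 300, surface = 40, plants = 260)  # hypothetical counts
  availability <- c(0.25, 0.35, 0.15, 0.25)                                      # proportions of arena area
  chisq.test(use_counts, p = availability)
  # When expected counts are small (e.g., single-predator trials), a simulated
  # P value can be requested instead of the asymptotic one:
  chisq.test(use_counts, p = availability, simulate.p.value = TRUE, B = 9999)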
When chi-square tests were significant (P < 0.05), we estimated simultaneous confidence intervals of the true proportion of use for each category. We obtained the 95% simultaneous confidence intervals for multinomial proportions using Goodman's (1965) estimation, which produces shorter intervals with lower error rates than other methods (Cherry 1996). When expected values were not large enough (<5), we estimated the confidence intervals based on truncated Poisson distributions, as described by Sison and Glaz (1995). We used the DescTools package (Signorell 2020) of R statistical software version 4.0.3 to estimate all confidence intervals (R Core Team 2020). We calculated selection or avoidance by dividing the estimates of use and their 95% confidence intervals by their respective proportions of availability. If the resulting confidence intervals overlapped one, then use was random; if they were >1, then selection occurred; and if they were <1, then avoidance occurred. We used the estimated confidence intervals of use to compare the magnitude of selection or avoidance among predictor categories.
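A minimal R sketch of the selection-ratio step, built around the DescTools function named in the text (the input counts and availabilities are placeholders):

  library(DescTools)
  use_counts   <- c(interior = 120, exterior = 300, surface = 40, plants = 260)  # hypothetical
  availability <- c(0.25, 0.35, 0.15, 0.25)
  ci <- MultinomCI(use_counts, conf.level = 0.95, method = "goodman")  # columns: est, lwr.ci, upr.ci
  # Each row is divided by its availability (valid because the number of rows
  # equals the length of the availability vector).
  selection <- ci / availability
  selection
  # Intervals entirely above 1 indicate selection, entirely below 1 indicate
  # avoidance, and intervals spanning 1 indicate use in proportion to availability.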
Results
Vegetative structures of plant treatments available to fish differed in their height (F = 21; df = 6,35; P < 0.01; Figure 2; Data S2, Supplemental Material). Before submergence, arrowleaf clover was significantly shorter than all treatments except balansa clover and wheat. The ryegrasses and oat were all similar heights and were taller than balansa clover and wheat before submergence, but not significantly. Triticale was significantly taller than all plant treatments. The effect of time (10-d submergence) on the maximum height of plant treatments was not significant (F = 2; df = 1,35; P = 0.2); however, time significantly interacted with plant treatments (F = 12; df = 6,35; P < 0.01), indicating opposing trends in height among treatments (Figure 2). The height of the ryegrasses increased following submergence (Figure 2), while the height of all other treatments decreased. Pairwise comparisons of each treatment's height before and after submergence indicated that triticale was the only treatment that significantly differed from its presubmergence height (Figure 2).
The stem densities of plants differed among cultivars (F = 55; df = 6,35; P < 0.01; Figure 2; Data S2, Supplemental Material). Before submergence, oat, triticale, and wheat had significantly lower stem densities than all plant treatments except arrowleaf clover. The ryegrasses had similar stem densities and were higher than arrowleaf clover before submergence; however, Marshall ryegrass stem density was not significantly higher than arrowleaf clover. Balansa clover stem density was significantly higher than that of all other cultivars and was on average 5.4 times greater than the cultivar with the second highest stem density, Marshall ryegrass. Time significantly affected stem densities of treatments (F = 79; df = 1,35; P < 0.01) and significantly interacted with plant treatments (F = 14; df = 6,35; P < 0.01). This can be explained by the clovers significantly decreasing in stem densities while all other treatments remained relatively unchanged (Figure 2). Two arrowleaf clover pots in separate tanks completely degraded and did not have vegetation by the end of the experiment, while the other four arrowleaf clover pots had, on average, 1.5 stems. All other treatments had vegetation at the end of the 10-d submergence period.
Observed predator and prey behaviors depended on the presence or absence of one another. Proportional distribution of Bluegill behaviors differed (quasi-Poisson GLM, dispersion parameter = 3.5; F = 11; df = 2,60; P < 0.01) and was influenced by Largemouth Bass presence (F = 67; df = 2,57; P < 0.01; Figure 3; Data S3, Supplemental Material). Without a predator, Bluegills primarily dispersed (58% total observations), and schooling was the least common behavior (18%). When a predator was present, few Bluegills dispersed (9% total observations), and schooling was the dominant behavior (71%). Similarly, proportional distribution of Largemouth Bass behaviors differed (Poisson GLM, X2 = 20; df = 2,60; P < 0.01) and was significantly influenced by Bluegill presence (X2 = 6; df = 1,56; P < 0.01; Figure 3). When alone, Largemouth Bass were primarily inactive (69% total observations); when with Bluegills, Largemouth Bass spent a similar amount of time searching (46%) as inactive (41%). There were few observations of Largemouth Bass following prey.
Fish selected for tank region, plant treatment, plant stem density, and maximum height, as indicated by statistically significant chi-square tests (Table 3; Data S3, Supplemental Material). Inspection of 95% confidence intervals of selection revealed that Bluegills selected the interior region of the tank when a predator was not present but transitioned to selecting the exterior region when a predator was present (Figure 4). Both species of fish avoided the surface regardless of ecological condition; however, avoidance was significantly less evident (nonoverlapping 95% confidence intervals) for Bluegills when in the presence of Largemouth Bass. Selection of tank regions by Largemouth Bass (Figure 4) remained unchanged with and without prey, where Largemouth Bass selected the exterior of the tank and avoided the interior and surface.
Selection of most plant treatments by both species changed little between ecological conditions. Plants selected by Bluegills were triticale, balansa clover, and both cultivars of ryegrass in the absence of a predator; however, none differed from the unseeded treatment (Figure 4; Data S3, Supplemental Material). Fish outside of plant boundaries selected for the interior and avoided the exterior and surface. When Bluegills and Largemouth Bass occupied the same tank, Bluegills selected for balansa clover and Marshall ryegrass and avoided the interior and surface. Selection of balansa clover was significantly greater than selection of the unseeded treatment (nonoverlapping 95% confidence intervals). When alone, Largemouth Bass selected balansa clover and both cultivars of ryegrass, all of which were very similar and not significantly different than the unseeded treatment (Figure 4). When Bluegills were present, Largemouth Bass selected areas outside of plant treatments in the exterior region but also selected Marshall ryegrass to a lesser extent. In all conditions, Largemouth Bass avoided the surface, and all treatments did not differ from the unseeded treatment. Although use of most plant treatments did not differ from unseeded pots, the 95% confidence intervals of proportions of use of unseeded pots in all situations overlapped with proportions of availability, meaning that all use of unseeded pots could be due to randomness. Because of this, comparisons of unseeded pots to other treatments that were selected are less meaningful.
Changes in selection of stem categories between ecological conditions were apparent for Bluegills but not for Largemouth Bass (Figure 4; Data S2, Data S3, Supplemental Material). Both species avoided sections with no vegetation in all ecological conditions. Prey selected for all stem densities greater than zero when alone and selected the low and very high categories most often. When combined with a predator, prey selected the very high category more often than all other stem density categories (nonoverlapping 95% confidence intervals) and did not select the low category. When prey were absent, Largemouth Bass selected the two highest stem categories (Figure 4). However, when prey were present, Largemouth Bass only selected the high density.
Selection of plant height differed between species and changed little between ecological conditions (Figure 4; Data S2, Data S3, Supplemental Material). Prey selected all plant height categories greater than zero with similar intensity. Predator selection generally increased with plant height when prey were absent, but they did not select the tallest category (21–35 cm). When the species were combined, Largemouth Bass selected the second tallest category (16–20 cm).
Discussion
Behavioral responses of predator and prey suggest that prey used vegetation for refuge when confronted with a predator but were not completely concealed by the vegetation. Previous studies showed that if enough structural refuge is present, then quantity of Bluegills demonstrating schooling behavior will either be unaffected or reduced by the presence of Largemouth Bass (Savino and Stein 1982, 1989). Similarly, another prey species known to congregate, the Eurasian minnow Phoxinus phoxinus, chose cover over grouping when few conspecifics and a predator were present (Magurran and Pitcher 1983, 1987). These studies suggest that if a threshold of suitable concealment by structural cover is not met, then prey fishes may use other antipredator behaviors. Savino and Stein (1982) reported that predatory activity of Largemouth Bass (i.e., searching, following, and attacking prey) was significantly reduced with increasing stem density of vegetation analogs. The low predator activity in our study may have resulted from the cover provided by vegetation to prey. Without a control ecological condition (i.e., predator and prey combined without vegetation) the true effect of plants on predation pressure is unknown; however, predator activity and visual orientation were not reduced enough to preclude prey antipredator behavior.
Affinity for regions within the tanks was likely driven by prey-pursuing and by predator-evading behaviors. Change in selection from interior to exterior of tanks by Bluegills when combined with a predator was most likely due to Bluegills maximizing distance away from the predator and using tank walls as refuge space. Edges of tanks can serve as refuge for cover-seeking Bluegills (Moody et al. 1983) that may choose the tank edge furthest from predators over other forms of available structure (Savino and Stein 1982; DeVries 1990; Gotceitas and Colgan 1990). Similarly, schooling at the surface may be selected over other forms of cover (Gotceitas and Colgan 1987). The increase in use of exterior walls and water surface by Largemouth Bass was likely driven by searching for prey in areas where they were previously detected.
The reciprocal changes in selection of balansa clover by both species could indicate its potential for nursery cover that excludes predators. A potential explanation for why Bluegills favored balansa clover and Largemouth Bass did not is that clovers grow short, broad leaves that form dense crown rosettes (Hall 2008; Harper 2008) compared with grasses, which generally form long leaves that grow vertically parallel to stems and allow for more light to penetrate through the canopy (Gibson et al. 2008; Mohammad et al. 2011). The leaf morphology of balansa clover along with its high stem density could have provided more concealment than other treatments. However, it is unclear whether Bluegills selected balansa clover based on leaf morphology, stem density, or both because no other cultivar with a different leaf morphology grew similar stem densities. Moreover, Bluegills also selected Marshall ryegrass to a similar extent as balansa clover, with the two differing in morphology and significantly differing in stem densities, although Marshall ryegrass was among the top three highest cultivar stem densities. For plant height, some categories possessed both selected and nonselected plant treatment cultivars (e.g., arrowleaf and balansa clovers). This discrepancy suggests that plant height did not influence selection by prey; however, the range of plant heights may have been too small for a discernable relationship. It is likely that Bluegills selected crops based on stem densities rather than plant height or other unaccounted for morphological characteristics (e.g., leaf morphology), although this experiment was not designed to determine selection of fine-scale characteristics unique to each plant treatment. The general pattern of selection of high stem densities by prey in this experiment was similar to other experimental and observational studies where small-bodied prey species select for denser and more complex habitats as refuge (Gotceitas and Colgan 1987, 1990; Hayse and Wissing 1996; Yeager and Hovel 2017).
Although plant height and stem density were affected by the 10-d submersion, plant selection by fish likely was not affected during this time. This is because plants did not change much in terms of how they generally compared with one another from greatest to least height and stem density. The clover species appeared to be the most affected by submergence, which agrees with other studies that identified that clovers degrade rapidly following submergence (Coppola et al. 2019). Regardless, balansa clover demonstrated the greatest change in stem density; however, it still possessed, on average, the most stems compared with all other treatments following submergence and was one of the highest selected cultivars by both species of fish. The stem density of arrowleaf clover also demonstrated significant deterioration following submergence; however, it had a low stem density initially, and so time probably did not affect selection. Although the height of triticale was significantly lower after submergence, it was still among the top three tallest cultivars following submergence, the other two being the ryegrasses that continued to grow throughout the experiment. The postsubmergence growth of the annual ryegrasses has been observed in other studies and demonstrates a tolerance of brief flooding events (Coppola et al. 2019); however, vegetative structures are more vulnerable to degradation following the rapid growth (Sauter and Kende 1992).
The patterns of plant use by fish in this mesocosm study partially agree with other studies documenting reservoir mudflats enhanced with agricultural plants. Like Bluegills' high selection of annual ryegrass in this study, plantings of cool season annual grasses seeded by Strange et al. (1982) were marked by higher abundances of juvenile fishes than unseeded areas. Additionally, plantings of barley H. vulgare, a cereal grain like those in our study (i.e., oat, wheat, and triticale), had significantly higher densities of age-0 black bass Micropterus spp. than unplanted shorelines (Ratcliff et al. 2009). Use of cereal grains in our study was low, most likely due to the presence of other treatments demonstrating more favorable habitat quality, such as balansa clover and Marshall ryegrass (discussed further below). Of the three cereal grains tested, plantings of triticale could provide favorable habitat for juvenile fish. This is because Bluegills selected triticale (predator absent), and triticale demonstrated an ability to retain maximum height and complexity for longer than both balansa clover and Marshall ryegrass (Coppola et al. 2019).
The increased use of minimally vegetated areas of tanks (i.e., no vegetation and low stem densities; Figure 4) by predators when prey were present was most likely driven by attraction toward areas where prey were detected. Largemouth Bass are visually oriented predators that follow their prey (Savino and Stein 1982, 1989; Anderson 1984). Foraging efficiency of Largemouth Bass decreases in dense vegetation (Savino and Stein 1982) and can induce changes in diet to less mobile prey (Anderson 1984). Searching Largemouth Bass that are responding visually to prey will likely use areas where prey are visible more than where prey are not visible. Additionally, higher stem densities reduce the sizes of gaps between stems that can exclude large fishes (Johnson et al. 1988). However, this likely did not influence our observations because areas next to pots were included so fish of all sizes could access all stem densities.
Management implications
The results of this mesocosm study suggest that cool season agricultural plants may differ in their value as structural habitat to fish. Plants that grew high stem densities, such as balansa clover, may provide habitat for refuge-seeking prey fishes. Thus, plantings of balansa clover could be used in situations where enhanced nursery habitat is the primary management objective. Annual ryegrass cultivars, especially Marshall, could potentially be used for enhancements that target the entire fish community or larger-bodied adult fishes. However, selection of plants in reservoir environments may be different because submerged plantings could benefit other trophic levels, similar to how macrophytes do in other lotic systems. For example, benthic and epiphytic invertebrate communities increase in abundance and species richness with increasing periphyton, detritus, refuge, and living space afforded by submerged vegetation (Cyr and Downing 1988; Schramm and Jirka 1989; Jeffries 1993). Marshall ryegrass outperformed all crops and natural vegetation when planted on reservoir mudflats (Norris et al. 2020) and persisted once submerged for up to 3 months (Coppola et al. 2019). The results of our study further validate Marshall ryegrass suitability for reservoir mudflat applications. Annual ryegrass is native to Europe and is extensively used in the United States as a supplemental forage crop for livestock and wildlife (Harper 2008). Simultaneously, it is listed as an invasive species in some parts of the United States (USDA 2020a). Applications in reservoir regulated zones will most likely be short lived due to prolonged submergence during years of normal precipitation (Coppola et al. 2019), thus precluding long-term establishment. However, drought may leave plantings exposed during spring and summer, facilitating introductions to upland habitats. An ecologically conservative alternative to annual ryegrass could be triticale, a hybrid with no documented occurrences of naturally reproducing populations in North America (USDA 2020b) that will likely not compete well with established upland plant communities due to slow growth during early life phases (Salmon et al. 2004).
Balansa clover has high potential as a mudflat enhancer in regulated zones, but caution should be used when considering this species. Balansa clover grew poorly on reservoir mudflats seeded by Norris et al. (2020), but this might have been due to low seeding rates, substandard soils, and drought rather than species performance. Maturing balansa clover specimens degraded rapidly when submerged in experimental tanks (Coppola et al. 2019); however, fully matured plants were not tested. A conservative approach to using balansa clover could be to mix it with a more tolerant and durable plant, such as Marshall ryegrass or triticale, that would enhance fish habitat regardless of balansa clover's performance. This would also reduce the total costs of using grasses with high seeding rates (e.g., triticale) because balansa clover is relatively affordable (Harper 2008). Mixing legumes with grasses is a common technique that can improve the establishment of plantings. In grass–legume mixtures, grasses germinate and establish quickly, thereby partially acting as a weed suppressant and erosion control while legumes improve nutrient availability in the soil by fixing nitrogen (Harper 2008). Mixed plantings of the two top-performing species in this study, balansa clover and Marshall ryegrass, significantly increase total yield of biomass compared with ryegrass monocultures when planted in unfertilized conditions (Santos et al. 2015). Additionally, mixtures may maximize structural heterogeneity and minimize the risk of either species failing to grow in harsh environments or persist following inundation.
Supplemental Material
Please note: The Journal of Fish and Wildlife Management is not responsible for the content or functionality of any supplemental material. Queries should be directed to the corresponding author for the article.
Data S1. Bluegill Lepomis macrochirus and Largemouth Bass Micropterus salmoides Holding Tank Temperature (°C) and Dissolved Oxygen (ppm) Measurements. Holding tanks consisted of two 6,400-L outdoor flow-through tanks. The fish in these holding tanks served as our experimental animals that provided information on selection of habitat-enhancing plants. We housed the two species separately in two holding tanks and recorded measurements approximately once daily. Holding tanks were in the same location as experimental arenas beneath a pavilion and received constant aeration and flow of new well water. We transported fish to holding tanks from sources within Oktibbeha County, Mississippi, 3 weeks before the experiment. We conducted the experiment at Mississippi State University's South Farm Aquaculture Facility during May 2018.
Available: https://doi.org/10.3996/JFWM-20-083.S1 (11 KB XLSX)
Data S2. Experimental Potted Plant Maximum Height (cm) and Stem Density (n/pot) Measurements Before and After Being Submerged for the Duration of the Experiment. Plants were those we presented to Bluegills Lepomis macrochirus and Largemouth Bass Micropterus salmoides in experimental arenas to determine selection by fish and to assess how they mediate predator–prey interactions. We conducted the experiment at Mississippi State University's South Farm Aquaculture Facility during May 2018. Sample describes the time period (pre = before submergence, post = after experiment), tank ID (1–3) describes the experimental arena the plant was submerged in, pot ID is a unique identifier for each individual pot, and plant treatment describes the cultivar of the plant. Experiments consisted of multiple trials of adding and removing fish from tanks. Between each trial, we gently moved plants underwater to different locations within their respective tanks, and we did not interrupt submergence for the duration of the experiment.
Available: https://doi.org/10.3996/JFWM-20-083.S2 (13 KB XLSX)
Data S3. Raw Count Data of Bluegills Lepomis macrochirus and Largemouth Bass Micropterus salmoides and Their Behavior in Regions of Experimental Arenas. We used this information to assess selection of submerged habitat-enhancing plants by fish in the experimental arenas and whether predator–prey interactions influenced selection. We recorded observations every 5 min for 30 min, resulting in six observations total for each region of the experimental arena for each trial. For each set of trials (trials consisted of between one and three tanks being initiated at once), we recorded the date, trial set number (1–17), ecological condition (predator [PD] = 1 predator in an experimental arena, prey [PY] = 30 prey in an experimental arena, and prey and predator [PP] = 1 predator and 30 prey in an experimental arena), and tank ID (1–3, unique identifier for each experimental arena). Sample time (minutes 5–30) describes the six repeated measures of each tank region. Plant position describes the sequential clockwise position of each pot in the exterior ring (1–8) and the interior ring (9–16), and we randomly assigned plant position to pot IDs between trials. Pot ID is the unique identifier assigned to each individual potted plant. Plant treatment describes the plant cultivar and regions outside of plants. For each sample time, we recorded observations of Bluegills (n), Largemouth Bass (n), schooled Bluegills (n, quantity of Bluegills schooling), shoaled Bluegills (n, number of Bluegills shoaling), dispersed Bluegills (n, number of Bluegills dispersed), and Largemouth Bass behavior (stationary = motionless, search = moving, or follow = pursuing Bluegills) within each tank region. Schooled fish were those aggregating together while moving, shoaled were aggregating together but stationary, and dispersed were those moving or stationary but not within the immediate vicinity of a conspecific (Pitcher 1986). We conducted the experiment at Mississippi State University's South Farm Aquaculture Facility during May 2018.
Available: https://doi.org/10.3996/JFWM-20-083.S3 (225 KB XLSX)
Reference S1. Beard TD. 1973. Overwinter drawdown: impact on the aquatic vegetation in Murphy Flowage, Wisconsin. Madison, Wisconsin: Wisconsin Department of Natural Resources. Technical Bulletin No 61.
Available: https://doi.org/10.3996/JFWM-20-083.S4 (1.55 MB PDF)
Reference S2. Ploskey GR. 1983. A review of the effects of water level changes on reservoir fisheries and recommendations for improved management. Vicksburg, Mississippi: U.S. Army Engineers Waterways Experiment Station. Technical Report E-83-3, contract number WESRF 82-24.
Available: https://doi.org/10.3996/JFWM-20-083.S5 (4.38 MB PDF)
Reference S3. Salmon DF, Mergoum M, Gómez-Macpherson HG. 2004. Triticale production and management. Pages 27–36 in Mergoum M, Gómez-Macpherson H, editors. Triticale improvement and production. Rome, Italy: Food and Agriculture Organization of the United Nations. FAO Plant Production and Protection Paper 179.
Available: https://doi.org/10.3996/JFWM-20-083.S6 (576 MB PDF)
Acknowledgments
Funding was provided by the Mississippi Department of Wildlife, Fisheries and Parks and the Reservoir Fisheries Habitat Partnership. Members of the American Fisheries Society Mississippi State University Student Sub-Unit aided in Largemouth Bass collection. The Associate Editor, anonymous reviewers, and Jonathan Spurgeon provided constructive reviews and feedback. Fish were handled humanely, as outlined by the Use of Fishes in Research Committee (2014).
Any use of trade, product, website, or firm names in this publication is for descriptive purposes only and does not imply endorsement by the U.S. Government.
References
The findings and conclusions in this article are those of the author(s) and do not necessarily represent the views of the U.S. Fish and Wildlife Service. | https://meridian.allenpress.com/jfwm/article/12/2/294/465627/Selection-of-Habitat-Enhancing-Plants-Depends-on?searchresult=1 |
Mechanisms allowing the persistence of an aquatic predator-prey system in tiny pools held by taro axils were analyzed; the findings suggest that this system may be prey-dominated, in that predator persistence depends on the existence of the prey community, whereas prey community structure depends less on predation.
Species richness and altitudinal variation in the aquatic metazoan community in bamboo phytotelmata from north Sulawesi
- Environmental Science; Researches on Population Ecology
- 2006
Among dominant taxonomic groups, the number of non-predatory culicid species per stump was smaller at the lowland site where their predator, Toxorhynchites, was more abundant, although both sites had the same number of culicids.
Cross-habitat predation in Nepenthes gracilis: the red crab spider Misumenops nepenthicola influences abundance of pitcher dipteran larvae
- Environmental Science, Biology; Journal of Tropical Ecology
- 2011
These results are among the first to demonstrate the influence of a terrestrial phytotelm forager on the abundance of pitcher organisms via direct predation, reiterating the ecological importance of terrestrial phytotelm predators for phytotelm community structure and dynamics.
Dipteran larvae and microbes facilitate nutrient sequestration in the Nepenthes gracilis pitcher plant host
- Biology; Biology Letters
- 2017
It is shown that niche segregation occurs between phorid and culicid larvae, with the former fragmenting prey carcasses and the latter suppressing fluid microbe levels, and that pitcher communities facilitate nutrient sequestration in their host.
Metabarcoding as a tool for investigating arthropod diversity in Nepenthes pitcher plants
- Biology, Environmental Science
- 2016
Networks of core arthropods and their host species were used to investigate the degree of host specificity across multiple hosts, revealing significant specialization of certain arthropod fauna.
Life history and biology of Hormosianoetus mallotae (Fashing) (Histiostomatidae: Astigmata), an obligatory inhabitant of water-filled treeholes
- Biology
- 2010
Water-filled treeholes are in reality tiny ponds in the woodland ecosystem and provide a unique habitat for a number of different organisms that make up the treehole community. The community…
Searching clusters of community composition along multiple spatial scales: a case study on aquatic invertebrate communities in bamboo stumps in West Timor
- Environmental Science; Population Ecology
- 2004
It is concluded that the community in this area was spatially heterogeneous at the stump and site levels, and it is suggested that site-level habitat heterogeneity might reduce the chance of encounters between two predators, the larvae of the Toxorhynchites mosquito and the brachyceran fly.
The crab spider–pitcher plant relationship is a nutritional mutualism that is dependent on prey‐resource quality
- Environmental Science; The Journal of Animal Ecology
- 2019
The crab spider-pitcher plant interaction is identified as a type of resource conversion mutualism in which the quality component is the amount of the underlying resource contained in each unit of resource processed; the crab spider is suggested to reduce the quality of the resource it processes while increasing its availability to the nutrient-recipient species.
Preliminary checklist of the inquiline and prey species of Nepenthes ampullaria pitchers across vegetation types in Singapore
- Environmental Science
- 2019
The contents of 147 pitchers from the pitcher plant Nepenthes ampullaria were sampled from three sites across Singapore. The primary aim of the study was to compile a comprehensive checklist of the…
Convergent Interactions Among Pitcher Plant Microcosms in North America and Southeast Asia
- Environmental Science
- 2016
This dissertation combines conceptual theory with empirical data to explore how natural selection repeatedly favors particular associations among different interacting species in multispecies systems.
References
Competition and Coexistence with A Guild of Herbivorous Insects
- Environmental Science
- 1976
This guild demonstrates that overlap values may not equal competition coefficients, that high overlap may exist because competition is rare, and that herbivorous insects may be among the least likely groups to exhibit the patterns predicted by competition theory.
Niche Relationships of a Guild of Necrophagous Flies
- Biology
- 1975
An investigation into the niche relationships of flies which exploit carrion as a larval food resource was made, supporting the Gauseian contention that species with identical niche requirements cannot coexist.
Predator-mediated, non-equilibrium coexistence of tree-hole mosquitoes in southeastern North America
- Environmental Science; Oecologia
- 2004
Unless experimentally demonstrated or reasonably inferred from circumstantial evidence, competition and coevolved niche shifts cannot be invoked to explain the coexistence of a diversity of species within a habitat type, no matter how circumscribed or discrete that habitat.
Survival, development and predatory effects of mosquito larvae in Venezuelan phytotelmata
- Environmental Science; Journal of Tropical Ecology
- 1987
First instars of native Toxorhynchites haemorrhoidalis, a predatory mosquito, were released into Heliconia bracts, bamboo internodes, and the axils of two species of Aechmea bromeliads during wet and dry seasons in a lowland rain forest in eastern Venezuela to assess the influence of microhabitat and season on predator growth and survival and prey community structure.
TEMPORAL AND SPATIAL DISTRIBUTION, GROWTH AND PREDATORY BEHAVIOUR OF TOXORHYNCHITES BREVIPALPIS (DIPTERA: CULICIDAE) ON THE KENYA COAST
- Environmental Science
- 1979
Prey density in selected bamboo traps was reduced more than two- to threefold by the presence of predator larvae during periods of peak abundance, suggesting that too few prey may limit growth of the predator under natural conditions.
Geographical variation in food web structure in Nepenthes pitcher plants
- Environmental Science
- 1985
Outlying species of Nepenthes in the Seychelles, Sri Lanka and Madagascar have fewer species of both prey and predator living in them, fewer and smaller guilds of species, much apparently empty niche space, less complex food webs, and a greater connectance.
Aggregation of Larval Diptera Over Discrete and Ephemeral Breeding Sites: The Implications for Coexistence
- Biology; The American Naturalist
- 1984
An analysis of the processes that could lead to a negative binomial distribution suggests that variation in breeding site quality or in the fecundity of female flies is unlikely to be the cause of such aggregation.
Laboratory experiments on factors affecting oviposition site selection in Toxorhynchites amboinensis (Diptera: Culicidae), with a report on the occurrence of egg cannibalism *
- Biology, Environmental Science; Medical and Veterinary Entomology
- 1988
Observations of larval behaviour while oviposition was occurring suggested that egg numbers were reduced in containers because of egg cannibalism by third and fourth instar larvae, and not because the larvae caused a deterrent effect.
PRINCIPLES OF NATURAL COEXISTENCE INDICATED BY LEAFHOPPER POPULATIONS
- Environmental Science
- 1957
Significant conclusions are drawn indicating that certain concepts of species ranges and ecological occupancy have been unnatural.
Coexistence of competitors in patchy environment with and without predation
- Environmental Science
- 1981
It is concluded, contrary to previous results, that equivalent predation may facilitate prey coexistence, given sufficient spatial variance but not much covariance in prey abundances, and provided that predators forage non-randomly, congregating in the high abundance patches. | https://www.semanticscholar.org/paper/Aquatic-arthropod-communities-in-Nepenthes-the-role-Mogi-Yong/1e9021772f1d614182ef02df16949f02a0fdd328 |
The Department of Biological Sciences has an expanding research base which, in addition to providing leading researchers of national and international standing, most importantly underpins the delivery of teaching. Research in Biological Sciences at Chester can be divided into three broad groups of expertise, namely Animal Behaviour and Conservation; Food, Nutrition and Health; and Stress and Disease.
Recent Submissions
-
Preliminary investigation of the effects of a concert on the behavior of zoo animals
To increase visitor footfall and engagement, zoos may host public events which may extend outside of typical opening hours. With plans to hold a 2-day concert at Tayto Park, Ireland, this study aimed to identify the behavioral response to the music event of a selected group of species in the zoo. Twenty-two species were observed across three Phases of the event (pre-, during and post-event). Specific behaviors of interest were categorized as active, resting, asleep, abnormal, and out of sight, with repeated observations being made at each enclosure during each Phase. Alongside these behavioral data, Sound Pressure Levels (SPLs) were concurrently recorded at the observation locations in terms of both dB(A) and dB(C). The median dB(C) levels during the event were found to be significantly higher (mdn = 64.5dB) when compared with both pre- (mdn = 60.7dB) and post-event Phases (mdn = 59.4dB), whilst dB(A) levels were only significantly higher during the event (51.7dB) when compared with the pre-event Phase (mdn = 49.8dB). We found some species-specific behavioral changes (mainly associated with active and resting behaviors) correlated with increased SPLs and/or event itself. However, the behavioral responses varied between species and there were numerous species which did not respond with any change in behavior to the increased SPLs or the event itself. This variation in response across species reinforces the need for monitoring of behavioral changes as well as consideration of their natural behavioral ecology when implementing appropriate mitigation strategies. Further research should be encouraged to provide evidence-based assessment of how music events may affect animal welfare and behavior and to test the efficacy of mitigation strategies that are implemented to safeguard animal welfare.
-
Human-controlled reproductive experience may contribute to incestuous behavior observed in reintroduced semi-feral stallions (Equus caballus)
Equine reproductive behavior is affected by many factors, some remaining poorly understood. This study tested the hypothesis that a period of captivity during the juvenile period and human-controlled reproduction may potentially be involved in the disruption of the development of incestuous mating avoidance behavior in sanctuary-reintroduced male Konik polski horses. Between 1986 and 2000, cases of incestuous behavior in harem stallions born and reared until weaning in the sanctuary were studied. Eight males lived in the sanctuary’s feral herd for the rest of their lives (the non-captive group; nC). They gained their own harem of mares without human intervention (no human-controlled reproductive activity, nHC). Another five stallions were removed as weanlings, reared in captivity and then reintroduced as adults (captive, C). Three of these C stallions were used as in-hand breeding stallions, one as a “teaser” (human-controlled reproductive activity, HC) and one was not used for reproduction in captivity (nHC). Reproductive records for 46 mares, daughters of all 13 harem stallions, were scrutinized and cases of incestuous breeding were recorded by interrogation of foal parentage records. C stallions failed to expel more daughters than nC stallions (33% vs. 18%, P = 0.045), and mated with significantly more of them (28% vs. 11%, P = 0.025). Interestingly, HC stallions expelled fewer (60%) and successfully mated with more (33%) daughters than nHC stallions (84% expelled, P = 0.013, and 10% successful mating with daughters, P = 0.010). All HC stallions bred incestuously at least once. We propose that human intervention during a critical period of development of social and reproductive behavior in young stallions, by enforced separation from their natal herd and in-hand breeding, may contribute to their later aberrant behavior and disruption of inbreeding avoidance mechanisms in these stallions. The previous occurrence of human-controlled breeding may be one of the factors promoting incestuous behavior of stallions in natural conditions. The uninterrupted presence of stallions in their harems and herd member recognition may also play important roles in inbreeding avoidance in horses.
-
ABO Blood Groups Do Not Predict Schistosoma mansoni Infection Profiles in Highly Endemic Villages of Uganda
Schistosoma mansoni is a parasite which causes significant public-health issues, with over 240 million people infected globally. In Uganda alone, approximately 11.6 million people are affected. Despite over a decade of mass drug administration in this country, hyper-endemic hotspots persist, and individuals who are repeatedly heavily and rapidly reinfected are observed. Human blood-type antigens are known to play a role in the risk of infection for a variety of diseases, due to cross-reactivity between host antibodies and pathogenic antigens. There have been conflicting results on the effect of blood type on schistosomiasis infection and pathology. Moreover, the effect of blood type as a potential intrinsic host factor on S. mansoni prevalence, intensity, clearance, and reinfection dynamics and on co-infection risk remains unknown. Therefore, the epidemiological link between host blood type and S. mansoni infection dynamics was assessed in three hyper-endemic communities in Uganda. Longitudinal data incorporating repeated pretreatment S. mansoni infection intensities and clearance rates were used to analyse associations between blood groups in school-aged children. Soil-transmitted helminth coinfection status and biometric parameters were incorporated in a generalised linear mixed regression model including age, gender, and body mass index (BMI), which have previously been established as significant factors influencing the prevalence and intensity of schistosomiasis. The analysis revealed no associations between blood type and S. mansoni prevalence, infection intensity, clearance, reinfection, or coinfection. Variations in infection profiles were significantly different between the villages, and egg burden significantly decreased with age. While blood type has proven to be a predictor of several diseases, the data collected in this study indicate that it does not play a significant role in S. mansoni infection burdens in these high-endemicity communities.
-
Behavioural Indicators of Intra- and Inter-Specific Competition: Sheep Co-Grazing with Guanaco in the Patagonian Steppe
In extensive livestock production, high densities may inhibit regulation processes, maintaining high levels of intraspecific competition over time. During competition, individuals typically modify their behaviours, particularly feeding and bite rates, which can therefore be used as indicators of competition. Over eight consecutive seasons, we investigated if variation in herd density, food availability, and the presence of a potential competitor, the guanaco (Lama guanicoe), was related with behavioural changes in domestic sheep in Chilean Patagonia. Focal sampling, instantaneous scan sampling, measures of bite and movement rates were used to quantify behavioural changes in domestic sheep. We found that food availability increased time spent feeding, while herd density was associated with an increase in vigilant behaviour and a decrease in bite rate, but only when food availability was low. Guanaco presence appeared to have no impact on sheep behaviour. Our results suggest that the observed behavioural changes in domestic sheep are more likely due to intraspecific competition rather than interspecific competition. Consideration of intraspecific competition where guanaco and sheep co-graze on pastures could allow management strategies to focus on herd density, according to rangeland carrying capacity.
-
Marginal habitats provide unexpected survival benefits to the Alpine marmot
Age-specific survival trajectories can vary significantly among wild populations. Identifying the environmental conditions associated with such variability is of primary importance to understand the dynamics of free-ranging populations. In this study, we investigated survival variations among alpine marmot (Marmota marmota) families living in areas with opposite environmental characteristics: the typical habitat of the species (alpine meadow) and a marginal area bordering the forest. We used data collected during an 11-year study in the Gran Paradiso National Park (Italy) and performed a Bayesian survival trajectory analysis on marked individuals. Furthermore, we investigated, at a territorial level, the relationships among demographic parameters and habitat variables by using a path analysis approach. Contrary to our expectations, for most of the marmot’s lifespan, survival rate was higher in the marginal site closer to the forest and with lower visibility than in the alpine meadow site. Path analysis indicated that the number of families living close to each other negatively affected the stability of the dominant couple, which in turn affected both juvenile survival and reproduction. Given the lower number of neighbouring families which inhabited the marginal site and the potentially different predation pressure by the most effective predator in the area (Aquila chrysaetos), our results suggest that species adapted to live in open habitats may benefit from living in a marginal habitat. This study highlights the importance of habitats bordering the forest in the conservation of alpine marmots.
-
Heterospecific Fear and Avoidance Behaviour in Domestic Horses (Equus caballus)
Ridden horses have been reported to be fearful of cows. We tested whether cows could provoke behavioural and cardiac fear responses in horses, and whether these responses differ in magnitude from those shown to other potential dangers. Twenty horses were exposed to a cow, a mobile object or no object. The time spent at different distances from the stimulus was measured. In a separate test, heart rate (HR), root mean square of successive differences between heartbeats (RMSSD) and the horses’ perceived fear were assessed at various distances from the stimuli. The horses avoided the area nearest to all stimuli. During hand‐leading, the cow elicited the highest HR and lowest RMSSD. Led horses’ responses to the cow and box were rated as more fearful as the distance to the stimulus decreased. Mares had a higher HR than geldings across all tests. HR positively correlated with the fearfulness rating at the furthest distance from the cow and box, and RMSSD negatively correlated with this rating in cow and control conditions. Our results show that these horses’ avoidance response to cows was similar to or higher than that shown towards a novel moving object, demonstrating that potentially, both neophobia and heterospecific communication play a role in this reaction.
-
Sex and age-specific survival and life expectancy in a free ranging population of Indri indri (Gmelin, 1788).
The critically endangered indri (Indri indri) is the largest extant lemur species and its population size is projected to decline over the next three generations due to habitat loss, hunting and climate change. Accurate information on the demographic parameters driving the population dynamics of indri is urgently needed to help decision-making regarding the conservation of this iconic species. We monitored and followed the life histories of 68 individually recognizable indris in 10 family groups in the Maromizaha New Protected Area (Madagascar) for 12 years. We estimated age and sex-specific survival trajectories using a Bayesian hierarchical survival model and found that the survival curves for male and female indris show a similar pattern, consistent with what is typically found in primates; i.e., a high infant mortality rate which declines with age in the juvenile phase and increases again for adults. Also, life expectancies at 2 years of age (e2) were found to be similar between the sexes (e2 females = 7.8 years; e2 males = 7.5 years). We suggest that the lack of strong differences in the survival patterns for male and female indris is related to the strictly monogamous mating system and the lack of sexual dimorphism in this species. Our study provides, for the first time, robust estimates for demographic parameters of indris and one of the very few datasets on survival trajectories available for primates.
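As a rough illustration of how a remaining life expectancy such as e2 can be obtained from a survival curve, the sketch below integrates conditional survivorship beyond age 2 for a toy curve. It is a simplified stand-in for the Bayesian hierarchical survival model used in the study above, and every number in it is invented.

```python
# Toy example: remaining life expectancy at age 2 from a survivorship curve.
# The survivorship values are fabricated for illustration only.
import numpy as np

ages = np.arange(0, 21)  # ages in years
# Toy survivorship l(x): survival to age 1 is 0.70 (high infant mortality),
# then an exponential decline among older animals.
surv = np.concatenate(([1.0], 0.70 * np.exp(-0.09 * (ages[1:] - 1))))

def remaining_life_expectancy(ages, surv, a):
    """Approximate e_a as the sum of survivorship beyond age a, conditional on
    being alive at age a (1-year age steps, curtate approximation)."""
    s_a = np.interp(a, ages, surv)
    later = ages > a
    return float(np.sum(surv[later] / s_a))

e2 = remaining_life_expectancy(ages, surv, a=2)
print(f"Remaining life expectancy at age 2 (toy data): {e2:.1f} years")
```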
-
Principal Component Analysis as a Novel Method for the Assessment of the Enclosure Use Patterns of Captive Livingstone’s Fruit Bats (Pteropus livingstonii)
The Spread of Participation Index (SPI) is a standard tool for assessing the suitability of enclosure design by measuring how captive animals access space. This metric, however, lacks the precision to quantify individual-level space utilization or to determine how the distribution of resources and physical features within an enclosure might influence space use. Here we demonstrate how Principal Component Analysis (PCA) can be employed to address these aims and to therefore facilitate both individual-level welfare assessment and the fine-scale evaluation of enclosure design across a range of captive settings. We illustrate the application of this methodology by investigating enclosure use patterns of the Livingstone’s fruit bat (Pteropus livingstonii) population housed at Jersey Zoo. Focal sampling was used to estimate the time each of 44 individuals in the first data collection period and 50 individuals in the second period spent in each of 42 theoretical enclosure sections. PCA was then applied to reduce the 42 sections to five and seven ecologically relevant "enclosure dimensions" for the first and second data collection periods respectively. Individuals were then assigned to the dimension that most accurately represented their enclosure use patterns based on their highest dimensional eigenvalue. This assigned dimension is hereafter referred to as the individual’s Enclosure Use Style (EUS). Sex was found to be significantly correlated with an individual’s EUS in the second period, whilst age was found to significantly influence individual fidelity to assigned EUS. When assessing the effect of resource location on group-level preference for certain sections, the presence of feeders and proximity to public viewing areas in period one, and feeders and heaters in period two, were positively correlated with space use. Finally, individual EUS remained consistent between both data collection periods. We interpret these results for this species in the context of its observed behavioural ecology in the wild and evaluate the degree to which the current captive enclosure for this population allows for optimal individual welfare through the facilitation of spatial choice. We then explore how these methods could be applied to safeguard captive animal welfare across a range of other scenarios.
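A minimal sketch of the PCA idea described in the entry above, using simulated data: a matrix of time proportions across enclosure sections is reduced to a few components, and each individual is assigned an Enclosure Use Style from the component on which it scores highest. The assignment rule, section counts and all data here are simplifications and stand-ins, not the published analysis.

```python
# Simulated example of reducing section-use data to "enclosure dimensions" with PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_bats, n_sections = 44, 42
# Each row is one individual's proportion of observed time in each section.
use = rng.dirichlet(np.ones(n_sections), size=n_bats)

# Standardise sections, then keep the first five components as candidate
# "enclosure dimensions".
scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(use))

# Assign each individual to the dimension on which it scores highest
# (a simplified stand-in for the eigenvalue-based assignment in the study).
eus = scores.argmax(axis=1)
for dim in range(5):
    print(f"Enclosure Use Style {dim + 1}: {np.sum(eus == dim)} individuals")
```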
-
Non-territorial GPS-tagged golden eagles Aquila chrysaetos at two Scottish wind farms: avoidance influenced by preferred habitat distribution, wind speed and blade motion status
Wind farms can have two broad potential adverse effects on birds via antagonistic processes: displacement from the vicinity of turbines (avoidance), or death through collision with rotating turbine blades. These effects may not be mutually exclusive. Using detailed data from 99 turbines at two wind farms in central Scotland and thousands of GPS-telemetry data from dispersing golden eagles, we tested three hypotheses. Before-and-after-operation analyses supported the hypothesis of avoidance: displacement was reduced at turbine locations in more preferred habitat and with more preferred habitat nearby. After-operation analyses (i.e. from the period when turbines were operational) showed that at higher wind speeds and in highly preferred habitat eagles were less wary of turbines with motionless blades: rejecting our second hypothesis. Our third hypothesis was supported, since at higher wind speeds eagles flew closer to operational turbines; especially – once more – turbines in more preferred habitat. After operation, eagles effectively abandoned inner turbine locations, and flight line records close to rotor blades were rare. While our study indicated that whole-wind farm functional habitat loss through avoidance was the substantial adverse impact, we make recommendations on future wind farm design to minimise collision risk further. These largely entail developers avoiding outer turbine locations which are in and surrounded by swathes of preferred habitat. Our study illustrates the insights which detailed case studies of large raptors at wind farms can bring and emphasises that the balance between avoidance and collision can have several influences.
-
Responses of dispersing GPS-tagged Golden Eagles (Aquila chrysaetos) to multiple wind farms across Scotland
Wind farms may have two broad potential adverse effects on birds via antagonistic processes: displacement from the vicinity of turbines (avoidance), or death through collision with rotating turbine blades. Large raptors are often shown or presumed to be vulnerable to collision and are demographically sensitive to additional mortality, as exemplified by several studies of the Golden Eagle Aquila chrysaetos. Previous findings from Scottish Eagles, however, have suggested avoidance as the primary response. Our study used data from 59 GPS-tagged Golden Eagles with 28 284 records during natal dispersal before and after turbine operation < 1 km of 569 turbines at 80 wind farms across Scotland. We tested three hypotheses using measurements of tag records’ distance from the hub of turbine locations: (1) avoidance should be evident; (2) older birds should show less avoidance (i.e. habituate to turbines); and (3) rotor diameter should have no influence (smaller diameters are correlated with a turbine’s age, in examining possible habituation). Four generalized linear mixed models (GLMMs) were constructed with intrinsic habitat preference of a turbine location using Golden Eagle Topography (GET) model, turbine operation status (before/after), bird age and rotor diameter as fixed factors. The best GLMM was subsequently verified by k-fold cross-validation and involved only GET habitat preference and presence of an operational turbine. Eagles were eight times less likely to be within a rotor diameter’s distance of a hub location after turbine operation, and modelled displacement distance was 70 m. Our first hypothesis expecting avoidance was supported. Eagles were closer to turbine locations in preferred habitat but at greater distances after turbine operation. Results on bird age (no influence to 5+ years) rejected hypothesis 2, implying no habituation. Support for hypothesis 3 (no influence of rotor diameter) also tentatively inferred no habituation, but data indicated birds went slightly closer to longer rotor blades although not to the turbine tower. We proffer that understanding why avoidance or collision in large raptors may occur can be conceptually envisaged via variation in fear of humans as the ‘super predator’ with turbines as cues to this life-threatening agent.
-
Evaluation of the Feasibility, Reliability, and Repeatability of Welfare Indicators in Free-Roaming Horses: A Pilot Study.
Validated assessment protocols have been developed to quantify welfare states for intensively managed sport, pleasure, and working horses. There are few protocols for extensively managed or free-roaming populations. Here, we trialed welfare indicators to ascertain their feasibility, reliability, and repeatability using free-roaming Carneddau Mountain ponies as an example population. The project involved (1) the identification of animal and resource-based measures of welfare from both the literature and discussion with an expert group; (2) testing the feasibility and repeatability of a modified body condition score and mobility score on 34 free-roaming and conservation grazing Carneddau Mountain ponies; and (3) testing a prototype welfare assessment template comprising 12 animal-based and 6 resource-based welfare indicators, with a total of 20 questions, on 35 free-roaming Carneddau Mountain ponies to quantify inter-assessor reliability and repeatability. This pilot study revealed that many of the indicators were successfully repeatable and had good levels of inter-assessor reliability. Some of the indicators could not be verified for reliability due to low/absent occurrence. The results indicated that many animal and resource-based indicators commonly used in intensively managed equine settings could be measured in-range with minor modifications. This study is an initial step toward validating a much-needed tool for the welfare assessment of free-roaming and conservation grazing ponies.
-
A Global Survey of Current Zoo Housing and Husbandry Practices for Fossa: A Preliminary Review
The fossa is a specialized Malagasy carnivore housed in ex situ facilities since the late 19th century. Moderate breeding success has occurred since the 1970s, and welfare issues (notably stereotypic pacing behaviour) are commonly documented. To understand challenges relating to fossa housing and husbandry (H) across global facilities and to identify areas of good practice that dovetail with available husbandry standards, a survey was distributed to ZIMS-registered zoos in 2017. Results showed that outdoor housing area and volume varied greatly across facilities, the majority of fossa expressed unnatural behaviours, with pacing behaviour the most frequently observed. All fossa received enrichment, and most had public access restricted to one or two sides of the enclosure. The majority of fossa were locked in/out as part of their daily management and forty-one percent of the fossa surveyed as breeding individuals bred at the zoo. Dense cover within an enclosure, restricted public viewing areas, a variable feeding schedule and limited view of another species from the fossa exhibit appear to reduce the risk of unnatural behavior being performed. The achievement of best practice fossa husbandry may be a challenge due to its specialized ecology, the limited wild information guiding captive care, and the range of housing dimensions and exhibit features provided by zoos that makes identification of standardized practices difficult. We recommended that holders evaluate how and when enrichment is provided and assess what they are providing for environmental complexity as well as consider how the public views their fossa.
-
Assessing the behaviour, welfare and husbandry of mouse deer (Tragulus spp.) in European zoos
Mouse deer are primitive, forest ungulates found in Asia and Africa. Both the lesser mouse deer (Tragulus javanicus) and the Philippine mouse deer (T. nigricans) are managed in European zoos, but inconsistent breeding success between institutions, high neonatal mortality and a general lack of research on their husbandry and behaviour were identified by the coordinators of the European Endangered Species Programme (EEP) and the European Studbook (ESB) for each species, respectively. This study is the first to provide a behavioural description for the Philippine mouse deer and to compile a detailed behavioural repertoire for both species. Our aim was to identify the effects of current husbandry and management practices on the reproduction, behaviour and welfare of zoo-housed mouse deer. Questionnaires on husbandry and management practices were sent to all institutions in the EEP and ESB for the lesser and Philippine mouse deer, respectively, and behavioural data were collected in 15 of these zoos. For the lesser mouse deer, results show a positive effect of vegetation cover on breeding success, foraging and moving behaviours. The provision of enrichment and presence of water ponds also positively affected these behaviours. The time that pairs spent in close proximity had a negative effect on breeding success, but animals in more vegetated enclosures spent less time in close proximity to each other. Results could be partially explained by the natural habitat of this usually solitary species being tropical forest, which provides local water sources and undergrowth for cover from predators. For the Philippine mouse deer there were differences in activity measures recorded between zoos, but the sample size was small with differences in training, enrichment and vegetation cover likely to have been important. In conclusion, since mouse deer inhabit overlapping male and female territories, the usual practice of housing breeding pairs together may be appropriate, but we suggest that they should be provided with opportunities to avoid each other in complex enclosures with ample vegetation cover to maximise their natural behavioural repertoire and breeding success.
-
Street-level green spaces support a key urban population of the threatened Hispaniolan Parakeet Psittacara chloropterus
While urbanisation remains a major threat to biodiversity, urban areas can sometimes play an important role in protecting threatened species, especially exploited taxa such as parrots. The Hispaniolan Parakeet Psittacara chloropterus has been extirpated across much of Hispaniola, including from most protected areas, yet Santo Domingo (capital city of the Dominican Republic) has recently been found to support the island’s densest remaining population. In 2019, we used repeated transects and point-counts across 60 1 km2 squares of Santo Domingo to examine the distribution of parakeets, identify factors that might drive local presence and abundance, and investigate breeding ecology. Occupancy models indicate that parakeet presence was positively related to tree species richness across the city. N-Mixture models show parakeet encounter rates were correlated positively with species richness of trees and number of discrete ‘green’ patches (> 100 m2) within the survey squares. Hispaniolan Woodpecker Melanerpes striatus, the main tree-cavity-producing species on Hispaniola, occurs throughout the city, but few parakeet nests are known to involve the secondary use of its or other cavities in trees/palms. Most parakeet breeding (perhaps 50‒100 pairs) appears to occur at two colonies in old buildings, and possibly only a small proportion of the city’s 1,500+ parakeets that occupy a single roost in street trees breed in any year. Our models emphasise the importance of parks and gardens in providing feeding resources for this IUCN Vulnerable species. Hispaniola’s urban centres may be strongholds for populations of parakeets and may even represent sources for birds to recolonise formerly occupied areas on the island.
-
Social Network Analysis of small social groups: application of a hurdle GLMM approach in the Alpine marmot (Marmota marmota)
Social Network Analysis (SNA) has recently emerged as a fundamental tool to study animal behavior. While many studies have analyzed the relationship between environmental factors and behavior across large, complex animal populations, few have focused on species living in small groups due to limitations of the statistical methods currently employed. Some of the difficulties are often in comparing social structure across different sized groups and accounting for zero-inflation generated by analyzing small social units. Here we use a case study to highlight how Generalized Linear Mixed Models (GLMMs) and hurdle models can overcome the issues inherent to study of social network metrics of groups that are small and variable in size. We applied this approach to study aggressive behavior in the Alpine marmot (Marmota marmota) using an eight-year long dataset of behavioral interactions across 17 small family groups (7.4 ± 3.3 individuals). We analyzed the effect of individual and group-level factors on aggression, including predictors frequently inferred in species with larger groups, such as the closely related yellow-bellied marmot (Marmota flaviventris). Our approach included the use of hurdle GLMMs to analyze the zero-inflated metrics that are typical of aggressive networks of small social groups. Additionally, our results confirmed previously reported effects of dominance and social status on aggression levels, thus supporting the efficacy of our approach. We found differences between males and females in terms of levels of aggression and on the roles occupied by each in agonistic networks that were not predicted in a socially monogamous species. Finally, we provide some perspectives on social network analysis as applied to small social groups to inform subsequent studies.
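The sketch below illustrates the two-part hurdle structure mentioned in the entry above on simulated data: a binomial model for whether any aggression is observed, and a count model for its intensity when it occurs. The published analysis used hurdle GLMMs with random effects and a zero-truncated count component; both are omitted here for brevity, so this is only an outline of the idea, and the variable names are invented.

```python
# Two-part "hurdle" illustration on simulated aggression counts (no random effects).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "dominant": rng.integers(0, 2, n),      # 1 = dominant individual
    "group_size": rng.integers(3, 12, n),   # number of marmots in the family group
})
# Simulate zero-inflated counts: aggression occurs mainly in dominants,
# and its intensity increases with group size.
p_any = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * df["dominant"])))
occurred = rng.random(n) < p_any
df["aggression"] = np.where(occurred, rng.poisson(1.0 + 0.2 * df["group_size"]), 0)
df["any_aggression"] = (df["aggression"] > 0).astype(int)

# Part 1: hurdle (does any aggression occur?)
hurdle = smf.glm("any_aggression ~ dominant + group_size", df,
                 family=sm.families.Binomial()).fit()

# Part 2: intensity given occurrence (plain Poisson here; a zero-truncated
# count model would be the more faithful choice).
positives = df[df["aggression"] > 0]
intensity = smf.glm("aggression ~ dominant + group_size", positives,
                    family=sm.families.Poisson()).fit()

print(hurdle.summary().tables[1])
print(intensity.summary().tables[1])
```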
-
The role of brain size on mammalian population densities
1. The local abundance or population density of different organisms often varies widely. Understanding what determines this variation is an important, but not yet fully resolved question in ecology. Differences in population density are partly driven by variation in body size and diet among organisms. Here we propose that the size of an organism’s brain could be an additional, overlooked, driver of mammalian population densities. 2. We explore two possible contrasting mechanisms by which brain size, measured by its mass, could affect population density. First, because of the energetic demands of larger brains and their influence on life history, we predict mammals with larger relative brain masses would occur at lower population densities. Alternatively, larger brains are generally associated with a greater ability to exploit new resources, which would provide a competitive advantage leading to higher population densities among large‐brained mammals. 3. We tested these predictions using phylogenetic path analysis, modelling hypothesized direct and indirect relationships between diet, body mass, brain mass and population density for 656 non‐volant terrestrial mammalian species. We analysed all data together and separately for marsupials and the four taxonomic orders with most species in the dataset (Carnivora, Cetartiodactyla, Primates, Rodentia). 4. For all species combined, a single model was supported showing lower population density associated with larger brains, larger bodies and more specialized diets. The negative effect of brain mass was also supported for separate analyses in Primates and Carnivora. In other groups (Rodentia, Cetartiodactyla and marsupials) the relationship was less clear: supported models included a direct link from brain mass to population density but 95% confidence intervals of the path coefficients overlapped zero. 5. Results support our hypothesis that brain mass can explain variation in species’ average population density, with large‐brained species having greater area requirements, although the relationship may vary across taxonomic groups. Future research is needed to clarify whether the role of brain mass on population density varies as a function of environmental (e.g. environmental stability) and biotic conditions (e.g. level of competition).
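As an illustration of the piecewise logic behind a path analysis like the one described above, the sketch below fits two regressions on simulated, standardized variables and multiplies coefficients along a path to obtain an indirect effect. The real study used phylogenetic path analysis (accounting for shared ancestry); phylogeny is ignored here and every value is fabricated.

```python
# Piecewise path model on simulated data (phylogenetic covariance ignored).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 656
body = rng.normal(size=n)    # standardized log body mass
diet = rng.normal(size=n)    # standardized diet score
brain = 0.8 * body + 0.1 * diet + rng.normal(scale=0.4, size=n)
density = -0.5 * body - 0.3 * brain + 0.2 * diet + rng.normal(scale=0.6, size=n)
df = pd.DataFrame({"body": body, "diet": diet, "brain": brain, "density": density})

m_brain = smf.ols("brain ~ body + diet", df).fit()              # upstream paths
m_density = smf.ols("density ~ brain + body + diet", df).fit()  # downstream paths

direct_brain = m_density.params["brain"]
indirect_body_via_brain = m_brain.params["body"] * m_density.params["brain"]
print(f"Direct brain -> density path: {direct_brain:.2f}")
print(f"Indirect body -> brain -> density path: {indirect_body_via_brain:.2f}")
```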
-
Contrasting responses to salinity and future ocean acidification in arctic populations of the amphipod Gammarus setosus
Climate change is leading to alterations in salinity and carbonate chemistry in arctic/sub-arctic marine ecosystems. We examined three nominal populations of the circumpolar arctic/subarctic amphipod, Gammarus setosus, along a salinity gradient in the Kongsfjorden-Krossfjorden area of Svalbard. Field and laboratory experiments assessed physiological (haemolymph osmolality and gill Na+/K+-ATPase activity, NKA) and energetic responses (metabolic rates, MO2, and Cellular Energy Allocation, CEA). In the field, all populations had similar osmoregulatory capacities and MO2, but lower-salinity populations had lower CEA. Reduced salinity (S = 23) and elevated pCO2 (~1000 μatm) in the laboratory for one month increased gill NKA activities and reduced CEA in all populations, but increased MO2 in the higher-salinity population. Elevated pCO2 did not interact with salinity and had no effect on NKA activities or CEA, but reduced MO2 in all populations. Reduced CEA in lower- rather than higher-salinity populations may have longer term effects on other energy demanding processes (growth and reproduction).
-
The long-term impact of infant rearing background on the behavioural and physiological stress response of adult common marmosets (Callithrix jacchus)
Although triplet litters are increasing in captive colonies of common marmosets, parents can rarely rear more than two infants without human intervention. There is however much evidence that early life experience, including separation from the family, can influence both vulnerability and resilience to stress. The current study investigated the behavioural and hypothalamic pituitary adrenal (HPA) axis response to the routine stressor of capture and weighing in adult common marmosets (Callithrix jacchus), reared as infants under 3 different conditions: family-reared twins (n = 6 individuals), family-reared animals from triplet litters where only 2 remain (2stays: n = 8) and triplets receiving supplementary feeding from humans (n = 7). In the supplementary feeding condition, infants remained in contact with each other when they were removed from the family. There were no significant differences (P > 0.5) in cortisol level or behaviour between the rearing conditions. In all conditions, salivary cortisol decreased from baseline to post-capture, which was accompanied by increases in agitated locomotion. Family reared 2stays demonstrated significant cortisol decreases from baseline to post capture (post 5 min.: P = 0.005; post 30 min.: P = 0.018), compared to the other conditions. Family reared twins displayed significantly more behavioural changes following the stressor than the other conditions, including significant increases in scent marking (post 5 min. and post 30 min.: P = 0.028) and significant decreases in inactive alert (post 5 min.: P = 0005; post 30 min.: P = 0.018), calm locomotion (post 5 min.: P = 0.028; post 30 min.: P = 0.046) and proximity to partner (post 5 min.: P = 0.046). There were increases in behaviour suggesting reduced anxiety, including significantly more exploration post-capture in supplementary fed triplets (post 5 min.: P = 0.041), and significantly more foraging post capture in family reared 2stays (post 5 min. and post 30 min.: P = 0.039). However, as differences between rearing conditions were minimal, supplementary feeding of large litters of marmosets at this facility did not have a major effect on stress vulnerability, suggesting that this rearing practice may be the preferred option if human intervention is necessary to improve survival of large litters.
-
Effects of a no-take reserve on mangrove fish assemblages: incorporating seascape connectivity
No-take reserves (NTRs) have been effective at conserving fish assemblages in tropical systems such as coral reefs, but have rarely been evaluated in turbid tropical estuaries. The present study evaluated the effect of a mangrove NTR on the conservation of juvenile fish abundance, commercial fish biomass and biodiversity at the assemblage level, and the abundance of juveniles, target and non-target adults at the family level. The evaluation incorporated one aspect of seascape connectivity, namely proximity to the sea, or in this case, the Gulf of Paria. Linear mixed models showed that the NTR had a positive effect only on species richness at the assemblage level. However, juvenile fish abundance, commercial fish biomass, taxonomic distinctness and functional diversity were not enhanced in the NTR. The inclusion of connectivity in these models still failed to identify any positive effects of the NTR at the assemblage level. Yet, there were significant benefits to juvenile fish abundance for 5 of 7 families, and for 1 family of non-target adults. Possible explanations for the limited success of the NTR for fish assemblages include failing to account for the ecology of fish species in NTR design, the drawbacks of ‘inside−outside’ (of the NTR) experimental designs and the fact that fishing does not always impact non-target species. It is important to recognise that mangrove NTRs do not necessarily benefit fish assemblages as a whole, but that finer-scale assessments of specific families may reveal some of the proclaimed benefits of NTRs in tropical estuaries.
Predator cues and diet, when studied separately, have been shown to affect the body shape of organisms. Previous studies show that the morphological responses to predator absence/presence and diet may be similar, and hence could confound the interpretation of the causes of morphological differences found between groups of individuals. In this study, we simultaneously examined the effect of these two factors on body shape and performance in crucian carp in a laboratory experiment. Crucian carp (Carassius carassius) developed a shallow body shape when feeding on zooplankton prey and a deep body shape when feeding on benthic chironomids. In addition, the presence of chemical cues from a pike predator affected body shape, where a shallow body shape was developed in the absence of pike and a deep body shape was developed in the presence of pike. Foraging activity was low in the presence of pike cues and when chironomids were given as prey. Our results thereby suggest that the change in body shape could be indirectly mediated through differences in foraging activity. Finally, the induced body shape changes affected the foraging efficiency, where crucians raised on a zooplankton diet or in the absence of pike cues had a higher foraging success on zooplankton compared to crucians raised on a chironomid diet or in the presence of pike. These results suggest that body shape changes in response to predators can be associated with a cost, in terms of competition for resources.
This thesis deals with the evolution of individuals within a species adapted to utilize specific resources, i.e. resource polymorphism. Although a well-known phenomenon, the understanding of the mechanisms behind it is not complete. Considering the ruling theories, resource polymorphism is suggested to depend on severe competition for resources, the presence of open niches to be occupied leading to a reduction in competition, and disruptive selection where generalists are out-competed due to trade-offs in foraging efficiency for different prey. In order to study resource polymorphism, I have used fish as the animal group in focus, and the methods I have used range over laboratory experiments, field experiments, literature surveys and theoretical modelling.
In my work, I have shown that different resource use induces different body shapes and that the rate of change is dependent on the encounter rate of different resources. The induced body changes partly led to increased foraging efficiency, but surprisingly I did not find any trade-offs due to specialization. However, when studying predation risk in relation to resource polymorphism, my studies suggest that resource use and predation risk may act as balancing factors in such a way that disruptive selection can take place.
My work also shows that population feedbacks have to be explored when considering the evolution of resource polymorphism. In pond and field experiments, I found that changes in resource densities affected the actual resource use despite previous adaptations to certain resources. By performing a literature survey, I found that cannibalism, indirectly through its effect on population dynamics, seems to facilitate the evolution of resource polymorphism. Modelling a size-structured population, I found that resource dynamics were stabilized, and the relative availability of different resources was levelled out due to cannibalism.
Taken together, my studies strongly suggest that to understand the development of resource polymorphism in consumer populations, future studies have to include the effect of a dynamic environment with respect to both resources and predators.
Small animals vulnerable to predation, such as rodents, have a strong preference for sites that provide physical protection from predators. This is likely to affect not only their use of space and activity but also the ease with which they can defend a territory, since the likelihood of encountering (or losing) intruders and their willingness to compete are affected by the quality and distribution of resources and the structural complexity of the habitat. To examine how these different habitat factors interact to influence territorial behaviour in male house mice, Mus domesticus, which inhabit environments with very different levels of complexity and resource distribution, we housed male-female pairs in enclosures representing one of eight habitat types varying in ground-level structure (open/complex), overhead cover (present/absent) and distribution of protected nest sites and food (resources clumped together/scattered). Neighbouring pairs were allowed to interact five times over 3 days and we examined behaviour during the first (unfamiliar) and fifth (familiar) periods. Initially, encounter rates were two to three times higher in open habitats with overhead cover than in either complex habitats or open habitats without cover, and higher when resources were scattered than when they were clumped. Aggressive interactions between unfamiliar males were more prolonged in habitats with open ground-level structure, where pursuits followed restricted pathways. The effects of overhead cover on aggression among unfamiliar neighbours unexpectedly depended on the origin of the mice. Once neighbours learnt the outcome of their interactions, aggressive interactions were most prolonged in habitats with scattered resources and complex ground-level structure, making these habitats the most difficult to defend. © 2004 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved. | https://eprints.ncl.ac.uk/71323 |
Intrinsic and extrinsic factors interact to create unique selection pressures that influence the behavior of individuals both within and among populations. Intrinsic differences in body size, age, sex, and reproductive status can contribute to behavioral variation among conspecifics. Environmental influences include predation pressure, resource availability, and abiotic variables such as temperature. I examined paedomorphic Oklahoma salamanders (Eurycea tynerensis) from two spatially segregated populations in Missouri and Oklahoma to determine whether antipredator behavior, swimming speed, activity, and proximity to cover were influenced by sex, reproductive condition, or population of origin. The two populations differed in several respects, with individuals from the Missouri population being generally more active (especially during the daytime), with higher swimming speeds (especially males) and shorter latencies to strike at prey. Predation risk was simulated by exposing salamanders to chemical stimuli from a benthic fish predator (Ozark sculpin, Cottus hypselurus). Salamanders from both populations were more likely to swim to the surface and altered their latencies to move following exposure to predatory threat from sculpin. Latency differences were affected by sex; females increased while males decreased latency to move when exposed to threatening stimuli. Non-gravid females from the Oklahoma population showed a lower affinity for cover than either males or gravid females. The behavioral differences between populations and among sex classes likely reflect both intrinsic and extrinsic factors that influence fitness trade-offs between reproduction, foraging, and antipredator behavior.
Copyright
© Lauren Joyce Rudolph
Recommended Citation
Rudolph, Lauren Joyce, "Variation in Behavior of Different Populations and Sex Classes of Paedomorphic Oklahoma Salamanders" (2015). MSU Graduate Theses. 1341. | https://bearworks.missouristate.edu/theses/1341/ |
Untangling the influences of fire, habitat and introduced predators on the endangered heath mouse.
Abstract
Globally, species extinctions are driven by multiple interacting factors including altered fire regimes and introduced predators. In flammable ecosystems, there is great potential to use fire for animal conservation, however most fire-based conservation strategies do not explicitly consider interacting factors. In this study, we sought to understand the interrelationships between the endangered heath mouse Pseudomys shortridgei, fire, resource availability and the introduced fox Vulpes vulpes in southeast Australia. We predicted that heath-mouse relative abundance would respond indirectly to post-fire age class (recently burnt; 0-3 years since fire, early; 4-9 years, mid; 10-33 years and late; 34-79 years) via the mediating effects of resources (shrub cover and plant-group diversity) and fox relative abundance. We used structural equation modelling to determine the strength of hypothesized pathways between variables, and mediation analysis to detect indirect effects. Both the cover of shrubs (0-50 cm from the ground) and fox relative abundance were associated with post-fire age class. Shrub cover was highest 0-9 years after fire, while fox relative abundance was highest in recently burnt vegetation (0-3 years after fire). Heath mice were positively correlated with shrub cover and plant-group diversity, and negatively correlated with fox relative abundance. We did not detect a direct relationship between heath mice and post-fire age class, but they were indirectly associated with age class via its influence on both shrub cover and fox relative abundance. Our findings suggest that heath mice will benefit from a fire regime promoting dense shrub regeneration in combination with predator control. Understanding the indirect effects of fire on animals may help to identify complementary management practices that can be applied concurrently to benefit vulnerable species. Analytical and management frameworks that include multiple drivers of species abundance and explicitly recognize the indirect effects of fire regimes will assist animal conservation.
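A simplified sketch of the mediation logic in this abstract, using simulated data: years since fire influences shrub cover and fox activity, which in turn influence heath-mouse relative abundance, and indirect effects are taken as products of path coefficients. The published study fitted a structural equation model with four categorical post-fire age classes; the continuous fire variable, the OLS models and all numbers below are illustrative assumptions only.

```python
# Illustrative product-of-paths mediation on simulated data (not the published SEM).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 200
years_since_fire = rng.uniform(0, 80, n)
# Shrub cover peaks a few years after fire; fox activity is highest soon after fire.
shrub = np.clip(40 - 0.4 * np.abs(years_since_fire - 6) + rng.normal(0, 5, n), 0, None)
fox = np.clip(3 - 0.03 * years_since_fire + rng.normal(0, 0.5, n), 0, None)
mouse = np.clip(0.2 * shrub - 1.5 * fox + rng.normal(0, 2, n), 0, None)
df = pd.DataFrame({"fire": years_since_fire, "shrub": shrub, "fox": fox, "mouse": mouse})

m_shrub = smf.ols("shrub ~ fire", df).fit()                 # fire -> shrub cover
m_fox = smf.ols("fox ~ fire", df).fit()                     # fire -> fox activity
m_mouse = smf.ols("mouse ~ shrub + fox + fire", df).fit()   # mediators + direct path

via_shrub = m_shrub.params["fire"] * m_mouse.params["shrub"]
via_fox = m_fox.params["fire"] * m_mouse.params["fox"]
print(f"Indirect effect of fire via shrub cover: {via_shrub:.3f}")
print(f"Indirect effect of fire via fox activity: {via_fox:.3f}")
print(f"Direct effect of fire: {m_mouse.params['fire']:.3f}")
```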
Author(s): Nalliah, R.; Sitters, H.; Smith, A.; Stefano, J. di
Author Affiliation: School of Ecosystem and Forest Sciences, The University of Melbourne, 4 Water Street, Creswick, VIC 3363, Australia. | https://www.cabi.org/isc/abstract/20220208932
Czarnecka, M., Kakareko, T., Jermacz, Ł., Pawlak, R., & Kobak, J. (2019). Combined effects of nocturnal exposure to artificial light and habitat complexity on fish foraging. Science of The Total Environment, 684, 14–22.
Abstract: Due to the widespread use of artificial light, freshwater ecosystems in urban areas at night are often subjected to light of intensities exceeding that of the moonlight. Nocturnal dim light could modify fish behaviour and benefit visual predators because of enhanced foraging success compared to dark nights. However, effects of nocturnal light could be mitigated by the presence of structured habitats providing refuges for prey. We tested in laboratory experiments whether nocturnal light of low intensity (2 lx) increases foraging efficiency of the Eurasian perch (Perca fluviatilis) on invertebrate prey (Gammarus fossarum). The tests were conducted at dusk and night under two light regimes: natural cycle with dark nights and disturbed cycle with artificially illuminated nights, in habitats differing in structural complexity: sand and woody debris. We found that nocturnal illumination significantly enhanced the consumption of gammarids by fish compared to dark nights. In addition, the perch was as effective a predator in illuminated nights (2 lx) as at dusk (10 lx). Woody debris provided an effective refuge only in combination with undisturbed darkness, but not in illuminated nights. Our results suggest that nocturnal illumination in aquatic ecosystems may contribute to significant reductions in invertebrate population sizes through fish predation. The loss of darkness reduces the possibility of using shelters by invertebrates and hence the effects of elevated light levels at night could not be mitigated by increased habitat complexity.
Keywords: Animal; fishes; Perca fluviatilis; Gammarus fossarum; gammarids; aquatic ecosystems
Pu, G., Zen, D., Mo, L., He, W., Zhou, L., Huang, K., et al. (2019). Does artificial light at night change the impact of silver nanoparticles on microbial decomposers and leaf litter decomposition in streams? Environ. Sci.: Nano, 6, 1728–1739.
Abstract: The toxic effects of silver nanoparticles (AgNP) to aquatic species and ecosystem processes have been the focus of increasing research in ecology, but their effects under different environmental stressors, such as the ongoing anthropogenic artificial light at night (ALAN) which can cause a series of ecological effects and will potentially interact with other stressors, remain poorly understood. Here, we aimed to assess the combined effects of AgNP and ALAN on the activities and community structure of fungi and bacteria associated to plant litter in a stream. The results showed that ALAN not only led to changes in the average hydrodynamic diameter, ζ-potential and dissolved concentration of AgNP but also inhibited the enzyme activities of leucine-aminopeptidase (LAP), polyphenol oxidase (PPO) and peroxidase (PER) associated to microbes involved in litter decomposition. The negative effect of AgNP on the decomposition of Pterocarya stenoptera leaf litter was alleviated by ALAN owing to the reduction of Ag+ concentration in the microcosm and lignin content in the leaf litter in the A-AgNP treatments, the enhancement of β-glucosidase (β-G) activities and the increase of microbial biomass. The effect of ALAN alone or combined with AgNP or AgNO3 on the taxonomic composition of fungi was much greater than that on bacteria. Linear discriminant analysis effect size (LEfSe) demonstrated that each treatment had its own fungal and bacterial indicator taxa, from the phylum to genus levels, indicating that the microbial communities associated with litter decomposition can change their constituent taxa to cope with different stressors. These results reveal that ALAN can decrease the toxicity of AgNP and highlight the importance of considering ALAN during the assessment of the risk posed by nanoparticles to freshwater biota and ecosystem processes. | http://alandb.darksky.org/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%2C%20area%20FROM%20refs%20WHERE%20keywords%20RLIKE%20%22aquatic%20ecosystems%22%20ORDER%20BY%20first_author%2C%20author_count%2C%20author%2C%20year%2C%20title&client=&formType=sqlSearch&submit=Cite&viewType=Print&showQuery=0&showLinks=0&showRows=5&rowOffset=&wrapResults=1&citeOrder=&citeStyle=APA&exportFormat=RIS&exportType=html&exportStylesheet=&citeType=html&headerMsg= |
Africa: Even Bushbabies Get Stressed – Here’s How We Know, and What It Means
Many South Africans will be familiar with bushbabies – or, at least, with their distinctive call. The small animal, more formally known as the thick-tailed greater galago, takes its common name from that call; it sounds like a crying baby.
Bushbabies are primates. They have large eyes and are nocturnal creatures. They’re usually spotted meandering through tall trees at night in search of fruit to eat.
Very little research has been conducted about bushbabies in South Africa since the 1980s, partly because they are not gregarious or easy to observe. And almost nothing is known about what physiological mechanisms they and other African primates use to cope with environmental and social changes. Climate change and human encroachment on their habitat, for example, may affect their food sources, their reproductive success, and possibly their survival. We set out to help fill in this knowledge gap.
Our study explored the main factors that contribute to changes in bushbabies’ physiology. These included the influences of diet, weather and their reproductive state. We tested their adrenocortical activity (that is, the hormones they secreted) across a 12-month period at the Lajuma Research Centre in the Soutpansberg mountains in South Africa’s Limpopo province.
When an animal is exposed to any form of change – for instance predator exposure, temperature changes, or mating – its physiological stress response is activated and glucocorticoid hormones are secreted. Glucocorticoid hormones play a part in numerous mechanisms in the body. Their primary functions include growth, the maintenance of energy requirements, and the immune and stress responses. An acute secretion of glucocorticoids is healthy. But long-term exposure can have detrimental effects on the body: immunity and reproductive capabilities, for example, may be reduced. So there's a lot to learn from an animal's long-term glucocorticoid patterns.
Over the 12 months of our study, we identified the main factors that affect bushbabies’ responses to changes in the environment. We found that female bushbabies were more susceptible than males to elevated glucocorticoid levels brought on by environmental changes. This may have implications for the species’ longer term ability to adapt to dwindling food availability or a shifting climate, for instance.
Hormonal changes
When an animal is exposed to some kind of change, glucocorticoids are released into the bloodstream to reach their target organ or tissue. After this they are broken down in the liver to create glucocorticoid metabolites or by-products; these metabolites are then excreted from the body. That means researchers can study animals’ faeces as a proxy to monitor adrenocortical activity.
This method of sampling has become popular in science. It requires little direct interaction with an animal, minimising the risk of stress or injury.
The Lajuma Research Centre consists of a variety of habitats including mist-belt forests and savannah grassland. Temperatures range from 37°C in summer to 0°C in winter and rain falls in the summer months. Living in this highly seasonal environment, bushbabies need to withstand fluctuations in food availability. The fruit, insects and gum they eat aren’t as abundant in winter.
They also experience constantly changing social interactions. These are generally solitary animals, but each year they must interact during the mating season or females must look after their offspring during the lactating period.
To survive, the species should be adaptive, or “plastic”: hormonal fluctuations should occur, but there shouldn’t be consistently high concentrations of glucocorticoids.
We started by establishing which assay or “hormone detector” would be most accurate in detecting the metabolites of this species. Then, to explore the effect of seasonal and social factors on metabolite levels, we collected faecal samples from wild individuals over an entire year. Animals were captured using traps – we’ve been trapping these individuals since 2013 for different studies and they kept coming back for the free food. They were identified, weighed, and released, and we collected the faeces that had accumulated in the traps.
We also determined seasonal food availability by taking tree gum samples and counting the available insects and the seeds in faecal samples.
The results revealed that males did not have a significant change in faecal glucocorticoid metabolite levels across seasons or during important social events such as mating. This was unexpected: we had predicted that the mating season, combined with the less favourable winter conditions, would cause a dramatic rise in levels. We suspect the galagos adjust their behaviour to reduce their activities and, thus, their energy use and glucocorticoid secretion during the colder months.
We determined that the lactation period had the greatest impact on female galagos’ glucocorticoid levels. Lactation uses a lot of energy and has been shown to cause increases in glucocorticoid concentrations in several other primate species. It could also be an influence of “maternal stress” when the mother must care for her offspring.
We found that changes in food availability influenced females’ glucocorticoid concentrations. The lactation period is in summer, when food is amply available. This could create higher levels of competition between individuals. Previous research on lesser galagos has shown females may elevate aggression during times of high food availability and periods when they need to look after their young. Altogether, these factors could have caused the rise in faecal glucocorticoid metabolite concentrations.
Final conclusions
Our study shows that faecal glucocorticoid metabolite concentrations are most affected by food availability and reproductive state. Females are more likely to experience higher concentrations because of the physiological costs of reproduction. The results suggest they are more sensitive to environmental change than males. Overall, bushbabies are fairly resilient to change – for now.
This information will help to contextualise future research on the impact of environmental change, human environmental degradation and especially climate change, which may have an impact on the survival of this primate species.
Professor Adrian Tordiffe, Professor Michelle Sauther, Professor Frank Cuozzo, Dr James Millette, Professor Andre Ganswindt and Dr Juan Scheun co-authored the research this article is based on.
We believe the intersection of architecture, design, and technology must center on people.
Human experiences have been and always will be the cornerstone of our profession. The digital tool set has enabled us to transcend traditional methods, providing more intuitive and empathetic ways to explore and innovate. Our goal is to transform the design process through empathy, by pioneering new methods of understanding.
HGA’s award-winning Digital Practice Group is organized around advancing the future of practice to create a positive, lasting impact through design. Leveraging a process centered around people and empowered by technology elevates our ability to iterate, understand, and communicate ideas through a widened frame of reference. Automating processes, leveraging data, developing proprietary software, and facilitating a collaborative and nimble environment allows the Digital Practice Group to navigate an ever-changing environment.
The Digital Practice Group is organized around individuals of diverse backgrounds, including architecture, engineering, visualization, and computer programming. The specialized nature of these individuals allows them to address complex problems through collaboration with HGA project teams. This enables HGA to evolve traditional practice, provide innovative solutions, and deliver better results to our clients and their communities.
Services
Our expertise includes:
- Computational Design
- Visualization
- Building Information Modeling (BIM)
- Fabrication
REACH Rondo
In the 1960s the Rondo neighborhood in St. Paul, Minnesota was divided by Interstate 94. The REACH Rondo project reimagines the space with a land bridge that would reconnect the Rondo neighborhood.
Computational Design
While designers traditionally rely on intuition and experience to solve design challenges, computational design enhances that process by encoding design decisions using computer language. Our Digital Practice Group works with project teams using visual programming tools such as Dynamo and Grasshopper to offer solutions to a wide array of project challenges. Data curation, performance analysis, machine learning, and generative design give us a unique understanding of the problems our clients are facing. This approach allows us to explore complexity with a curious nature and differentiates our process from traditional practice.
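To make the generate-and-evaluate loop described above concrete, the short Python sketch below enumerates louver options for a hypothetical façade and scores them against crude proxies. The geometry ranges, proxy formulas, and weights are invented for illustration; they stand in for the daylight and energy analyses a Dynamo or Grasshopper graph would normally drive, and do not represent HGA's actual tooling.

```python
# Minimal generate-and-evaluate sketch of a computational design study.
# Hypothetical example only: the scoring functions are stand-ins for real
# daylight or energy simulations.

def solar_gain_proxy(louver_depth_m: float, spacing_m: float) -> float:
    """Crude stand-in metric: deeper, tighter louvers shade more glass."""
    shading = min(1.0, louver_depth_m / spacing_m)   # fraction of glass shaded
    return 1.0 - shading                             # lower means less solar gain

def view_proxy(louver_depth_m: float, spacing_m: float) -> float:
    """Crude stand-in metric: deeper louvers obstruct more of the view."""
    return max(0.0, 1.0 - louver_depth_m / (2 * spacing_m))

# Enumerate design options (the sweep a visual programming graph would automate).
options = [
    {"depth": d / 100, "spacing": s / 100}
    for d in range(10, 61, 10)      # louver depth 0.10-0.60 m
    for s in range(30, 91, 20)      # louver spacing 0.30-0.90 m
]

# Score each option on a weighted combination of the two proxies.
for opt in options:
    gain = solar_gain_proxy(opt["depth"], opt["spacing"])
    view = view_proxy(opt["depth"], opt["spacing"])
    opt["score"] = 0.6 * (1 - gain) + 0.4 * view     # higher is better

best = max(options, key=lambda o: o["score"])
print(f"Best option: depth={best['depth']:.2f} m, "
      f"spacing={best['spacing']:.2f} m, score={best['score']:.2f}")
```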
Visualization
Virtual reality (VR), augmented reality (AR), and real-time rendering continue to transform the way we design. Using these tools to design at a one-to-one scale provides clients and building occupants the ability to inhabit a proposed design to better understand adjacencies, workflows, materiality, light, and scale. Our visualization experts allow us to communicate and evaluate design decisions and the impact they have on a space’s inhabitants.
Building Information Modeling (BIM)
HGA’s Digital Practice Group is nationally recognized for its Building Information Modeling (BIM) capability and experience with a diverse array of project types, scales, and complexities. Our BIM experts support project teams by creating an information rich infrastructure that ensures a nimble, iterative design process. This allows the design team and project stakeholders to virtually construct, test, and optimize a building before it’s built.
Fabrication
Our fabrication studio creates high-end models to help illustrate a designer’s vision with greater accuracy. Using in-house machinery, such as 3D printing, CNC milling, laser cutting, and engraving, we are able to quickly iterate and prototype our design solutions. This allows HGA to elevate the quality of our projects while making it easier for all project stakeholders to visualize the design in detail.
The Empathy Effect
The Empathy Effect: Mixed Reality for Design, a virtual reality (VR) experience simulating how people of different ages and abilities move through an environment designed by HGA Architects and Engineers, has earned an Honorable Mention from Architect Magazine’s 11th annual R+D Awards. | https://hga.com/services/digital-practice/ |
We’ve talked about the applications of VR, AR, and other real-time technologies in architecture before. It’s one thing to see 2D stills of a 3D project online, but it’s entirely different when you get to experience a rendering in the format it was designed for. As many of our readers are looking to break into 3D architectural visualization, we’ve gathered this list of seven upcoming conferences where you can see emerging breakthroughs for VR and AR in architecture firsthand.
Academy Days X
Dates: October 3-5, 2019
Location: Venice, Italy
Highlights: 100% focus on Architectural Visualization (ArchViz); Workshops on developing ArchViz presentations in Unreal Engine 4 (UE4)
This event is centered entirely around ArchViz technology and educating practitioners creating ArchViz projects. Epic Games, maker of UE4, recently became the main sponsor of this event hosted by State of Art Academy. This partnership means that there will be a significant focus on the application of UE4, a free software platform, to create immersive architecture projects. The speakers include experts from architects to game designers who can speak to changes in the presentation of architectural projects with real-time technologies.
American Institute of Architects (AIA) Conference
Dates: May 14-16, 2020
Location: Los Angeles, CA
Highlights: Unity workshops for AEC; 750+ product manufacturers; continuing education credit hours
AIA’s conference is one of the largest in the industry. This expo serves as a means to hear from the top practitioners in the field and get exposure to some of the newest technologies available. In addition to expert talks, Unity and other 3D developers will show how BIM data can be optimized with real-time experiences, showcase workflow improvements, and create interactive visual experiences for design presentation.
Video: A19 Recap from AIA Content Team on Vimeo.
AEC Next, in collaboration with Spar 3D Expo and Conference
Dates: June 3-5, 2020
Location: Chicago, IL
Highlights: Collaborative Creation Studio; Launch Pad for start-ups and emerging technology; Unity workshops for AEC
AEC Next is a 3-day educational and networking event designed to bring together thinkers, entrepreneurs, and leaders in the AEC industry. Their goal? To share best practices and emerging technologies in the industry. The conference aims to put companies with new technology solutions in front of industry gatekeepers and provide a forum for them to collaborate.
Video: BIM Cave to be Featured at NECA 2018 in Philadelphia from NECAnet on Vimeo.
VRX Conference and Expo
Dates: December 12-13, 2019
Location: San Francisco, CA
Highlights: Seminar: Design & visualization: Proven ROI for the AEC industry; focus on finding new fields for start-ups
The VRX conference and XRDC (next on our list) are not strictly focused on the AEC industry, but on bringing real-time experiences into new industries, such as AEC. VRX strives to teach studios and designers about trends in the realm of immersive experiences and how they can be used to improve their business. Given its location in Silicon Valley, the conference has a significant focus on educating businesses and investors on tech start-ups working in VR that are seeking business from industries where VR is under-utilized, such as AEC.
XRDC
Dates: October 14-15, 2019
Location: San Francisco, CA
Highlights: Mixed Reality Development for Architecture; Seminar “Show Them the ROI: How AR is Transforming Decades Old Business Processes to Deliver Impressive Results”
While VRX (listed above) is aimed towards linking start-ups with new businesses and investors, XRDC is focused on educating established industry leaders through use cases of immersive technologies like VR and AR. The conference includes leaders from companies as diverse as Airbnb, Apple, Epic Games, Google, Marvel, NASA, Oculus, Pixar, Unity, and Valve. These experts will share how real-time technology is changing their business and the new opportunities that real-time experiences can provide. With an attendance track focused on enterprise applications, there is a wealth of opportunities to learn about how mixed reality media can benefit AEC.
Architecture of the Future
Dates: October 9-11, 2019
Location: Kyiv, Ukraine
Highlights: Day of speakers dedicated to disruptive technologies shaping the new architecture; International perspectives on AEC
Featuring a number of speakers from studios small and large, this conference provides a real deep-dive on technological advances shaping the AEC industry. With a heavy focus on the expo side for vendors, this conference is organized like a summit. The goal is to bring together experts and leaders to talk about technological developments in AEC, featuring speakers focused on VR and the role it can play in presenting AEC projects.
Immersive Architecture Asia
2020 Dates and Location Not Yet Released
2019 Highlights: Workshops on Architectural Model Photogrammetry (AR); VR/AR experience rooms; Focus on AR, VR, MR and AEC
Founded in 2019, the 2020 dates for this conference are not yet live. But the fact that there is a conference entirely focused on VR/AR experiences relevant to the AEC industry is extremely exciting. We think this is a conference to follow because sessions in 2019 focused on allowing AEC and VR professionals to test hardware and software meant for architectural design. | https://blog.moduluc.com/6-aec-conferences-featuring-real-time-technologies-happening-now/ |
Scientific research of the 21st century is highly collaborative. Science teams are getting large and distributed around the world. While this is necessary to solve wicked problems, it poses difficulties for teams due to spatial and temporal dimensions of the teamwork. This project will focus on the design and development of scientific collaboration tools for a principal astrobiology data analysis tool for Mars 2020, NASA's next Mars rover that will be used by a distributed science team. The software will be developed in collaboration with, and delivered to NASA, for use by geologists, geochemists, and astrobiologists participating in the Mars 2020 mission. The application to be developed takes an interactive visual analytics approach and will allow a team of scientists to interact with statistical and machine learning output.
These include building or adapting a version control system to maintain versioning of the science datasets and analysis files; iterative design and development of annotation tools as well as deep linking for seamless collaborative work experiences among scientists. The researcher will conduct think-aloud sessions and eye tracking to investigate how expert and novice scientists interact with the data using the visualisation software.
The project will also investigate using immersive technologies such as AR and VR for collaborative data visualisation, and evaluate how collaboration tools and interactive data visualization help scientists explore their data more effectively, and ultimately gain new insight into data.
The Planetary Instrument for X-ray Lithochemistry (PIXL) is a micro-X-ray fluorescence spectrometer selected to fly on the Mars 2020 rover. PIXL is a next-generation instrument designed to map the elemental composition of rocks on a micrometer scale. PIXL, and other instruments like it aboard Mars 2020, generate large volumes of data that will be downlinked and analyzed on compressed decisional timelines. The complex nature of the data and tight tactical timelines have led the PIXL science team to develop several prototype visualization tools. The collaborative features will be integrated into a visualisation software tool that is being designed to enable the mission science team to rapidly and collaboratively analyse data downlinked from contact science instruments.
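As a rough illustration of what the collaborative features might involve, the Python sketch below models a shareable annotation pinned to a specific dataset version, with a deep link teammates could open to land on the same view. The field names and link format are assumptions made for this example, not the actual PIXL software design.

```python
# Hypothetical sketch of a shareable annotation record for a collaborative
# visualisation tool. Field names and the link format are illustrative
# assumptions only.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Annotation:
    dataset_id: str          # which downlinked scan the note refers to
    dataset_version: str     # pin the note to a specific version of the data
    region: dict             # e.g. a point selection within the scan
    author: str
    text: str
    created_utc: str

    def deep_link(self) -> str:
        """A URL-style link a teammate could open to reach the same view."""
        return (f"pixlviz://dataset/{self.dataset_id}"
                f"?version={self.dataset_version}"
                f"&x={self.region['x']}&y={self.region['y']}")

note = Annotation(
    dataset_id="scan_0042",
    dataset_version="v3",
    region={"x": 118, "y": 240},
    author="geochemist_a",
    text="Possible Ca-sulfate vein; compare with scan_0041.",
    created_utc=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(note), indent=2))  # what would be synced to the team
print(note.deep_link())                    # what would be pasted into chat
```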
Research activities
- Conduct background research
- User-centred design
- Designing and running laboratory studies
- Development of a prototype environment (AR/VR)
Outcomes
- A prototype environment (AR/VR) for data visualisation
- Reports on the results of experimental studies
Skills and experience
- Software development
- User experience research
- Teamwork and communication skills
- HCI
Scholarships
You may be able to apply for a research scholarship in our annual scholarship round.
Contact
Contact the supervisor for more information. | https://www.qut.edu.au/research/study-with-us/student-topics/topics/designing-and-evaluating-immersive-scientific-visualisation-software |
With increasingly better renderings becoming ubiquitous, students and architects alike feel the pressure of mastering an additional set of skills to get their ideas across. To what extent do renderings make or break a portfolio or a project? How important are they in the design process, and do renderings inform of a particular set of skills besides the software ones? This article explores different perspectives on the role of renderings within the profession.
Attention-grabbing renderings appear to be everywhere, from architectural media to billboards, leaving architects with a strong incentive to try to emulate this type of visualization within their work. However, rendering is a tool that can serve multiple purposes, from storytelling to a strategic communication of skills and intent to the everyday exploration of design options. As digital tools are constantly evolving, architecture needs to experiment with the techniques across an extensive array of design processes, in order to discover where are the most important creative opportunities.
Visualization Artist as a Trade
In today's hyper-specialized world and with the design becoming increasingly complex, the architect can rarely be a Jack of all trades. From sustainability consultants to BIM managers, the profession relies on the knowledge of actors specialized in a specific area of the design process, with architects being the creators and curators of the overall vision. Therefore, it is worth keeping in mind that many practices turn to seasoned 3D artists to showcase their designs in the most favorable light, especially when it comes to competition renderings and high-stakes projects. These professionals have dedicated countless hours to master not only software but composition, atmospheres, and entourage. This is not to say that one should not work towards developing new skills or that architects can't produce exquisite renderings, but that there is more to it than light settings and material mapping.
Rendering as Language
As Luxigon's founder Eric de Broche des Combes said in an interview with ArchDaily, "like hand drawing, the process of making an image is intellectual, not technical". Just like any other form of architectural representation, renderings are a means to convey ideas and concepts. The tradition of Beaux-Arts schools nurtured the artistic quality of architectural drawing, while today's universities organize lectures and entire workshops teaching students how to create images that best explain their projects.
Within the attention economy, projects ultimately compete with each other through images. The better the visualization, the greater the chances of the design getting noticed and gaining traction. This is true for both academia and practice, as it is harder to keep a critical eye in the face of compelling, realistic atmospheres. In the words of des Combes again: "believing in what you see leads to a form of acceptance that removes a large part of critical thinking." In this sense, renderings can shift the perception regarding a project independently of its design qualities, and architects need to become more aware of this unconscious bias.
What Renderings Say About their Author
For recent graduates, who in most cases have little significant previous experience to weigh in a prospective employer's selection process, portfolios are of utmost importance. Furthermore, since it is common knowledge that applications are reviewed in a very short time, it is essential to get the design potential across at a glance, through bold imagery. In response, some rendering artists organize portfolio reviews on their YouTube channels, giving valuable input to students and young architects.
Surprisingly, in most cases, the critique is not centered on the technical aspects of the renderings but on qualities that are intrinsic to any visualization technique. Composition, color, balanced entourage are universal, and so is storytelling. Architects need to be strategic about how renderings express the overall design process, avoiding inconsequential images. Most importantly, renderings inadvertently tell an experienced viewer of the architect's ability to curate information and distill a concept's essence. Moreover, they offer clues about the knowledge regarding composition and color theory.
Renderings in Everyday Practice
It is in the daily work of an architect that renderings as accurate representation come into play. From precise shadow studies to multiple iterations of architectural details, renderings can serve more than marketing purposes, informing the design team of the architectural object's various aspects. Renderings can, therefore, be used not only as a means to communicate with stakeholders but as a tool for evaluating design options. Like work-in-progress drawings, this kind of image helps speed up the decision-making process thanks to the ability to simulate the actual materiality and light. Testing how a façade detail would be seen from different angles, figuring out color schemes and patterns across different scales of the project help designers make better-informed decisions and facilitate communication with the client. Moreover, with real-time rendering becoming more commonplace, architects will have at their disposal an increasingly faithful depiction of their design.
A worker walks across a platform, 18 stories above the street. She moves with her welding equipment toward the corner junction she’s been working to connect for the last few days. As she nears the corner, she slips and tumbles off. Luckily, her safety harness kept her from falling, but it seems that there is a fray in the webbing around her leg–a fray that doesn’t look good. But she’s ok–she removes her VR headset and talks through the situation with her safety trainer to learn how this could have been avoided.
Astute AEC professionals, especially those with management aspirations at major firms, need more than just sketchpad skills and studio hours to understand all of the requirements involved in managing projects at a large construction firm. While design is a key part of the process of turning an idea into a physical structure, construction site management becomes a critical skill to help ensure that an architect’s vision reaches completion. Safety at a construction site is of critical concern, and new immersive technologies can help firms manage constructions sites more safely. While we talk a lot at Moduluc about the benefits provided by VR/AR in the design and presentation process, it is important to also take note of how these same technologies can be used by AEC firms to make their sites safer.
These technologies can provide solutions to critical issues for firms. Immersive media can provide safety training which exposes workers to variables impossible to introduce in a physical environment, capture a greater amount of data to improve individual and organizational safety protocols, and provide practical cost-cutting measures to improve overall efficiency. Properly implemented, immersive media can present tangible benefits to firms seeking to improve their business.
Ability to Take Risk
Murphy's Law, that whatever can go wrong will go wrong, has a way of sneaking onto a construction site. No matter how many controls and regulations are in place, random events can, and will, happen. So how do you train workers for this without unnecessarily exposing them to harm?
VR/AR training allows for risk to be injected into training that would not be possible in a physical environment. As an example, a well-done VR/AR experience can truly make someone feel like they are working on a steel beam 20 stories above the ground while safely in a classroom environment. Or a wrench could (literally) be thrown into a welding scenario, without having to expose the trainee to unneeded risk. The ability to introduce variables also prevents the ever-too-common “teaching to the test” dynamic, as well as limiting the tendency of trainees to “learn the hard part” in order to pass the training. Immersive experiences can place trainees into life or death scenarios that can make them more effective at responding to challenging situations- preventing injuries or worse in the process.
Data
Conducting virtual training makes it easier to assess the performance of individuals, and the collective group at the same time. With this information, managers can assess which individuals need more work in certain areas, and also see broader trends across the workforce which may require additional attention. Because VR/AR experiences usually allow for greater repetition, the pool of available data is deeper and more comprehensive. With this data, managers can assess current levels of risk while also working to mitigate developing concerns.
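A minimal sketch of this kind of aggregation is shown below in Python. The module names, scores, and pass threshold are invented for illustration and do not reflect any particular vendor's data schema; the point is simply how per-session scores can flag both individuals needing retraining and workforce-wide weak spots.

```python
# Illustrative sketch only: aggregating VR training-session scores to flag
# individuals for follow-up and spot workforce-wide weak areas.
from collections import defaultdict
from statistics import mean

sessions = [
    {"worker": "A. Lee",   "module": "harness_inspection", "score": 62},
    {"worker": "A. Lee",   "module": "fall_arrest",        "score": 88},
    {"worker": "B. Ortiz", "module": "harness_inspection", "score": 71},
    {"worker": "B. Ortiz", "module": "fall_arrest",        "score": 93},
    {"worker": "C. Khan",  "module": "harness_inspection", "score": 58},
    {"worker": "C. Khan",  "module": "fall_arrest",        "score": 90},
]
PASS = 75  # assumed passing score for the example

# Flag individuals who need retraining on a specific module.
for s in sessions:
    if s["score"] < PASS:
        print(f"Retrain {s['worker']} on {s['module']} (scored {s['score']})")

# Look for organization-wide trends: a module everyone struggles with.
by_module = defaultdict(list)
for s in sessions:
    by_module[s["module"]].append(s["score"])

for module, scores in by_module.items():
    avg = mean(scores)
    if avg < PASS:
        print(f"Workforce trend: average {module} score is {avg:.0f}; "
              "review the training content or the site procedure itself.")
```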
Cut Costs
Cost-cutting measures are not just for the bottom line; they also can help a firm have more flexibility to invest capital in other areas, including safety. Not only can VR/AR tools be integrated into a firm’s design process to streamline development, but they can also help make safety training more efficient and valuable to workers.
A basic example is harness inspection. In their most recent report, the Bureau of Labor Statistics reported 386 fatalities caused by falls, slips, or trips at construction sites in 2017. Harness inspection is a common training requirement for anyone working at a construction site, with serious implications if a worker is not proficient. But to conduct proper inspection training, workers need to be exposed to what “wrong looks like” on the harness. They need to inspect burns, cuts, tears, and frays and check the function of buckles and straps. Instead of buying these items just to then damage them for training purposes, firms can conduct the training in VR/AR while exposing their workers to the full range of possible problems with their equipment.
VR/AR training can reduce transportation and expenses associated with lost productivity as well. No longer do workers need to go to a separate site just for training, taking them away from the project. A VR/AR training module can be brought to the site and provided to workers right at their location. This also allows for site managers to quickly re-train workers who had a safety infraction. That person could be directly moved into a VR/AR simulation, re-certified on the training for their safety issue, and then put back to work with less loss of productivity.
Conclusion:
Beyond the safety considerations, VR/AR is appealing to younger workers who are more familiar with virtual environments. Firms with VR/AR experiences as a core part of their safety training programs can attract workers who see virtual training not just as a requirement, but as an attractive part of the work experience. It is our assertion that AEC firms that integrate VR/AR into their design and safety protocols increase their chances of differentiating themselves from their competitors and creating more efficient workflows throughout the building process.
Design communication tools continue to push our industry. The infusion of Mixed Reality (AR/MR/VR), live modeling, and BIM 3D rendering in our projects allows Arrowstreet’s design teams and clients to step into a space from a fixed point as well as actually feel and experience the space in its entirety long before it’s built. Our mixed reality studio, known as Arrowstreet Innovation + Research (AIR), has gone beyond visualization to interactive simulation to predict how people will move through and use a space.
We incorporate gaming techniques that offer a more intuitive and engaging experience paired with data analysis to simulate how people will move through spaces and real-time feedback, allowing users to share thoughts and engage with the space on their own time. Ultimately, we seek to give everyone a voice by making ideas, buildings, and our cities visible and understandable to all.
Respondents:
- Patrick McCafferty, PE, LEED AP, Associate Principal and Education Business Leader, Arup, Boston
- James Michael Parrish, PE, Associate Vice President, Department Manager Electrical, Lighting, Technology, Dewberry, Peoria, Ill.
- Tom Syvertsen, PE, LEED AP, Project Manager, Associate, Mueller Associates, Linthicum, Md.
- Kristie Tiller, PE, LEED AP, Associate, Team Leader, Lockwood Andrews & Newnam Inc. (LAN), Dallas
- Randy C. Twedt, PE, LEED AP, Associate Principal/Senior Mechanical Engineer, Page, Austin, Tex.
- Casimir Zalewski, PE, LEED AP, CPD, Principal, Stantec, Berkley, Mich.
From your experience, what systems within a college or university project are benefiting from automation that previously might not have been?
Tom Syvertsen: Since providing energy metering can help secure LEED credits, we have seen more projects with metering that provide the owner with energy usage data that they may not have previously had. Recently, we designed a few projects with a system that would allow the university to reduce air flow during favorable CO2 conditions.
Casimir Zalewski: With the advent and acceptance of BACnet, more and more systems have a common method for communication. In the past, there could easily have been multiple control systems within a building and even more monitoring systems at the institution’s central control; today, many of these separate HVAC, plumbing, fire protection, security and similar systems can communicate with one another. As colleges and universities continue to struggle with the dollars associated with maintenance staff, the ability to simplify control and monitoring with tiered notifications and alarms helps offset reduced maintenance staffing levels and budgets. Monitoring and automation technology has continued to expand and, with it, buildings have continued to become smarter. Memory and data storage have become more economical, allowing greater amounts of information to be tracked, stored and compared. This analysis allows for greater real-time diagnostics of a building’s health and efficiency, potentially reducing overall operating cost.
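The tiered notification idea can be illustrated with a generic sketch like the one below. This is not a BACnet client, and the point names, limits, and escalation rules are invented for the example; a real campus system would read these values over BACnet and route notifications through its monitoring platform.

```python
# Generic illustration of tiered notification logic over monitored points.
# Point names, limits, and escalation tiers are invented for the sketch.

POINTS = {
    "AHU-3 supply air temp (°C)": {"value": 19.2, "low": 11.0, "high": 18.0},
    "CHW plant differential pressure (kPa)": {"value": 96.0, "low": 80.0, "high": 140.0},
    "Lab 214 exhaust airflow (L/s)": {"value": 310.0, "low": 350.0, "high": 600.0},
}

def tier(value, low, high):
    """Return an escalation tier: 0 = normal, 1 = notify, 2 = alarm."""
    if low <= value <= high:
        return 0
    span = high - low
    overshoot = (low - value) if value < low else (value - high)
    # More than 20% outside the band escalates from 'notify' to 'alarm'.
    return 2 if overshoot > 0.2 * span else 1

for name, p in POINTS.items():
    t = tier(p["value"], p["low"], p["high"])
    if t == 1:
        print(f"NOTIFY on-call technician: {name} = {p['value']}")
    elif t == 2:
        print(f"ALARM facilities manager and technician: {name} = {p['value']}")
```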
What types of system integration and/or interoperability issues have you overcome for these projects and how did you do so?
Casimir Zalewski: Many projects have multiple manufacturers and automation systems operating within a project. The existing communication infrastructure, computing power and storage capabilities of these automation systems can be a roadblock with integration and automation. Typically, engaging industry experts on the equipment, legacy controls and communication and trade professionals who will be responsible to tie the system together is essential to overcome integration and interoperability on partial reuse projects. These individuals can help the engineer understand the existing system capabilities, the products that are compatible with existing equipment and infrastructure and what information may need to be shown in the construction documents for a successful project.
Is your team using building information modeling (BIM) in conjunction with the architects, trades and owner to design a project?
Kristie Tiller: Yes, we use BIM on every project we design. This allows better coordination across disciplines, therefore reducing change orders during construction.
Casimir Zalewski: It has been common practice to utilize BIM in our everyday project development. The extent and depth of detail is determined on a project-by-project basis. The type of project, the design and construction team and the client all play a role in determining the appropriate amount of BIM.
Tom Syvertsen: Yes, BIM is used on nearly every single higher education project we engineer and it aids in coordination with the architects and other disciplines. It can also be used as a tool to demonstrate the design to the owner as the design process progresses and ultimately, for ongoing stewardship and maintenance.
Have you included virtual reality or augmented reality in the design of such a project? Describe the application of such tools.
Casimir Zalewski: AR and VR continue to become more prevalent in our design process. While spaces were once traditionally defined as lines on paper with some renderings or sketches early in design, the use of 3D modelling, AR and VR has helped facility and user personnel understand what a space could be. It has allowed us to receive real-time feedback from different stakeholders on how the space could be improved to help them do what they need to do. Our design charrettes feature AR/VR more prominently in lieu of static images.
Tom Syvertsen: VR technology has been used by the design team to help walk clients through their buildings. These clients are able to make design decisions by what they can see while “inside the building.”
What types of smart building technologies are you specifying to allow for remote monitoring or to combat COVID-19 challenges?
Kristie Tiller: The biggest issue with controls and technology is making sure the existing systems are working as expected. COVID-19 is shining a light on all the things that were just a little bit out of spec and now must be addressed. There’s so much data coming out of the Centers for Disease Control and Prevention and World Health Organization that it’s difficult to determine the best path forward from day to day. So, we suggest to owners that they get everything working as designed as a baseline and then modify operations based on the latest findings from the experts.
How has “bring your own device” affected the design of technology systems in campus buildings?
Tom Syvertsen: The electrical and HVAC capacity of classrooms, breakout rooms and even lounge spaces has increased. We have designed buildings where there are receptacles at every classroom seat and every lounge seat. We have increased HVAC loads due to the heat output of laptops that are now brought to class on a regular basis.
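As a back-of-envelope illustration of that added load, the short calculation below assumes a laptop at every seat; the wattage and diversity factor are assumptions for the example rather than code-mandated design values.

```python
# Back-of-envelope illustration (assumed numbers): the extra plug and cooling
# load when every seat in a classroom charges a laptop.
seats = 60
watts_per_laptop = 65          # assumed typical charger draw
diversity = 0.8                # assume not all devices draw peak power at once

plug_load_w = seats * watts_per_laptop * diversity
cooling_btuh = plug_load_w * 3.412          # W -> BTU/h
cooling_tons = cooling_btuh / 12_000        # BTU/h -> tons of refrigeration

print(f"Added plug load: {plug_load_w:.0f} W")
print(f"Added cooling load: {cooling_btuh:.0f} BTU/h (~{cooling_tons:.2f} tons)")
```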
How has your technology team worked with facility managers to implement security technology (biometrics, card-scan, etc.) in college and university projects?
Casimir Zalewski: The use of card readers and building/room level access continues to become more and more prevalent. Many buildings are moving to extended operation with more and more restrictions on free access to either the building in general or regions/rooms within the building. More planning is required earlier in the project to determine what infrastructure is needed at which location for when the building first opens and over time as technology continues to evolve and more and more focus is placed access control.
The Walt Disney Studios
The Walt Disney Studios Immersive Production Technology (IPT) team is looking for an outstanding candidate to serve as a Producer. This role will help IPT to realize its strategic mission of finding better ways to make our movies today and developing new ways to experience our stories tomorrow. As part of this, this role will work seamlessly with key internal stakeholders and external innovation partners to deliver cutting-edge immersive (AR/VR/XR) digital experiences and tools.
To be successful, the Producer must be innovative, curious, a quick learner, and have a good understanding of the key technologies that will drive and advance the ideation, production, management and delivery of Disney content and experiences. The Producer must have solid communication abilities and work well in complex cross-functional teams. The ability to be flexible and thrive in a dynamic and fast-paced environment to meet the needs of our business is critical to success.
Responsibilities :
- Drive the planning and execution of high-complexity creative technology initiatives to deliver new immersive XR experiences for the Disney Studios Content.
- Launch new XR experiences across a robust slate, working with technical management, studio creative, marketing and legal to fulfill the go-to-market strategy.
- Oversee the scope, schedule, and budget for multiple simultaneous digital content development projects of varying size and complexity.
- Collaborate with Creative & Technical Leads to prioritize scope and delivery options without compromising the integrity of the creative vision.
- Manage technical and creative project resources within budget and within schedule.
- Identify and design best practices, processes, and procedures pertaining to the experiences production pipeline, implementation, and maintenance.
Operational Skills:
- Act as experiences product manager to define and align XR Product Roadmaps, clearly communicating the connection between vision, strategy, objectives, and roadmap
- Create and maintain product requirements, project scope, schedule, budget, staffing plans and product/project roadmaps
- Build, lead and motivate diverse teams that include creatives, various types of technologists, vendors, operations, logistics, and shared services
- Manage budgets, contracts, purchasing, invoicing and finances across projects
- Plan and execute internal and external play tests
- Confidential asset management experience between internal and external groups
- Drive communication across team members, project leads, executives, and project stake holders, creating and delivering information in different methods including verbal, written and visual
- Source, identify, and manage external vendors across multiple projects
Basic Qualifications:
From Insight to Action: Three Ways Design Computation Empowers Better Decision-Making
Computational design tools are an integral part of the design process. These technologies allow planners and designers to translate reams of data into actionable insights for clients. They also give key stakeholders, like the community, more transparency into the design process and greater opportunities for collaboration and co-design.
Computational tools—which harness the power of computation to streamline decision making—were once considered “nice to have.” Now they are integral to the design process. So why should clients care?
The reason is simple. Computation gives planners and designers the ability to quickly translate thousands or even millions of data sets into actionable insights. Not only does this lead to better engagement with clients and the community, it also creates more successful projects.
While important to all aspects of design, it is especially relevant to planning neighborhoods, districts and cities. Here, we explore three main opportunities—and corresponding real-world examples—for the use of computational tools in urban planning projects.
Simplify the Design Process to Create More Tailored Outcomes
Opportunity: Computational tools can simplify the planning and design process by allowing project teams to organize and analyze mountains of data sets into leverageable insights.
Example: At Louisiana State University (LSU) in Baton Rouge, planners were tasked with developing a comprehensive long-term master plan grounded in data. Using computational tools, the project team was able to translate over a terabyte of data related to land use, groundwater information, topography, trees, and use and condition data about each building and room on campus into models. These models quickly showed how planning decisions would affect physical space and identified use patterns and opportunities. Further, the insights helped the university decide which facilities could be renovated or replaced, pinpoint the best areas for new investments, identify the most strategic targets for limited capital funding, and budget for the most impactful interventions on its historic land-grant campus.
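A drastically simplified sketch of that kind of translation is shown below; the column names, index values, and thresholds are invented for illustration and are not the actual LSU dataset or decision rules.

```python
# Hypothetical sketch of turning facility data into a planning signal.
# Names, values, and thresholds are invented for the example.

buildings = [
    # facility condition index (0 = new, 1 = poor), average room utilization
    {"name": "Hall A", "condition_index": 0.62, "utilization": 0.35},
    {"name": "Hall B", "condition_index": 0.18, "utilization": 0.82},
    {"name": "Hall C", "condition_index": 0.55, "utilization": 0.74},
]

for b in buildings:
    if b["condition_index"] > 0.5 and b["utilization"] < 0.5:
        b["signal"] = "candidate for replacement or consolidation"
    elif b["condition_index"] > 0.5:
        b["signal"] = "high-use but deteriorating: prioritize renovation funding"
    else:
        b["signal"] = "maintain"
    print(f"{b['name']}: {b['signal']}")
```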
Deepen Community Engagement, Co-Design and Input
Opportunity: Computational tools can make the planning process—and outputs—empathetic by giving communities more transparency into the design process, and more opportunities to provide feedback and build consensus with other stakeholders.
Example: On the LSU project, a 24-7 data exchange portal allowed planners to get input from students and staff on how they travel throughout the campus, including their typical paths and modes of travel, and note how they feel while moving across campus. On another project, the Wilburton Commercial Area plan, an upzoning planning study in Bellevue, WA, citizens advisory committee members were able to mark up a 2D map of the area with crayons which became automatic inputs for 3D tools, generating different city forms based on the land use ideas. This rapid visualization enabled quick iteration to build consensus around numerous differing inputs and collectively determine next steps.
Empower Clients to Make More Informed Decisions
Opportunity: Computational tools make the design process more collaborative by providing clients with the tools to make objective and informed decisions.
Example: The Oak Ridge National Laboratory in Knoxville, TN—the largest US Department of Energy science and energy laboratory—needed to develop an interactive 3D GIS-based decision-making tool to guide its multi-year planning and budgeting process for facilities and supporting infrastructure on the 300-acre Experimental Gas-Cooled Reactor (EGCR) campus. In response, the planning team created a tool with an easy-to-use interface that allows a user to easily manipulate physical campus planning scenarios and test and compare development options for feasibility and cost implications. The tool is now being used by the client team to test out potential sites on their campus to locate development projects as the need arises.
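The sort of trade-off such a tool surfaces can be sketched as a simple weighted comparison, as below. The scenario names, costs, and weights are assumptions for the illustration, not ORNL's data or scoring method; a real tool would also normalize the criteria before weighting them.

```python
# Illustrative comparison of siting scenarios for a decision-support tool.
# Scenario data and weights are assumptions for the sketch.

scenarios = {
    "Reuse EGCR pad":       {"capital_cost_m": 42.0, "schedule_months": 18, "utility_upgrades_m": 6.0},
    "Greenfield site east": {"capital_cost_m": 55.0, "schedule_months": 24, "utility_upgrades_m": 14.0},
}

weights = {"capital_cost_m": 0.5, "schedule_months": 0.3, "utility_upgrades_m": 0.2}

def score(s):
    # Lower is better for every criterion, so a lower weighted total wins.
    # (A real tool would normalize units before combining them.)
    return sum(weights[k] * s[k] for k in weights)

ranked = sorted(scenarios.items(), key=lambda kv: score(kv[1]))
for name, s in ranked:
    print(f"{name}: weighted score {score(s):.1f}")
print(f"Preferred under these weights: {ranked[0][0]}")
```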
One important thread that weaves through the examples above is the growing interdependence between designers and planners, and the tools they use. The artful interweaving of data and information with empathy and intuition can improve our urban environments and create lasting results for clients and the community. | https://www.nbbj.com/ideas/from-insight-to-action |
In modern construction projects, architects, engineers, contractors, and owners who are actively involved in projects use different methods of visualization, such as building information modeling (BIM), physical mock-ups, virtual reality (VR), and augmented reality (AR), to support the conceptualizations, representations, and final appearances of their design ideas. These approaches can support design visualization for decision making before final construction. However, unlike BIM visualization where users can only interact with a virtual environment and physical mock-ups where the interaction is only with physical design components, AR can merge physical conditions with computer-generated visual information. This may aid professionals in the architecture, engineering, and construction (AEC) industries during design and constructability review sessions. This research studied the use of AR through a structured design review activity involving current industry practitioners in design and constructability review sessions and tested whether the combined virtual-physical nature of AR facilitates the same actions and outcomes suggested by prior work that used virtual reality-based mock-ups or physical mock-ups. In addition, this study identified and analyzed various actions such as decision making, problem solving, and design alternatives that occur as users interact with AR on different mobile computers. This analysis provides an understanding of how different mobile computers, such as wearables and handheld devices showing the same technical AR environment, can lead to different actions among users. This research found that AR can facilitate some of the actions of virtual reality and physical mock-ups in design and constructability review sessions, including decision making, design alternatives, and descriptive, explanative, and problem-solving actions. In addition, different mobile computers led to different observed actions during the design review sessions. For example, handheld devices between 15.24 cm (6 in.) and 25.5 cm (10 in.) enabled more decision-making actions than any other device tested. As additional testing is completed, future findings may be compared with those presented in this work to determine the actions that are consistently seen with AR in design and constructability review sessions. Eventually, this may provide a valuable tool to allow future researchers and practitioners to strategically plan for AR technology use based on what specific human actions are desirable for a given application. | https://asu.pure.elsevier.com/en/publications/mobile-augmented-reality-to-influence-design-and-constructability |
Description: MPL is a multidisciplinary eLearning XR services company with over 14 years’ experience in Colombia and in global projects in the region. We are a group of professionals passionate about knowledge, and even more so in being able to transfer it. We work collaboratively, covering instructional design, consulting, visual design, technology use and development, and user experience to facilitate knowledge transfer to our corporate, education, and government clients. Our projects have supported continuous improvement, productivity, and efficiency, reaching more people in less time and with a cost reduction of up to 40% compared to in-person scenarios. MPL has a portfolio of eLearning 4.0 services that integrate leading-edge technologies and methodologies for knowledge transfer in four business lines:
- INVENZA: immersive XR technology solutions for adaptive learning, integrating AR, VR, MR, and 360° panoramic videos and images in the cloud, viewed from an app
- ASIRA: virtual and immersive cloud platform service of collaborative learning environments
- eDesign: development of highly interactive customized virtual content and off-the-shelf training
- ROCÍN: virtual multicultural Spanish-as-a-foreign-language program for secondary schools in Europe, Brazil, and the United States
| https://ec2-35-176-109-236.eu-west-2.compute.amazonaws.com/directory-of-companies/mpl-elearning-xr-services/
Dan Ariely, the author of “Predictably Irrational” and the James B. Duke Professor of Psychology and Behavioral Economics, writes about his personal experience of pain management by hospital staff, detailing the pain he suffered when nurses had to change the bandages treating his severe burns, which were caused by an accident in his youth.
He mentions that the nurses went about this task quickly and even roughly, because they were affected by the pain they were causing him. Prof. Ariely’s suffering was plainly visible and audible to the staff, who were in fact inflicting it on him for the sake of his treatment. He raises an interesting question about the possibility of conflicting interests between the caring nurses and the patient when it comes to pain assessment, evaluation, control and management.
A nurse’s difficulty in performing procedures that are painful to patients, even when it is for the patients’ own good, is not often discussed or studied. Contrary to what some unfortunate patients might think, medical professionals generally can’t stand the idea of harming their patients (as Prof. Ariely found in his research on the subject). This conundrum becomes even more difficult for nurses in the ICU and in anesthesia units.
The Challenges of Pain Management in the ICU and Anesthesia Units
Pain management is one of the biggest challenges for nurses in critical care and surgery settings, as many factors must be weighed when deciding on the correct treatment.
Patients who end up in the ICU often suffer from multiple conditions, making pain management more complex than it would seem. In addition to the variety of ailments that need to be treated, pain management in the ICU can be challenging because of inadequate tools, a high ratio of patients to healthcare staff, and poor response to pain medications. Another issue is communication with patients regarding their pain levels, which cannot always be taken for granted: critically ill patients are not always able to communicate verbally, making pain assessment by the staff even more challenging.
Pain management and sedation during and after surgery are complex, as one must consider limitations imposed by the surgery as well as interactions with drugs the patient takes for other conditions. Another challenge in postoperative care is selecting the best analgesic protocols for helping patients tolerate the discomfort after surgery. One study recommends an evidence-based evaluation as well as collaboration among all involved healthcare staff (e.g., anesthesiologists, surgeons, nurses, and physiotherapists) to develop and integrate protocols for specific surgical procedures. Such research would also help further develop advanced approaches to postoperative patient-controlled analgesia (PCA).
A study found that pain management continues to be a serious challenge that many hospitals need to address. However, methodical documentation and intervention for pain management have been found effective. Guidelines and assessment scales were developed to help nursing staff conduct better pain management interventions.
Guidelines for Effective Pain Management in the ICU
Each healthcare facility is responsible for developing its own protocols, but several guidelines are considered standard (and even mandatory) by the medical community. Here are some examples:
Assess pain routinely – four times per shift
Nurses are advised to assess their patients' pain levels on a routine basis. Doing so several times per shift ensures that nurses stay up to date and can respond to changes in pain levels and to the patient's response (or lack thereof) to any treatment.
Get a personal report whenever possible, if the patient can communicate.
There will be times when a patient's self-described pain level differs from what a nurse typically sees in someone suffering from the same condition. It's generally best to take the patient's word – no one knows what they're feeling as well as they do.
Use scales and tools for pain assessment of nonverbal patients.
Unfortunately, many patients in the ICU will be unable to communicate clearly or at all. In these scenarios, medical professionals should rely on scales (or scores) to assess the level of pain.
A study found that using a nonverbal pain scale in the ICU improved patients' satisfaction with their pain experience during hospitalization. The study also found that pain was documented better and that nurses felt more confident in their assessments of pain levels.
The Behavioral Pain Scale (BPS) and the Critical Care Pain Observation Tool (CPOT) are considered reliable tools by the medical community.
Treat pain with or without drugs, as the case warrants.
Pain management doesn't always require drugs. In fact, some patients prefer to avoid unnecessary drugs because of side effects or personal inclinations. Relaxation therapy, heat and cold therapy, and similar techniques are all valid options for managing mild pain. For more severe pain, drugs may be administered.
Treat pain first, then sedate.
You want to avoid sedating a patient while they are still in pain. Ideally, they should be comfortable before and after sedation to reduce stress and trauma.
Know when to use an incremental bolus or continuous infusion during intravenous fluid regulation for pain management.
The goal of pain management in the ICU and in anesthesia units is to make patients as comfortable as possible, quickly. A continuous infusion therefore isn't appropriate for patients in severe pain, because relief arrives too slowly. Instead, an incremental bolus should be used to get the effective dose into their system quickly, so you can assess their pain and make adjustments sooner.
However, in cases where deep sedation must be maintained, one study found advantages to the continuous infusion technique (the research examined maintaining sedation during surgery), although both techniques produced satisfactory conditions.
Switching between an incremental bolus and continuous infusion, especially when administering multiple drugs, is not only a hassle – it creates more opportunities for safety risks and human error. Pain management can be easier, safer, and far more effective with a minimal-residual-volume luer-activated stopcock, such as the Marvelous™ stopcock.
How the Marvelous™ Stopcock Assists with Pain Management
The Marvelous™ stopcock helps with drug administration thanks to its self-flushing function and the fact that its internal volume has little "dead space" for drug accumulation. This prevents residue from a previously used drug from remaining in the stopcock, and prevents any liquid from remaining and creating a breeding ground for bacteria.
Working with Marvelous™ also reduces stopcock manipulations and, with them, the room for medical error. By eliminating some of the tasks associated with traditional stopcocks, medical staff can devote more attention to their patients, assess their pain levels, and know that drugs are being administered properly.
"I like MarvelousTM/ due to its self-flushing feature that helps us flush out any potential drops of Propofol or other medications that may be left behind in the luer before the patient goes to the PACU”, says Dr. Mike Bradstock, Maricopa Medical center, Phoenix, Arizona, USA.
“Currently it is reportable if we do not completely flush the side ports of the stopcocks before sending the patient to the step-down unit. Using Marvelous™ helps us avoid this error and makes it safer for the patient,” he adds.
Pain Management Improvement is an Ongoing Challenge
Pain management is a continuous effort that involves all healthcare professionals. An academic study on structured approaches to pain management in the ICU found that ongoing, regular, and open discussion among doctors, nurses, and pharmacists is key to the continued improvement of pain management approaches. The effort is worth it, as improved pain management benefits both patients and nursing staff.
To learn more about Marvelous™ and how it can help you with pain management in the ICU and anesthesia units, come visit us at NTI in New Orleans (booth #3226) and at WCNA in Glasgow (booth #4).
CONTACT US to schedule a meeting at these conferences.
Resources: | https://www.infusesafety.com/the-challenges-of-pain-management |
If you have a herniated disc, you know how painful it can be. Not only can a herniated disc cause severe pain, but it can compress and irritate nearby nerves. Luckily there are procedures that can help, such as a discectomy.
What is a Discectomy?
A discectomy is a surgical procedure that removes the damaged portion of a disc in the spine. The procedure is sometimes suggested if more conservative, nonsurgical treatments have not been effective.
There are several ways in which a discectomy can be performed. Most surgeons prefer a minimally invasive discectomy, which utilizes small incisions.
The procedure is performed to relieve the pressure a herniated disc places on the spine and nearby nerves. A discectomy is usually recommended for people who:
- Have difficulty standing or walking due to weakness.
- Have not had relief through conservative treatments such as physical therapy and steroid injections.
- Have pain that has moved to the arm, chest, legs, or buttocks.
Discectomy Procedure
The discectomy procedure has been proven to be very effective at eliminating pain caused by disc herniation. After the procedure, patients typically experience substantial relief from spinal compression. Many find they are able to do tasks with less pain.
Before the procedure: Before a discectomy, you will be required to avoid eating and drinking for a specified amount of time. In addition, you may be told to pause some medications.
During the procedure: Most surgeons perform a discectomy under general anesthesia, meaning you will be unconscious during the procedure. During the discectomy, small amounts of spinal bone may be removed so the surgeon can access the affected disc. If a disc fragment is pinching a nerve, it will be removed. Sometimes an entire disc may be removed; if this is the case, the surgeon may fill the space with a piece of bone from a donor or from your pelvis.
After the procedure: Following the discectomy, you will be moved to a recovery area where your healthcare team will monitor you until you can go home. Some patients are able to return home on the same day as the procedure.
Discectomy Recovery Time
Recovery time for a discectomy ranges from two to six weeks, depending on how severe the herniation was. Some patients are able to return to work in just two weeks. Others, whose occupations are very physical, may be advised to wait two months before returning to work. Your doctor will help you determine what the right plan is for you.
Discectomy Risks
Although a discectomy is largely considered a safe procedure, all procedures come with risks. Potential complications and risks of discectomy include:
- Bleeding
- Infection
- Injury to the blood vessels or nerves near the spine
- Leaking spinal fluid
Learn More About a Discectomy Procedure at The Orthopedic Clinic Today
At The Orthopedic Clinic, we want you to live your life in full motion. If you’re feeling pain and discomfort associated with a herniated disc, let us help you get back to doing the things you love.
Call us at (386) 255-4596 to schedule an appointment. | https://orthotoc.com/discectomy/ |
Tooth extraction is a routine dental procedure in which a tooth is removed from its socket in the bone. The tooth is extracted so that it does not affect the adjoining teeth. Extraction cannot be avoided in cases such as severe tooth decay, damage, or serious injury to the tooth. Dental extractions are considered appropriate because the gums have a natural capacity to heal, and the procedure is considered safe and practical.
What is the process of Tooth Extraction?
As a first step, the dentist will examine the tooth and discuss your dental and medical history. Based on this information, a decision is made about whether you are fit for an extraction. X-rays are taken to determine the shape, length, and location of the tooth and the surrounding bone. In cases of complex health conditions, the patient might be referred to an oral surgeon.
The actual procedure of pulling the tooth begins with numbing the area around it using a local anesthetic. The tooth is loosened with an instrument called an elevator and removed with dental forceps. A surgical extraction might require intravenous anesthesia to induce sleep; the oral surgeon will make a small cut in the gums and remove the damaged or infected tooth.
What should you do after the extraction?
The most crucial thing after a tooth extraction is to keep the surrounding area clean to prevent infection. To stop the bleeding, a dry piece of sterile gauze is kept in the empty space for about 30-45 minutes. For the next twenty-four hours, you shouldn't smoke or rinse your mouth vigorously.
A certain amount of pain and discomfort will persist after an extraction. An icepack might help to decrease the pain and swelling. It will be best to avoid hot foods. The discomfort will subside in three to four days. However, if you have persistent pain and bleeding, it is best to consult your doctor immediately.
When should you decide on a tooth extraction?
Apart from damage and injuries, dental extractions become necessary in the following cases:
- Crowded Teeth: As we age, the likelihood of crowded teeth increases. If you have four wisdom teeth, there is a chance your mouth becomes crowded and the other teeth begin to overlap. This may cause difficulties in chewing and biting and give you a crooked smile. Extracting the wisdom teeth allows proper alignment and enables a flawless smile.
- Infection: Decay in a damaged tooth can spread to the nerves and blood vessels of the tooth, causing infection. Bacteria in the mouth may accumulate and build up, infecting the gums and ultimately leading to serious disease. Removal of the infected tooth is the best alternative to prevent further complications.
- Gum Disease: This is an infection of the bones and tissues that support the teeth. Infected gums may not be able to support the tooth, making it difficult to chew and bite. In such cases, removal of the affected tooth is often the best option.
Kyphoplasty and vertebroplasty have become standard surgical procedures for the treatment of vertebral compression fracture, in which the vertebral body collapses into itself, producing a wedged vertebra.
A vertebral compression fracture can lead to back pain, reduced physical activity, depression, loss of balance, and difficulty in sleeping.
Both procedures are similar and are performed through a hollow needle that is passed through the skin of your back into the fractured or damaged vertebra.
In Vertebroplasty, bone cement is injected through the needle into the fractured bone. In kyphoplasty, a balloon is first inserted and inflated to expand the compressed vertebra to its normal height before filling the space with bone cement. The procedures are repeated for each affected and damaged vertebra. The cement-strengthened vertebra allows the patient to stand straight and reduces the pain.
How are Vertebroplasty and Balloon Kyphoplasty performed?
- In both procedures, you lie on your stomach. The area in which the hollow needle will be inserted is shaved if necessary and then cleaned and sterilized. In some cases, a local anesthetic may also be injected at the same site.
- You will then be connected to monitors that track your heart rate, blood pressure, oxygen level, and pulse.
- You may be given medication to help prevent nausea and pain, as well as antibiotics to help avoid infection.
- The area through which the hollow needle or trocar will be inserted is sterilized with a cleaning solution and covered with a surgical drape.
- Near the fracture, a local anesthetic is then injected into the skin and deep tissues.
- A small surgical cut is made at the site.
- With x-ray (C-arm) guidance, the trocar is passed through the spinal muscles until its tip is precisely positioned within the damaged vertebra.
- In vertebroplasty, the bone cement is then injected. Medical-grade bone cement hardens quickly, typically within 20-30 minutes. The trocar is removed after the cement is injected.
- In kyphoplasty, the balloon tamp is first inserted through the trocar, and the balloon is inflated to create a space. The balloon is then removed, and the bone cement is injected into the space created by the balloon.
- Once the cement is in place, the needle or trocar is removed.
- The area is bandaged. Stitches won’t be necessary.
Why Choose Minimally Invasive techniques like Vertebroplasty & Balloon Kyphoplasty?
- The first reason to choose vertebroplasty and balloon kyphoplasty is that they are safe and effective procedures for the spine.
- Only a small surgical cut is required in the skin, which does not need any stitches.
- Even without any formal physical therapy or rehabilitation, vertebroplasty and balloon kyphoplasty can increase patients' functional abilities and allow them to return to their previous level of activity.
- These procedures are usually successful at relieving the pain caused by a vertebral compression fracture; many patients feel significant relief almost immediately or within a few days, and many even become symptom-free.
- Following Vertebroplasty, about 75 percent of patients regain lost mobility and become more active, which helps combat osteoporosis. After the treatment, immobile patients can get out of bed, which can help reduce their risk of pneumonia. The increased activity builds more muscle strength, further encouraging mobility.
When to opt for Vertebroplasty & Balloon Kyphoplasty Treatment?
Consider these procedures if you have incapacitating, persistent, severe focal back pain related to vertebral collapse. Vertebroplasty and balloon kyphoplasty involve the injection of acrylic cement into the damaged vertebral body and are performed under local anesthesia. Both procedures relieve pain; in kyphoplasty, however, a balloon tamp is inflated within and between the fracture fragments before the cement is infused, to restore vertebral body height.
What are the precaution tips?
- Proper Medication: After the treatment, there may be some discomfort and pain, which can be managed with appropriate medication such as narcotics. However, these medications can cause constipation, so it is essential to stay hydrated and eat food high in fiber.
- Incision Care: It is crucial to keep the surgical cut or incision covered and dry for 24 hours post-surgery. After that, you can shower but should avoid baths.
- Avoid activities like bending, pushing, stretching, or pulling movements for several weeks.
- Avoid heavy lifting. Do not lift anything over five kilograms.
- No driving for two weeks
- Avoid long sitting upright for more than 20-30 minutes and take a short walk.
- Walking is crucial for healing and a speedy recovery. Initially, you should start with short walks and gradually increase your activity.
Get the Best Vertebroplasty & Kyphoplasty Spine Treatment in Udaipur:
Shriram Spine Hospital, one of the leading spine hospitals in Udaipur, provides the best vertebroplasty and kyphoplasty treatment in the region and performs these surgeries with the latest techniques and equipment.
It has a team of dedicated, trained, and acclaimed spine surgeons providing cutting edge medical and surgical technology.
Shriram Spine Hospital provides surgical treatment for Spinal Fracture, Lumbar Micro-Endoscopic Discectomy, MIS TLIF, Balloon Kyphoplasty and Vertebroplasty, Cervical Disc Replacement, Anterior Cervical Discectomy, Spinal Tumor Excision.
Frequently Asked Questions
Are balloon kyphoplasty and Vertebroplasty the same?
Both are minimally invasive surgical procedures performed to treat and stabilize vertebral fractures. Kyphoplasty is a newer technique that adds a balloon tamp: the balloon is inserted through the needle and inflated to create a space, then removed before the bone cement is injected. In vertebroplasty, the bone cement is injected directly through the needle.
How long does it take to perform a balloon kyphoplasty procedure?
Balloon kyphoplasty usually takes about a half-hour per spinal level treated.
What kind of anesthesia is used during the procedure?
Kyphoplasty may be performed under local anesthesia (you remain conscious and only the affected area is numbed) or general anesthesia (you are completely unconscious).
What are the benefits of balloon kyphoplasty?
Benefits include improved mobility (fewer days spent in bed because of back pain), a low complication rate, and improved quality of life.
The bottom line on Vertebroplasty?
The procedures have been used with great success to repair fractured vertebrae and relieve patients' pain.
How do you remove a cyst sac?
Medical procedures for cyst removal
- Drainage. Under local anesthesia, a doctor will make a small incision through which the cyst can be drained.
- Fine-needle aspiration. For this procedure, a doctor will insert a thin needle into the cyst to drain the fluid.
- Surgery.
- Laparoscopy.
Does a cyst sac need to be removed?
In the majority of cases, a cyst that’s benign really doesn’t need to be removed unless it’s causing pain, discomfort, or confidence issues. For example, if there’s a cyst on your scalp and your brush constantly irritates it and causes you pain, it’s worth talking to your doctor about getting it removed.
How painful is cyst removal?
Does a Cyst Removal Hurt? If you can handle the small sting of a shot, you can handle a cyst removal. The doctor first topically numbs the cyst area and then injects Lidocaine. You may feel a slight sting, but that’s the worst part.
Can I remove a cyst myself?
It might be tempting, but don’t try to pop or drain the cyst yourself. That can cause infection, and the cyst will probably come back. Keep it clean by washing with warm soap and water. Try putting a bathwater-warm washcloth on it for 20 to 30 minutes, three to four times a day, to help soothe it and speed healing.
What type of doctor removes cysts?
What Type of Doctors Treat Cysts? While most primary care doctors or surgeons can treat cysts on the skin, dermatologists most commonly treat and remove sebaceous and pilar cysts. Dermatologists are focused on treating the skin — so removing cysts is a natural part of their training and focus.
What is inside a cyst?
A cyst is a sac-like pocket of membranous tissue that contains fluid, air, or other substances. Cysts can grow almost anywhere in your body or under your skin. There are many types of cysts. Most cysts are benign, or noncancerous.
Are you put to sleep for cyst removal?
You may be given a sedative along with a local or regional anesthetic to relax you and reduce anxiety. A general anesthetic relaxes your muscles and puts you to sleep. All three types of anesthesia should keep you from feeling pain during the operation. Your health care provider will cut around the cyst and remove it.
Is it painful to have a cyst removed?
If you had a cyst excised, you’ll have stitches inside and outside to minimize scarring. Patients may experience tenderness and mild pain after an excision, easily managed with at-home pain medication such as Advil.
How long does it take to recover from ovarian cyst removal?
The time it takes to recover from surgery is different for everyone. After the ovarian cyst has been removed, you’ll feel pain in your tummy, although this should improve in a few days. After a laparoscopy or a laparotomy, it may take as long as 12 weeks before you can resume normal activities.
How do they remove a cyst from the ovary?
If you have a large cyst, your doctor can surgically remove the cyst through a large incision in your abdomen. They’ll conduct an immediate biopsy, and if they determine that the cyst is cancerous, they may perform a hysterectomy to remove your ovaries and uterus. Ovarian cysts can’t be prevented.
How to drain a cyst on ovary?
Surgery to remove the cyst may be needed if cancer is suspected, if the cyst does not go away, or if it causes symptoms. In many cases it can be taken out without damaging the ovary, but sometimes the ovary has to be removed. In rare cases an ovarian cyst may be drained during laparoscopy.
How do you remove cyst from ovary?
No treatment may be needed for a ruptured ovarian cyst other than comfort measures. But if severe bleeding occurs, surgery may be needed to stop the blood loss. There is no way to prevent an ovarian cyst from rupturing. | https://www.denguedenguedengue.com/how-do-you-remove-a-cyst-sac/ |
The Pediatric Anesthesia team at MassGeneral Hospital for Children specializes in caring for children before, during and after surgery and other procedures. Our team consists of board-certified anesthesiologists who specialize in pediatric anesthesia, at times working in conjunction with anesthesia residents and certified registered nurse anesthetists. We strive to put children and their families at ease while alleviating the discomfort and pain associated with surgery and other procedures.
The Pediatric Anesthesia team provides the following services:
- Anesthesia for surgery, MRI, CT, invasive radiology, radiation therapy, GI endoscopy and other diagnostic and therapeutic procedures
- Consultations and follow-up care for surgery patients
- Consultations with pediatric surgeons in airway management, pulmonary and cardiovascular assessment and patient resuscitation
- Pain management, in conjunction with the Pediatric Pain Management Team
- Monitoring systems and anesthesia equipment specially designed for the care of pediatric patients
Our Research
Pediatric anesthesia research activities cover a wide spectrum of topics. Basic science research being performed by members of the division focuses in two main areas:
- The study of fundamental mechanisms of lung injury in the pediatric population and development of new therapies for pulmonary vascular disease and
- The study of apoptosis (“programmed cell death”) in muscles following critical illness, with the goals of characterizing the mechanisms of skeletal muscle dysfunction and providing therapeutic options to improve muscle function
Clinical studies include pharmacotherapeutics in critical illness, with specific focus on burned patients, as well as pharmacokinetics of new drugs in the pediatric population.
Locations
The Pediatric Anesthesia team provides clinical anesthesia services at MassGeneral Hospital for Children, Shriners Hospitals for Children- Boston and the Francis H. Burr Proton Therapy Center. | https://www.massgeneral.org/children/anesthesia/default-old |
The Anesthesia Service provides the following:
- Multi-species general anesthesia, pain management, and local anesthetic services for patients in the Veterinary Teaching Hospital's Small Animal and Large Animal hospitals
- Anesthetic services for all surgeries performed in the hospitals, as well as for animals undergoing diagnostic procedures that require general anesthesia
- Consultation for other sub-specialties within the hospitals seeking solutions to sedation and pain management problems
- Consultation with the Animal Cancer Care and Research Center and the Marion duPont Scott Equine Medical Center for routine and emergent patients
- Consultation with local veterinarians on anesthetic emergencies or on planning for difficult or challenging anesthetic cases
All of our patients undergo a careful physiological assessment prior to being premedicated and anesthetized. Extensive monitoring with state-of-the-art equipment, analgesia, and intra-operative fluid administration are conducted throughout the procedure and until the patient has fully recovered consciousness.
Anesthesia personnel
- Rachael E. Carpenter, DVM: Clinical Instructor, Anesthesiology
- Natalia Henao-Guerrero, DVM, MS, DACVAA: Department Head, Small Animal Clinical Sciences; Associate Professor, Anesthesiology; Service Chief, Anesthesiology
- Marcela Machado, MV, MSc, MS: Assistant Professor, Anesthesiology and Pain Management
- Vaidehi Paranjape, BVSc, MVSc, MS, DACVAA: Assistant Professor, Anesthesiology and Pain Management
Central sterile services (CSS) personnel
The Veterinary Teaching Hospital's CSS technicians assist surgeons during multi-species procedures for routine, emergency, and after-hours cases; instruct senior veterinary students by demonstrating and maintaining aseptic technique preoperatively, intraoperatively, and postoperatively; maintain the daily flow of surgical operations to ensure availability of personnel and operating rooms for all procedures; and perform central sterile procedures, including cleaning, packing, and sterilizing surgical instruments and supplies. | https://vth.vetmed.vt.edu/inpatient-outpatient-services/anesthesia-and-pain-management.html |
Lumbar spinal stenosis (LSS) is a common condition that occurs in the aging spine of individuals beyond their fifth decade of life (Fig. 1). Most patients who undergo surgical intervention for LSS are in their sixth and seventh decades of life.
The incidence of LSS in the United States has been estimated at 8 percent to 11 percent of the population. As the “baby boomers” age, an estimated 2.4 million Americans will be affected by LSS by 2021. With the first wave of baby boomers just qualifying for Medicare, this condition will undoubtedly have an impact on government healthcare spending. The adjusted rate of lumbar stenosis surgery per 100,000 Medicare beneficiaries was 137.4 in 2002 and 135.5 in 2007; these numbers are expected to double in the coming years due to the increased numbers of older adults.
The current accepted treatment algorithm for LSS begins with nonsteroidal anti-inflammatory drugs and narcotics, physical therapy, and pain management modalities such as epidural steroid injections. Over the long term, 15 percent of patients will improve with nonsurgical modalities, and 70 percent will continue to experience neurogenic claudication. Therefore, most patients with LSS will, in time, require surgical intervention for a more definitive treatment.
Pain management
Current trends suggest that the numbers of pain medicine prescriptions and interventional pain management procedures are increasing. From 1997 to 2005, the cost of pharmaceuticals, outpatient procedures, and inpatient procedures for treatment of neck and back pain increased by 171 percent, 74 percent, and 25 percent respectively. Although the cost of pharmaceuticals and outpatient procedures (mostly interventional pain procedures) have increased more than the cost of surgical procedures, media reports continue to focus on surgery as a contributor to increasing costs.
Oversight on the amount of prescription medication used or interventional procedures performed on patients with LSS is limited. Furthermore, most spine surgeons don’t perform lumbar epidurals themselves; when they refer patients to pain management colleagues, surgeons lose control of the treatment regimen for patients.
Many patients who are fearful of surgical intervention are resorting to nonsurgical modalities for temporary relief, irrespective of the often limited results. The media has also been very critical of the number and types of spinal surgeries being performed in the United States. Most articles focus on surgeons who perform unnecessary surgeries with costly spinal implants and undoubtedly have an impact on patients’ decision-making.
Other interventions
Numerous studies have shown that surgical intervention has a higher success rate in treating LSS compared with nonsurgical modalities. SPORT (Spine Patient Outcomes Research Trial), a large prospective randomized clinical trial, compared surgical and nonsurgical treatment for LSS. Although the statistical methods used to analyze the data were criticized, SPORT clearly showed that surgery is superior to nonsurgical treatment of LSS at 2 years. But this does not necessarily mean that nonsurgical modalities should not be offered or that surgery should be the first line of treatment for LSS.
Most surgeons would agree that epidural injections provide some benefits for patients. A subset of patients may even obtain long-term relief and avoid surgery. In addition, the patient and the surgeon will learn more about each other during a period of conservative treatment. If a more conservative approach is unsuccessful, patients are more comfortable considering their surgical options. Three to six months of nonsurgical care prior to surgical intervention is standard for the treatment of LSS.
Taking care of spinal stenosis patients requires a comprehensive approach utilizing the expertise of surgeons, pain management specialists, physical therapists, and others. However, at times, differences in philosophy can be found between the surgeons and other specialists taking care of these patients. As an example, when a pain specialist was recently asked how many times a procedure could be repeated if the patient’s pain returns, he answered, “Ad infinitum.” Compare this philosophy with orthopaedic principles that support using progressive treatment modalities that not only relieve symptoms but also permanently address the condition. Surgeons should therefore ask patients to return for a follow-up after one or two epidural steroid injections or a few sessions of physical therapy to maintain communication and assess progress.
If injections do not bring relief, discussing next steps with the patient is important. Patients who are in severe pain often are willing to try any procedure that they are offered; a discussion with a surgeon is critically important, especially if surgical intervention can finally offer the patient definitive treatment for his or her neurogenic claudication from LSS.
Surgical intervention
But how successful is surgery for treatment of LSS, and what type of surgery is most effective? The controversy over spine surgery for treatment of “back pain” generally centers on fusion surgeries for treatment of discogenic back pain or back pain without buttock or leg pain. LSS patients rarely complain solely of back pain; their most common complaint is buttock and leg pain.
Decompression and laminectomy for treatment of LSS symptoms show consistently good to excellent results. The data from SPORT also support the benefits of surgery. The Maine Lumbar Spine Study shows that 80 percent of patients are happy with their surgical results 8 to 10 years after surgery.
The longer patients are followed, however, the less successful the results become. This is partly due to the progressive nature of spinal degeneration. The same vertebral level can progress with further foraminal stenosis (when the level is not fused) or symptomatic stenosis can develop in untreated levels.
This progression of disease is not unique to the spine and does not define treatment failure. For example, patients who have bypass grafts or stenting for coronary artery disease may have re-stenosis of the same vessels or additional coronary arteries following a successful surgery. It is, however, important to differentiate between “back pain” and LSS when discussing the results of spine surgery.
Fusion remains an important part of the spine surgeon’s armamentarium. When used judiciously with good indications, it offers patients significant benefits. Several studies have shown that patients undergoing laminectomy for treatment of LSS have better functional improvement when they have concomitant posterolateral fusion. Use of internal fixation increases fusion rates and further improves patient outcomes, especially when patients are followed long term (beyond 5 years postoperatively).
When the main indication for performing spine surgery is to decompress the nerves in the spine, the results are good—whether or not fusion surgery is performed. Fusion is indicated when inherent instability, iatrogenic instability, or a need to correct a deformity exists. Inherent instability can be seen on preoperative dynamic films. Spine surgeons who attempt to decompress the nerves to the best of their ability often have to remove so much bone that the spine is rendered unstable and requires stabilization.
A more controversial goal of fusion surgery is to stop or retard the progression of degeneration at the operative level. This remains a controversial indication, with conflicting results.
Laminectomy surgery is not without its complications. Nerve injury, dural tears, and postoperative epidural hematoma resulting in paralysis have all been reported.
Elderly patients with LSS may be more attracted to less invasive surgeries such as the use of tubular retractors to target the stenotic level(s) with minimal injury to the surrounding soft tissues. This approach may speed recovery, but minimally invasive surgery should not equate to minimal treatment for the patient. If tubular retractors limit the surgeon’s visualization, resulting in inadequate decompression, the patients haven’t benefitted from the smaller incisions.
The most recent additions to the LSS treatment armamentarium are interspinous devices. These devices can be implanted under local anesthesia and without the need to remove bone or soft tissue. They pose no serious risk (nerve or dural injury) to the patient and are more effective than epidural steroid injections, with a 60 percent to 70 percent success rate at 4 years after implantation. Patient selection, however, is an important determinant of success.
Less invasive interspinous devices are being investigated. One such device is now in a clinical trial comparing its success to that of current devices on the market.
Conclusion
LSS will remain an important part of a spine surgeon’s surgical practice. With the growing elderly population, surgeons need to remain focused on the patient’s needs and their individualized indications for the various treatment options. A customized treatment plan for each patient provides optimal results. Technological advances enable some qualified patients to be treated with less invasive surgical options. | https://www.aaos.org/aaosnow/2011/may/clinical/clinical10/ |
Vertebroplasty is a surgical procedure designed to stop the pain caused by a spinal fracture. It involves making a small incision in the back through which the doctor places a narrow tube. Using fluoroscopy to guide it into the correct position, the tube creates a path through the back into the fractured area through the pedicle of the involved vertebra. The doctor then uses specially designed instruments under low pressure to deliver a cement-like material that stabilizes the fracture.
Pelvic (sacral) fractures are common, particularly in patients with osteoporosis. When the sacrum fractures, the resulting pain can be debilitating. Unlike an extremity fracture, a sacral fracture cannot be treated with a plaster or fiberglass cast. Conservative therapies such as rest and analgesics can help, but many fractures do not respond to such treatment. Sacroplasty is a technique that “internally casts” a sacral fracture with liquid cement and provides significant pain relief.
Cryoablation is used for palliative pain control of tumors that involve the bone and/or soft tissues.
Liver radiofrequency ablation is a treatment used for inoperable primary liver cancer or metastatic tumors. Using ultrasound guidance, a probe is placed, which then delivers a high-frequency electrical current to destroy cancer cells.
Cryoablation is the process of using freezing temperatures to destroy cancer cells. It is used to treat tumors that have originated in the liver or have spread to the liver from another site. Cryoablation is often used as an alternative or an adjunct to conventional surgery.
During cryoablation, a probe circulating liquid nitrogen is placed in contact with the tumor, causing the cells to freeze. The tumor is frozen, thawed, and refrozen until the malignant cells are completely destroyed. This process is monitored with ultrasound in order to preserve as much nearby healthy tissue as possible.
Microwave ablation (MWA) destroys liver tumors using heat generated by microwave energy. With microwave ablation, a small laparoscopic port or open incision is used to access the tumor. A CT scan or ultrasound guidance is used to pinpoint the exact location of the tumor. A thin antenna, which emits microwaves, is then inserted into the tumor. The probe produces intense heat that ablates (destroys) tumor tissue, often within 10 minutes.
Speed – Microwave ablation (MWA) is faster than RFA, destroying tumors more efficiently, and reducing the time patients remain under general anesthesia.
Simultaneous Tumor Ablation – With MWA, surgeons can ablate multiple liver tumors at the same time.
Larger Tumor Size – MWA can ablate larger tumors than are possible with RFA.
Uterine fibroid embolization (UFE) is a minimally invasive procedure used to treat fibroid tumors of the uterus, which can cause heavy menstrual bleeding, pain, and pressure on the bladder or bowel. An incision the size of a freckle is made in your upper thigh. A tiny catheter is inserted through this incision and into the femoral artery. Using x-ray guidance, a trained physician locates the arteries that supply blood to each fibroid. Microscopic inert particles are injected into the vessels, blocking the blood supply that nourishes the fibroid. Without a steady blood supply, the fibroids begin to dwindle and shrink.
Gonadal Vein Embolization is used for women with pelvic congestion syndrome (varicose veins in the pelvic area) and men with varicoceles (varicose veins in the scrotum). This minimally invasive treatment is a catheter-based technique used to treat abnormal vessels or veins.
Renal Tumor Cryoablation is a treatment used to kill cancer cells with extreme cold.
During cryoablation, a thin, wand-like needle (cryoprobe) is inserted through your skin and directly into the cancerous tumor. A gas is pumped into the cryoprobe in order to freeze the tissue. Then the tissue is allowed to thaw. The freezing and thawing process is repeated several times during the same treatment session.
Cryoablation may be used to treat cancer when surgery isn’t an option.
Chronic venous occlusion can develop as a result of underlying DVT, resulting in swelling, pain, ulceration, and venous claudication of the lower extremity. Venous stenting is directed at restoring normal venous flow.
Endovenous laser ablation (also called EVLT, for endovenous laser treatment) is a minimally invasive procedure performed in a physician’s office or clinic for the treatment of varicose veins. During an endovenous ablation procedure, your doctor inserts a laser fiber through the skin and directly into the varicose vein. The laser heats the lining within the vein, damaging it and causing it to collapse, shrink, and eventually disappear. Because these veins are superficial, they are not necessary for the transfer of blood to the heart. This technique typically is used to treat the large varicose veins in the legs and takes less than 30 minutes to perform.
Varicose radiofrequency ablation is a minimally invasive procedure performed in a physician’s office or clinic for the treatment of varicose veins. During an endovenous ablation procedure, your doctor inserts a thin tube (catheter) through the skin and directly into the varicose vein. Radiofrequency energy is then delivered to the lining within the vein, damaging it and causing it to collapse, shrink, and eventually disappear. Because these veins are superficial, they are not necessary for the transfer of blood to the heart.
A hemodialysis access, or vascular access, is a way to reach the blood for hemodialysis. The access allows blood to travel through soft tubes to the dialysis machine where it is cleaned as it passes through a special filter, called a dialyzer.
For the treatment of DVT and to prevent a blood clot from traveling to the lungs, vascular surgeons like Dr. Julien can perform a minimally invasive procedure to break up the clot.
A port is a small medical appliance that is installed beneath the skin. A catheter connects the port to a vein. Under the skin, the port has a septum through which drugs can be injected and blood samples can be drawn many times, usually with less discomfort for the patient than a more typical “needle stick”. Ports are used mostly to treat hematology and oncology patients.
A peripherally inserted central catheter (PICC or PIC line) is a form of intravenous access that can be used for a prolonged period of time (e.g. for long chemotherapy regimens, extended antibiotic therapy, or total parenteral nutrition).
For most vascular conditions, we can perform a minimally invasive treatment called embolization. It is used to treat abnormal vessels or veins.
Under local anesthesia, possibly with a mild sedative, we make a tiny incision to access the blood stream through the femoral artery in the groin or the jugular vein in the neck. A tiny catheter and needle are inserted through the blood stream and navigated to the vein where we can address the problem.
We may need to close off a vein or insert a device to open the vein, and there are several ways to do that, depending on the location and type of vein problem.
Embolization procedures reduce or cut off the supply of blood to a tumor or abnormal growth. To perform embolization, interventional radiologists use imaging guidance to insert a catheter into a primary artery and advance it to the blood vessel leading to a tumor or other area where the blood supply needs to be blocked. Special substances which clot and form a blockage are then injected.
Embolization is often used to treat internal bleeding and help prevent heavy bleeding during surgery. In some cases, embolization may be a treatment option for difficult-to-reach, inoperable tumors. It may also be used to treat tumors that are too large to be ablated. | https://cvhealthclinic.com/about-air/services/ |
It is often necessary to use anesthesia during subgingival scaling and root planing (SRP) to control pain and discomfort (1). An injectable anesthetic is considered the gold standard in such cases and may or may not be used in conjunction with a topical anesthetic (2). While injectable anesthetics are effective in controlling pain, many patients report fear of the needle, long-lasting effects and prolonged numbness of adjacent tissues, such as the lips and tongue (3,4,5). The need for painless, noninvasive, fast-acting anesthetics that are effective only during the procedure has led to the investigation of substances with topical application during SRP (3,6,7,8,9,10,11) and periodontal maintenance (12). It has recently been demonstrated that the use of a topical anesthetic does not compromise subgingival treatment and offers benefits in probing depth and clinical attachment gain similar to those of an injectable anesthetic (13).
Oraqix® is a topical anesthetic containing lidocaine (25 mg/g) and prilocaine (25 mg/g) that is commercially available as an anesthetic for subgingival SRP. This product has proven to be a reliable alternative to injectable anesthesia (7,8,9). A eutectic mixture known as EMLA® has the same composition as Oraqix but is offered at a lower cost. To date, no studies have compared the efficacy of EMLA to another inexpensive, commonly used anesthetic (2% benzocaine) during subgingival SRP procedures. Moreover, few studies have compared the effects of EMLA to injectable anesthesia using pain scores for the purposes of evaluation.
The study hypothesis is that topical anesthetics are equivalent to injectable anesthetic regarding the control of pain during SRP. The aim of the present study was to compare the effects of EMLA, injectable 2% lidocaine, topical 2% benzocaine and a placebo substance on reducing pain during subgingival scaling and root planing.
Material and Methods
Study Design and Subjects
A masked, randomized clinical trial with a split-mouth design was carried out. Forty-one patients were recruited from the Dental School of the Franciscan University Center, Santa Maria, RS, Brazil, from June 2010 to March 2012. This study received approval from the Human Research Ethics Committee of the university and all participants signed a statement of informed consent. The registration number is NCT01860235 (www.clinicaltrials.gov).
For gingivitis treatment, all patients received two to four supragingival scaling and polishing sessions and meticulous self-care oral hygiene training. After treatment and the achievement of the goal of a low percentage of visible plaque and gingival bleeding (<15%), an examination was performed for the assessment of probing depth (PD), clinical attachment level (CAL) and bleeding on probing (BOP) at six sites per tooth to determine the sites that required SRP treatment. Eligible individuals were selected from those requiring SRP, with at least two teeth in four sextants having ≥1 site with PD and CAL ≥5 mm and BOP following treatment for gingivitis (without marginal bleeding). The other inclusion criteria were age 18 years or older, adequate understanding of the pain scales employed, and no history of previous periodontal treatment. The exclusion criteria were: history of allergies or sensitivity reactions to any amide or ester anesthetic; having received anesthesia or sedation within 12 h prior to SRP; use of pain medication (i.e., sedatives, muscle relaxants, anti-inflammatory medications, and narcotic analgesics); ulcerations or abscesses in the oral cavity; oral disease with an immediate need for surgery; history of alcohol abuse; current pregnancy; uncontrolled hypertension; and participation in a clinical trial of an investigational drug within four months of the onset of the present study.
Experimental Design
Figure 1 displays the flow chart of the study. After the clinical parameters had been recorded, the four sextants containing teeth with the deepest PD were chosen to participate in the experiment. Treatment with SRP was performed over six weekly sessions with the experimental procedures being conducted during the first four sessions. Therefore all patients received the four interventions. The two teeth with the deepest PD were selected from each sextant. Following the treatment of those teeth and administration of the pain scales, SRP was performed on other teeth in the sextant that required the procedure using the same type of anesthetic.
Randomization was conducted by a researcher not involved in the eligibility assessment and entry of subjects into the study, to ensure treatment allocation concealment. Block randomization was performed for the allocation of the participants to the different groups: injectable 2% lidocaine with epinephrine 1:100,000 (Alphacaine; DFL, Rio de Janeiro, RJ, Brazil); topical 5% eutectic mixture of 25 mg/g of lidocaine and 25 mg/g of prilocaine (EMLA; AstraZeneca, Cotia, SP, Brazil); topical 200 mg/g of 2% benzocaine (Benzotop; DFL); or a placebo substance with the same appearance and viscosity as the topical anesthetics. One set of opaque envelopes contained a card stipulating the sextant to undergo treatment and a second set of envelopes contained a card stipulating the type of anesthesia to be administered. Each participant received an envelope from each set immediately prior to the start of the subgingival SRP session.
The technique used for injectable anesthesia was nerve block, with at most two anesthetic cartridges per SRP session. When a topical substance was administered, the patient was masked to its type. All anesthetics were administered by the same operators (BC and DNF). Prior to administration, relative isolation was performed with cotton rolls. The selected anesthetic was applied to the two teeth with the deepest PD and BOP (regardless of the severity of PD). For topical anesthetics, the maximum dose used in the sextant was 2.5 mL. The anesthetic was applied directly into the periodontal pocket of each tooth with a graduated syringe and a blunt needle and inserted until it overflowed the gingival margin.
Different operators performed the subgingival SRP and the assessment of the main outcome. The operator who performed the subgingival SRP was blinded to the type of anesthesia to ensure that the procedure was performed similarly in all groups. To mask the operator, all anesthetic types were kept under the clinic table and the operator was asked to step away from the dental chair until the anesthesia had been completed, so that he or she did not know which type of anesthesia had been administered. Two minutes after administration of the anesthetic, the operator began the SRP procedure in the sextant using curettes and periodontal files (Neumar, São Paulo, SP, Brazil). For each tooth, the patient was asked to indicate the intensity of pain experienced during the operation with the aid of a visual analog scale (VAS) five min after the onset of the procedure (VAStrans) and immediately following the procedure (VASpost). At the end of the procedure, the patient was also asked to describe the pain using a verbal scale (VS): no pain (0), mild pain (1), moderate pain (2), severe pain (3) or extremely severe pain (4). Both scales were scored in the absence of the operator who had performed the subgingival SRP. If pain occurred during the procedure, an additional dose of the same anesthetic was administered. If pain persisted after this second application, injectable nerve-block anesthesia was administered. All information was recorded on the patient charts. Subgingival SRP was performed until the root surfaces achieved adequate smoothness. The duration of the procedure on the selected teeth was also recorded. Patient satisfaction with the anesthesia was determined at the end of all treatment sessions using the following four categories: very satisfied, satisfied, dissatisfied and very dissatisfied.
When the patients returned for the next treatment session, they were asked about the occurrence of pain, discomfort, localized ulceration, edema or flaking of the oral mucosa.
Intra-examiner and Inter-examiner Reproducibility
Prior to the main study, two examiners underwent a training and calibration exercise for the determination of reliability regarding the periodontal variables and administration of the anesthetics and pain scales. Five patients were used for the determination of intra-examiner and inter-examiner reliability regarding the PD and CAL measures. Agreement was determined using the weighted kappa (K) statistic. K values for intra-examiner agreement regarding PD and CAL were 0.79 and 0.76 for Examiner 1 and 0.74 and 0.80 for Examiner 2, respectively. K values for inter-examiner agreement were 0.75 and 0.78 for PD and CAL, respectively. Both intra-examiner and inter-examiner reliability were determined a second time eight months after the onset of the study and all K values were ≥0.8.
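For readers who want to reproduce this kind of agreement check, the short sketch below computes a weighted kappa in Python; the probing-depth readings are hypothetical and the choice of linear weights is an assumption, since the weighting scheme is not reported here.

```python
# A minimal sketch of a weighted-kappa calibration check, assuming linear
# weights; the paired readings below are hypothetical, not the study data.
from sklearn.metrics import cohen_kappa_score

# Probing-depth readings (mm) recorded by two examiners at the same sites
examiner_1 = [3, 5, 6, 4, 7, 5, 3, 6, 5, 4]
examiner_2 = [3, 5, 5, 4, 7, 6, 3, 6, 5, 4]

kappa = cohen_kappa_score(examiner_1, examiner_2, weights="linear")
print(f"Weighted kappa = {kappa:.2f}")  # values around 0.75-0.80 match those reported
```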
Sample Size
The sample size was calculated based on a clinically relevant difference in VAS scores (15 mm) between groups with a standard deviation (SD) of 25 mm. Considering a significance level of 5%, 90% study power and the paired design, it was determined that a minimum of 31 patients were needed for each group. This number was increased to 41 patients to compensate for a possible 25% dropout rate.
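The arithmetic behind these numbers can be reproduced with a standard normal-approximation formula for a paired mean difference, as sketched below; the exact formula and rounding conventions used by the authors are not stated, so the reported minimum of 31 may differ slightly from this approximation.

```python
# A hedged sketch of the sample-size calculation (normal approximation for a
# paired mean difference); the formula choice is an assumption.
import math
from scipy.stats import norm

def paired_sample_size(delta: float, sd: float,
                       alpha: float = 0.05, power: float = 0.90) -> int:
    """Participants needed to detect a mean paired difference `delta` (same units as `sd`)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_beta = norm.ppf(power)            # desired statistical power
    return math.ceil(((z_alpha + z_beta) ** 2) * sd ** 2 / delta ** 2)

n_min = paired_sample_size(delta=15, sd=25)   # 15 mm VAS difference, SD 25 mm
print(n_min)                                  # ~30 under this approximation
print(math.ceil(n_min / (1 - 0.25)))          # inflated for a possible 25% dropout
```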
Data Analysis
Data were analyzed using the Statistical Package for Social Sciences (SPSS), version 20.0.0. The Shapiro-Wilk test was used to determine the normality of distribution. As normal distribution was demonstrated, the data were expressed as mean and standard deviation values. Repeated-measures analysis of variance, chi-squared test and Tukey's post-hoc test were used for the comparisons of the groups (p<0.05). Spearman correlation coefficients were calculated to determine the strength of the correlation between the VAS and VS. Poisson regression with robust variance was performed to compare patient dissatisfaction with different anesthetic modalities.
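As an illustration only, the adjusted-dissatisfaction regression step could be approximated outside SPSS as in the sketch below; the data file and column names are hypothetical, since the raw data are not published with this text.

```python
# A hedged sketch of Poisson regression with robust (sandwich) variance,
# translated to Python/statsmodels for illustration; the study used SPSS 20,
# and the file and column names here are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("srp_anesthesia_outcomes.csv")  # one row per treated sextant

model = smf.glm(
    "dissatisfied ~ C(anesthetic, Treatment('injectable_lidocaine'))"
    " + gender + age + operative_time + probing_depth",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC0")  # robust standard errors

print(model.summary())  # exponentiated coefficients read as adjusted prevalence ratios
```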
Results
Among the 41 individuals enrolled, nine did not complete the study. The reasons and the timing for the dropouts are specified in Figure 1. Mean age of participants was 49.4±9.4 years. Table 1 displays the PD and CAL values after gingivitis treatment and operating time (min). The average time required for root planing of two teeth was approximately 60 min. No statistically significant differences among groups were found for these variables. Further analysis showed no statistically significant differences in the allocation sequence of anesthesia modalities between individual participants (p=0.78).
Table 2 displays the VAStrans, VASpost (0 to 100 mm) and VS scores among the different groups. Regarding VAStrans, a significantly lower score was found with injectable 2% lidocaine in comparison to the benzocaine and placebo groups, and a lower score was found in the EMLA group in comparison to the placebo group. Significantly lower VASpost and VS scores were found in the injectable 2% lidocaine and EMLA groups in comparison to the 2% benzocaine and placebo groups. Most patients receiving injectable 2% lidocaine (87.5%) and EMLA (84.4%) reported no pain or mild pain during subgingival SRP. In contrast, approximately half (53.1%) of the patients receiving benzocaine and two thirds (71.8%) of those receiving placebo experienced at least moderate pain (Fig. 2).
*Repeated-measures ANOVA; #Tukey's post-hoc test. a-b: significant difference; a-a, b-b and c-c: no significant difference.
Approximately 72% of patients required a second anesthetic application when exposed to the placebo, while a significantly lower percentage (37.5%) of patients receiving EMLA had to be re-anesthetized. In addition, more than 90% of patients receiving EMLA, 75% of patients receiving benzocaine and 50% of patients receiving placebo did not require injectable anesthesia (Table 3).
*Chi-squared test; #Tukey's post-hoc test. a-b: significant difference; a-a and b-b: no significant difference.
Most individuals reported satisfaction with the injectable anesthetic and EMLA during subgingival SRP. In contrast, more than half (59.4%) of the patients receiving benzocaine and nearly two thirds (65.7%) of those receiving placebo were dissatisfied with the anesthetic. The multivariable model showed that patient dissatisfaction with benzocaine and placebo was approximately 10 times greater than with the injectable anesthetic, even after adjustment for gender, age, operative time and PD (Table 4). No significant difference in patient dissatisfaction was detected between EMLA and the injectable anesthetic.
*Z-test with p adjusted by the Bonferroni correction. a-b: significant difference; a-a and b-b: no significant difference. **Adjusted for gender, age, operative time and probing depth.
Strong correlations were found between the responses of the VAS and VS for nearly all anesthetic procedures (r=0.841; p<0.0001).
Most individuals reported no adverse effects from the different anesthetics tested, such as pain, discomfort, ulceration, edema or flaking. Two patients who received benzocaine and three who received EMLA reported numbness in the glottis region, but did not report any discomfort or dissatisfaction. However, as demonstrated in the flow chart, one patient in the benzocaine group and two patients in the placebo group left the study due to the pain they experienced after the SRP procedure.
Discussion
It is well accepted that injectable anesthesia is the first choice for routine SRP procedures ( 2 , 14 ). However, needles are associated with pain, anxiety and fear ( 1 , 4 ). As a result, some patients prefer to endure mild or moderate pain during SRP rather than receive an injection (3). In the present study, EMLA provided topical anesthetic effectiveness similar to injectable lidocaine and better than topical benzocaine or the placebo substance. However, in the EMLA group, anesthesia had to be repeated in more than one third (37.5%) of the patients to be effective.
A number of studies evaluating topical intrapocket anesthesia have used Oraqix, which was developed for periodontal use and has demonstrated satisfactory effectiveness for subgingival SRP ( 3 , 7 , 8 , 9 ). Oraqix was found to be safe and effective at doses ranging from 3.5 g (2.5 mL) (15) to 8.5 g (6 mL) (16) per session of subgingival SRP. Studies have demonstrated the superior effect of EMLA in comparison to placebo substances in reducing discomfort during dental procedures (11 , 17 - 19). EMLA also results in less pain and discomfort during treatment for mild chronic periodontitis in comparison to a placebo (11), with results similar to those obtained with a lidocaine adhesive, and both anesthetics have proven better than electronic anesthesia in reducing pain during subgingival SRP (10). Moreover, EMLA significantly reduces pain during SRP with manual curettes and ultrasonic scalers in comparison to manual curettes alone without any anesthetic modality (6). The present findings are in agreement with these data and provide additional information, as this study offers a comparison of the effectiveness of EMLA, injectable 2% lidocaine, topical benzocaine and a placebo substance.
Pain intensity is often assessed using a VAS or VS (20). In the present study, the VAS was always used before the VS to avoid the influence of the verbal statement expressed with the latter. A VAS is useful for comparing pain in the same individual (20). Both scales demonstrated that injectable anesthesia and EMLA resulted in less pain during the SRP procedures in comparison to the other methods tested. Moreover, no differences were found between EMLA and injectable lidocaine. No previous study has compared these two products using pain scales. Van Steenberghe et al. (3) found that injectable anesthetics led to less pain in comparison to a heat-activated gel with the same composition as EMLA. Nonetheless, 70% of the patients in that study preferred the topical anesthesia because it avoided the discomfort, prolonged duration and numbness of surrounding tissues that are inherent to injectable anesthesia and can affect activities of daily living. Moreover, patients undergoing periodontal maintenance also prefer topical anesthesia ( 12 , 21 ). These findings may also explain the satisfaction with EMLA found in the present study. Notably, 12 individuals who received the injectable anesthetic reported mild to moderate pain, which may have been due to confusing the pain sensation with discomfort during the SRP procedure (2).
The VS scores revealed that patients treated with 2% benzocaine or the placebo felt more pain than patients treated with EMLA or injectable lidocaine, suggesting that EMLA is a suitable anesthetic. These findings are in agreement with previous data showing that patients treated with lidocaine and prilocaine experience less pain than patients receiving a placebo substance when a VS is used for pain assessment (7,8,9,11). Moreover, a strong correlation was found between the results of the VAS and VS, which is also in agreement with data described in a previous study (10).
Approximately 72% of the placebo group required a second topical anesthetic application during SRP compared to 37.5% of patients in the EMLA group. Moreover, 50% of patients who received the placebo, 25% who received benzocaine and 6.2% who received EMLA reported pain intolerance after the second administration of topical anesthesia. These findings are in agreement with previous studies comparing a 5% anesthetic gel with a placebo (7-9). However, one study compared EMLA with electronic anesthesia and a lidocaine adhesive, and no patient required a second application of either topical or injectable anesthesia (10).
The present study was a randomized clinical trial, which is the gold standard for evaluating interventions due to the lower chance of bias. However, the split-mouth design may lead to confounding of treatment effects with carry-over effects. In the present study, a seven-day interval was respected between SRP sessions. Since the main outcome (pain) has a quick onset and reversibility, with no chance of residual effects from the anesthetics after seven days, it is quite likely that the washout period was sufficient to exclude any influence of one treatment over another (carry-over effect). The residual effect of the pain experience, however, cannot be estimated (22). Another limitation of the split-mouth design is the need for patients with symmetrical disease patterns, which can encumber the recruitment process. However, this was not a problem in the present study, since most of the eligible participants met the inclusion criteria. The advantages of the split-mouth design are that a paired analysis requires fewer participants than parallel study groups (23) and that comparing the anesthetics in the same individual reduces the effect of confounding variables, as each participant serves as his/her own control.
Although subgingival SRP was performed by different operators, this fact likely did not exert a substantial influence on the results, as the same operator always performed the procedures on the same individual. Furthermore, each operator received detailed training prior to the onset of the study and all SRP procedures were monitored by two professionals experienced in periodontics (RPA and FBZ).
Although the number of dropouts was not small, it was similar to figures reported in other clinical trials involving the follow-up of patients undergoing dental procedures. The number of patients who dropped out of the study after benzocaine (n=3) or placebo (n=3) was slightly higher than the number who dropped out after EMLA (n=2) or the injectable anesthetic (n=1). One third of the dropouts were attributed to pain, and it is plausible that the same occurred among those who did not return for the follow-up evaluation. However, as all patients received all treatments and no imbalance among the groups was found regarding the timing of the dropouts, it is believed that this did not lead to selection bias. Our results were based on a per-protocol analysis. An intention-to-treat analysis was not performed because it could increase the risk of falsely claiming noninferiority (type I error), as it often leads to smaller observed treatment effects (24).
The topical anesthetic employed herein is known to have a short duration of action (15 to 20 min) (15). However, mean operating time was nearly 30 min per tooth. This longer operating time in comparison to that reported in other studies involving topical anesthetics (8-10) likely explains the higher pain scores. It should be pointed out that mean pocket depth and operating time were similar among the anesthetic modalities, which reduces the possibility of bias.
Eligible individuals were required to have four sextants in which at least two teeth had a PD and CAL ≥5 mm. Treatment was divided into two phases: treatment for gingivitis followed by treatment for periodontal disease. This approach allowed more time for the patients to learn good oral hygiene techniques and reduced the complexity of the subgingival treatment (25). Moreover, having at least eight teeth (2 per sextant) with deep PD sites after gingivitis treatment was one of the eligibility criteria, and the results demonstrate that deep sites can be effectively anesthetized with EMLA. Previous studies only included patients with VAS scores ≥30 mm upon periodontal probing ( 9 , 10 ). No pain threshold was used in the present study in order not to restrict the findings to sensitive individuals, which would reduce the degree of external validity.
In conclusion, EMLA exhibited effectiveness comparable to injectable anesthesia and performed better than 2% benzocaine during SRP. Thus, EMLA is a viable anesthetic option during scaling and root planing, despite the frequent need for a second application.
Purpose: To evaluate the efficacy of topical anesthesia as an alternative to peribulbar or retrobulbar anesthesia in posterior vitrectomy procedures.
Methods: Posterior vitrectomy using topical anesthesia (4% lidocaine drops) was performed prospectively in 134 eyes (134 patients) with various vitreoretinal diseases, including severe proliferative diabetic retinopathy (n = 69), vitreous hemorrhage (n = 12), rhegmatogenous retinal detachments (n = 11), epiretinal membranes (n = 10), macular holes (n = 7), dislocated crystalline lens or intraocular lens (n = 6), giant retinal tears (n = 5), intraocular foreign bodies (n = 3), trauma (n = 3), endophthalmitis (n = 3), subfoveal choroidal neovascular membrane (n = 3), and neovascular glaucoma (n = 2). In 26 (19.4%) eyes, posterior vitrectomy was combined with a scleral buckling procedure, and in 84 (62.6%) eyes, argon laser photocoagulation was performed. Preoperative and intraoperative sedation of varying degrees was necessary. Subjective pain and discomfort were graded from 1 (no pain or discomfort) to 4 (severe pain and discomfort).
Results: All patients had grade 1 pain and discomfort during most of the procedure. All patients had grade 2 (mild) pain and discomfort during pars plana sclerotomies, external bipolar cautery, and conjunctival closure. The average amount of 4% lidocaine drops needed during each procedure was 0.5 mL. No patient required additional retrobulbar, peribulbar, or sub-Tenon anesthesia.
Conclusions: This technique avoids the risk of globe perforation, retrobulbar hemorrhage, and prolonged postoperative akinesia of the eye. With appropriate case selection, topical anesthesia is a safe and effective alternative to peribulbar or retrobulbar anesthesia in three-port pars plana vitrectomy procedures. | https://pubmed.ncbi.nlm.nih.gov/10696746/ |
We are looking for an anesthesiologist. He/she should be able to use intravenous, local, caudal or spinal methods to administer sedation or pain medication during surgical and other medical procedures, and to provide and sustain airway management and life support during emergency surgery.
Roles and Responsibilities
- They ensure that the patient is fit enough to undergo an operation before the surgery.
- They agree on an anesthetic plan.
- They make sure that patients understand what will happen during and after the operation.
- They should be able to get patients ready for surgery.
- They should provide anesthesia and ensure safe pre-operative care.
- They must be able to provide pain relief to patients using anesthetics and analgesics.
- They continue anesthesia in the operating theatre when necessary.
- They must monitor patients while they’re under anesthesia to make sure they remain in a stable condition.
- They must check their blood pressure, heart activity, oxygen and carbon dioxide levels, breathing, and body temperature.
- They should be able to resuscitate and stabilize patients during emergencies.
- They must know how to reverse anesthesia and relieve and manage post-operative pain to support patients’ recovery.
- They should provide care for patients in chronic pain.
- They should be able to work with a range of other health professionals, such as surgeons, operating department practitioners and theatre nurses, to ensure patient wellbeing.
- They must perform administrative tasks related to patient care, including writing summaries of patient treatment and discharge letters.
- They should participate in an agreed on-call rota and take on an equal share in providing emergency cover.
- They should take part in training, teaching, and supervising more junior staff in both critical care and anesthesia. | https://doctifyindia.in/job/job-listing-anesthesiologist-doctor-near-me/ |