While a teenager might be less than thrilled by the idea of their dad using Facebook, social networks matter enormously. And while Facebook might be what we associate the term with in the 21st century (The Social Network), social networks are broadly defined as networks of family, friends, and other personal contacts connected by interpersonal relationships. Despite their importance, relatively little is known about the role social networks play in the lives of low-income fathers. A recent study by Mathematica Policy Research attempts to fill this information gap by looking at low-income fathers’ connections to friends and family, the type of support these connections provide, and the resources fathers access for help. As part of the Parents and Children Together (PACT) evaluation, Mathematica researchers conducted three rounds of in-depth interviews with fathers enrolled in one of four Responsible Fatherhood (RF) programs. The men in the PACT evaluation were predominantly African American and faced an array of challenges, including low levels of education and employment as well as past involvement in the criminal justice system. The RF programs in which these fathers took part are designed to promote and encourage fathers’ involvement in the lives of their children. They offer services that focus on parenting and fatherhood, economic stability, and healthy relationships and marriage. The study found that, compared with national norms, fathers in the PACT evaluation have less robust social networks. On average, the fathers in the study described social networks consisting of only five friends and family members, far below the national average of 23 core ties reported in the Pew Internet and American Life Project. Although the average network was small, the size and makeup of individual fathers’ social networks varied: some had relatively robust networks while others were extremely disconnected. During the interviews, the fathers described how much they valued the support of the other fathers in their RF program. They reported appreciating knowing that they were “not alone,” and that their problems “weren’t so bad” in comparison with those of other fathers. However, despite the high value fathers placed on these interactions, very few men said they formed connections there that continued after participation in the program had ended. The study also found that fathers used their social networks for four main types of support: emotional, financial, in-kind, and housing. They overwhelmingly cited family members and friends as providing emotional support. Fathers with smaller and less diverse social networks reported drawing on fewer types of resources, while those with larger networks more frequently used all four types of support. In addition to the RF program in which they had enrolled, most fathers reported using supports from two other organizations, typically public entities or nonprofits. Fathers with larger social networks named twice as many organizational sources of support as fathers with fewer or no social ties did. The negative consequences of smaller social networks may include having less access to resources and less social capital to draw upon. Of the men with no ties, a few reported feeling a “void” because they were disconnected from their family members.
A couple of these highly disconnected fathers mentioned that having no friends contributed to feelings of loneliness. Interestingly, the study showed that low-income fathers with relatively weak social networks did not appear to compensate for their lack of connections by relying on more organizations for support and resources. By contrast, research on social networks finds that, regardless of income, women tend to report larger and more diverse networks than men, and that mothers do compensate for weak social ties by connecting to organizations for help. According to Castillo and Fenzl-Crossman, fathers’ social networks are significantly and positively correlated with their involvement with their children. And as I discussed in a recent blog post, a father’s involvement is central to the healthy development of his children. The report acknowledges that most fathers both need and want the kind of support that social networks provide. Moving forward, this study shows that it is important for RF programs to provide strategic and targeted outreach to low-income fathers. One straightforward step suggested in the report is that, to address the social isolation experienced by many of these fathers, RF programs may wish to host ongoing peer support groups that continue after the program is over. Regardless of where these efforts start, fathers need the kind of support that social networks provide in order to fully engage as parents.
https://www.newamerica.org/education-policy/edcentral/importance-social-networks-among-low-income-fathers/
The Chatham Bridge rehabilitation is finished, the Virginia Department of Transportation announced this week. The $23.4 million project improved the condition of the Route 3 Business bridge over the Rappahannock River, which carries around 16,000 vehicles a day and connects Stafford County and the City of Fredericksburg. Construction started in June 2020, and the bridge reopened to traffic on Oct. 10, 2021. Project contractor Joseph B. Fay Co. completed the bridge deck repairs at an accelerated pace ahead of contract requirements to reopen the bridge to traffic as soon as possible. The project team replaced the bridge deck and travel surface, and repaired the bridge approaches and substructure. After the detour was lifted in October, repair work continued underneath the bridge. Before the project, Chatham Bridge was posted with a 15-ton vehicle weight limit, which has been removed. Vehicles of all legal loads can cross the bridge, including heavier-weight emergency-response equipment. Pedestrians and cyclists have been able to use the new 10-foot-wide shared-use path on the bridge since October. The new path is separated from the travel lanes by a barrier. With the project’s completion, the path now connects with a Stafford trail along River Road that passes underneath the Chatham Bridge. Pedestrians and cyclists now have uninterrupted, dedicated pedestrian facilities from Pratt Park in Stafford that cross the river and connect with the sidewalk and trail network in the City of Fredericksburg. The shared-use path also has a scenic overlook of the Rappahannock River at the bridge midpoint. New LED lighting was added, and matches the style of existing light posts in the downtown Fredericksburg area. The replacement concrete bridge rail retained the existing open-view appearance. As part of the project, permanent road improvements were made in advance along the signed detour route. The left-turn lane on the Dixon Street exit ramp from the Blue and Gray Parkway in Fredericksburg was extended in January 2020 to hold a greater number of vehicles, which improved driver access to the right turn lane. Bravo to everyone associated with this project!
https://blog.fredericksburgva.com/pedestrian-trail-opens-under-chatham-bridge/?utm_source=rss&utm_medium=rss&utm_campaign=pedestrian-trail-opens-under-chatham-bridge
Hopson Road and Church Street Rail and Roadway Improvements, STIP U-4716, FONSI – comments and responses (page C-13).

Comment (continued): “…similar to the recently constructed railroad bridge over NC 54, subject to cost participation from local governments. The new rail bridge should be ‘wide’ enough to accommodate not only the existing rail line, but also additional lines to support Triangle Light Rail, Regional Light Rail and High-Speed Rail. A minimum of 3 tracks may be necessary.”
Response: This project proposes a 2-track bridge and does not preclude another bridge being built for future regional light rail. The currently proposed design for Hopson Road will accommodate a future crossing of Triangle Transit railroad tracks over Hopson Road with a minimum 17-foot clearance.

C-32, Pete Schubert, email received 2/1/10

Comment: “The new railroad bridge over Hopson Road must be long enough to accommodate the future (i.e., improved) Hopson Road, including but not limited to: 5' sidewalks on both sides of the road to safely accommodate pedestrians, and 4' wide bicycle lanes or 14' wide outer lanes to more safely accommodate bicyclists in both directions. Note that this is in addition to any planned capacity widening (additional travel lanes) and center median or turn lanes for Hopson Road. The design to include 5' sidewalks should be irrespective of an interlocal agreement with the City of Durham, i.e., the design should anticipate the need for sidewalks and plan for their eventual construction, whether incidental to this project or in the future.”
Response: The ultimate typical section for Hopson Road in the project area includes 16-foot outside travel lanes, exclusive of the 2-foot gutter, that will accommodate bicyclists. Hopson Road and Church Street in the project area include 5-foot sidewalks.

Comment: “The new railroad bridge over Hopson Road must also be wide enough to accommodate the planned and future expansion of rail capacity within the rail corridor, to include not only existing and future freight traffic, but local and regional light rail and regional high-speed rail. It is anticipated that a minimum of 4 tracks will be needed on the bridge. Given this track capacity, careful attention should be paid to ‘daylighting’ the roadway under the bridge, both to reduce the tunnel effect and to enhance user safety for bicyclists and pedestrians. A combination of natural lighting, provided by openings between tracks to let in daylight, and artificial lighting at night should be included.”
Response: The proposed project would construct a railroad bridge over Hopson Road that would accommodate two tracks: the existing track and a future railroad siding. Additional new tracks that may be constructed by Triangle Transit or others would need to be carried over Hopson Road on a separate structure. The design of Hopson Road would accommodate a future crossing to the east that would provide a minimum of 17 feet of clearance under the structure. During final design, NCDOT will coordinate with the City of Durham, and lighting will be provided where appropriate.

Comment: “At present, and increasingly so with the planned closing of the Church Street connection to Miami Boulevard, Hopson Road is a significant east-west bicycle connector route from RTP to points east, northeast, and southeast. While any temporary closing or detour of Hopson Road during construction will present but a minor inconvenience to motor vehicles, this will create a significant burden for human-powered vehicles. As such, it is critical that Hopson Road remain open and passable to cyclists during the entire period of construction. At a minimum, all travel lanes should be smooth paved with a screeded and rolled hot-mixed asphalt wearing course (not graveled) and should be swept regularly to control dust, mud, and loose material, all of which create significant road hazards to cyclists.”
Response: NCDOT will try to reduce the impact of the closure to the traveling public while maintaining safety during construction.
http://digital.ncdcr.gov/cdm/compoundobject/collection/p249901coll22/id/482808/show/482798/rec/15
Engineers are using lightweight concrete as part of the work to repair bearings on an iconic bridge in Peterborough. The £5M job to restore the Nene Bridge in Peterborough has produced an innovative solution for the repair of four of its eight V-shaped piers while preserving its distinctive appearance. Indeed, it is claimed the project is the first in the UK to have reinforced concrete jackets constructed around V-shaped piers to preserve the aesthetics of a structure. Skanska, the contractor responsible for undertaking the bridge repair for client Peterborough City Council, is also using a concrete mix that is strong enough to strengthen the piers and carry jacking loads but light enough to minimise the additional load on the structure’s foundations. Repairs to the road bridge, which was built in the 1970s, have been triggered by recent inspections which have shown signs of structural distress to the bearings and cracking in the saddles of the piers. In addition, some of the piers have begun to show signs of strain. This posed a major problem for the council, since the bridge provides a crucial connection for vehicles travelling between the A1 and the A47. It is also an important link for pedestrians and cyclists travelling between the north and south of Peterborough. “This bridge is a key part of the city’s major route network, carrying in excess of 60,000 vehicles each day and it is vital this iconic structure is strengthened, so it can continue to be used without any restrictions for decades to come,” says head of Peterborough Highway Services Andy Tatt. Four of the 155m long bridge’s piers are in the River Nene while the other four are on land, located either side of a railway. Work is being carried out by Skanska on all four of the land piers and two of the river piers. The four land piers are each constructed on capped piled foundations while the two river piers being worked on are on spread footings. Replacing the bearings is a major challenge. The piers do not have enough space at their tops for positioning temporary jacks to lift the 9,200t deck so that the bearings can be replaced. In addition, the steel box girders forming the bridge spans have no jacking points and are unable to withstand the temporary forces exerted by jacking. “There was no jacking stiffness within the existing box girders and also no jacking points on the existing piers,” explains Skanska site agent Dan Wood. “There is no solution in the original structure to replace the bearings,” he adds. This is where the reinforced concrete jackets come in. The idea started with Skanska’s design team developing a plan to encase the piers in reinforced concrete jackets, strengthening and broadening them and providing jacking platforms from which to raise the structure and allow the bearings to be replaced. For the solution to work, the concrete used in the jackets must be strong enough for the piers to carry the jacking loads, but light enough to minimise the additional load on the foundations. “It would have been a lot easier just to wrap the whole thing in a big block of concrete,” says Skanska project design team leader Stuart Watkins. But he says this solution would have been untenable if the bridge’s unique design was to be respected. And if standard concrete had been specified, the reinforced concrete jackets would have added 100t to each of the bridge piers – too much for the foundations to bear. “We did some analysis and worked out what level of additional load we felt comfortable adding to the piers. 
It works out to be about 10% of additional weight on top of the piers without having to start modifying the existing foundations and making them stronger. We did not want to do that because some of these foundations are 6m or 7m underneath the riverbed,” adds Watkins. So Skanska turned to materials specialist Aggregate Industries, which suggested the use of Lytacrete, a concrete mix using Lytag, a lightweight secondary aggregate. The mix was refined following trial pours to tailor its use at Nene Bridge. The design strength of the concrete is 50N/mm², which is the strength required to take the anticipated loads from the jacking procedure. Lytag is a synthetic lightweight aggregate, produced by sintering pulverised fuel ash at roughly 1,200°C to 1,300°C to create spherical, chemically inert pellets with a lightweight, porous structure. Using Lytag instead of traditional aggregate reduces the dead load of the jackets by 25%, while offering the same level of structural performance. The lightweight, flowable, self-compacting concrete reduced the weight added to each bridge pier from 100t to 60t. “The river pier foundations are deep. So that was one of the reasons we manipulated the design to make the jackets the size they are, to try and ensure we stayed within that 10% additional weight. Otherwise the risk would be that the piers would start to settle, meaning they would lower and sink into the ground slightly,” says Watkins. Work began in April last year when steel reinforcement cages were wrapped around the piers. The distinctive V-shaped appearance of the original piers had to be retained, and this had to be accommodated by the complex formwork built up around each pier. As two of the supporting piers are within the river bed, a temporary coffer dam has also been built so that they can be accessed below water level. The steel reinforcement varies from 16mm diameter to 25mm depending on its position within the structure. Concrete was placed using an M20 pump. In addition, Skanska has externally strengthened the bridge’s four steel box girders to accommodate jacking by fixing fabricated steel loading brackets to their outside faces. The brackets have been positioned to suit the existing stiffener locations within the steel box beams to ensure the girders are strong enough during jacking of the bridge. This method was selected to avoid the need to complete fabrication and welding within the confined space of the box girders, which site workers say poses significant health and safety risks. A key aspect of the project is extensive use of 3D modelling software. “We came with these SketchUp [software] drawings to try to put a series of phases together for the project, and then from there we were able to get more of an idea on pricing, the programme, how long the programme was going to take and what contractors we would need to get involved,” says Skanska project manager Scott Blackburn. So far, Skanska has fully encased the four land piers with concrete jackets but has yet to complete the work on the two river piers to be strengthened. Bearing replacement has not yet begun. Work is on schedule to be completed by October 2019. Once complete, the reinforced piers are expected to last for approximately 50 years.
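The trade-off Watkins describes boils down to a simple check: the weight added by each jacket has to stay within roughly 10% of the load the pier foundation already carries. The short Python sketch below expresses that check in the spirit of the analysis described in the article; the 800t existing load per pier is a purely hypothetical figure chosen for illustration, not a number from the Nene Bridge project.

# Minimal sketch of the kind of added-load check described above.
# The existing pier load is a made-up illustrative value, not project data.

def jacket_within_allowance(existing_load_t: float, jacket_weight_t: float,
                            allowance: float = 0.10) -> bool:
    """Return True if the jacket adds no more than `allowance` of the existing load."""
    return jacket_weight_t <= allowance * existing_load_t

EXISTING_LOAD_T = 800.0  # hypothetical load already carried per pier foundation

print(jacket_within_allowance(EXISTING_LOAD_T, 100.0))  # normal-weight concrete jacket (~100t)
print(jacket_within_allowance(EXISTING_LOAD_T, 60.0))   # lightweight Lytag jacket (~60t)

With these assumed numbers the 100t jacket would breach the allowance while the 60t lightweight jacket would not, which is the gist of why the lighter mix was attractive.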
https://www.newcivilengineer.com/tech-excellence/projects-nene-bridge-strengthening/10040852.article?blocktitle=Most-commented&contentID=-1
Although most of us will usually associate accidents involving lorries with what happens out on the open road, many incidents with damaging consequences occur within the confines of the workplace itself, which is why health and safety is of paramount importance if you operate a workplace which sees the comings and goings of lorries and other vehicles on a regular basis. There can be wide-ranging legislation in place with regard to lorries in the workplace and health and safety, depending on the type of operations your company carries out. Many aspects are covered under the Health & Safety at Work Act 1974, which requires a company to ensure the safe use, handling and storage of all vehicles and their loads, but if that also means dangerous substances, then companies will also need to adhere to the Control of Substances Hazardous to Health Regulations as well as other legislative measures which might be in place. If you’re operating a construction site, for example, it must be designed in such a way as to allow the safe movement of both vehicles and pedestrians under the Construction (Health, Safety & Welfare) Regulations 1996. Before putting health and safety procedures into place, it’s firstly necessary to carry out a full risk assessment of the site. A company must look at how its lorries are used and at the ways in which they might come into contact with pedestrians, for example. This might mean that action needs to be taken to deal with potential risks, such as introducing speed bumps and strictly enforced speed limits inside the compound, one-way systems with separate entry and exit points, and clear warning signage, and it might also incorporate further driver training. Other areas of concern will include things like adequate warning or alarm systems where there may be potential to be crushed or trapped by lorries or other vehicles such as forklift trucks. High-visibility clothing is also required for all those who work in, on or around vehicles, even if you’re not necessarily driving the vehicle yourself. Loading and towing operations will also need to be considered, as well as ensuring that vehicles cannot move or be moved unintentionally and, obviously, there is also legislation in place with regard to the roadworthiness and regular maintenance of lorries, driver training and the hours of driving which are permitted in any one period. It’s important to make sure that a load is secure on board a lorry and that it doesn’t pose any health and safety risks to either the driver, other workers or members of the public. The Department for Transport has issued a Code of Practice in this regard which looks at this issue in greater depth, covering things like assessing how a load might move inside the vehicle during transit and how to prevent that occurring, how strapping and chains should be used to secure a load, and how to ensure that drivers have safe areas in which to load and unload within a depot. Drivers should also be encouraged to keep a record of their loads and to report any incidents that might have compromised safety - near misses with other vehicles and pedestrians, for example. The operations of lorries in the workplace are always going to be subject to changes in working practices. There may be a need for additional vehicles at certain periods of the year – when a company is busy at Christmas, for example – or a company might introduce a brand new fleet of vehicles. 
Therefore, it’s imperative that a fresh risk assessment is carried out at regular intervals and particularly whenever new practices are put into place in order to minimise the dangers and to protect the safety of the workforce.
http://www.workplacesafetyadvice.co.uk/lorries-in-the-workplace-and-load-security.html
The purpose of pavement is simple: to support traffic loads. Or so most would think… But what about the space below pavement? And what about the purpose pavement serves as a base around urban trees? How can street trees be provided for underground while meeting all the structural requirements of engineering a roadway? In this article, we discuss the engineering behind what maintains the structural integrity of pavement and look at how street trees can be sufficiently supported without compromising that integrity. Paved structures provide a smooth surface for vehicles, bicycles, and pedestrians to improve efficiency and comfort. Pavement usually consists of numerous layers, placed over the in situ material, which work together to support traffic and environmental conditions. The surface layer may be made of asphalt, concrete, aggregate, or interlocking blocks. Concrete provides a rigid paved structure while most other pavements are flexible. Composite pavements, consisting of both flexible and rigid elements, are often the result of pavement rehabilitation. Paving materials are also either porous (permeable) or nonporous (impermeable). Permeable materials have open voids between their particles or units that allow water and air movement around the paving material. Although some permeable paving materials are indistinguishable from impermeable materials, their environmental impact is quite different. Permeable paving materials include pervious concrete, asphalt, single-sized aggregate, resin-bound paving, and open-jointed blocks. A major benefit of permeable paving is its contribution to growing healthy urban trees through the admission of vital water and air to their rooting zones. Permeable pavements behave like a natural soil surface, enabling the soil moisture to fluctuate with rapid wetting followed by drying and re-aeration. Other advantages of porous paving include better management of urban stormwater runoff, resulting in fewer pollutants thanks to capture and breakdown in the subgrade, and control of erosion and siltation. Disadvantages include the inability of permeable pavements to handle large rainfall events on their own, possible soil contamination, and, in some climates, limitations arising from the reduced effectiveness of road salt on porous surfaces. Cost, longevity, and maintenance are also issues to consider. Most of these potential concerns, however, can be managed by integrating permeable pavements with standard stormwater facilities and prudent planning of porous locations. The subgrade is the soil underneath any paved structure and bears the load of any traffic on the surface. Subgrades are composed of naturally occurring earth, disturbed existing soil, and/or fill brought from elsewhere. Even though the subgrade provides the main support of a paved surface, it is the structurally weakest component. For a pavement structure to be durable, it must protect the subgrade from deforming, and it does this by spreading the load over the subgrade. The site’s subgrade characteristics are an important factor in pavement engineering, because natural soils vary from site to site. Subgrade is typically composed of clay, silt, gravel, and sand, each of which has different chemical properties and particle sizes. The total load on a paved surface comprises the overall weight of individual loads, such as vehicles and pedestrians, and the frequency of such loading events over time. 
This range of loads is conveyed in terms of a common unit of measurement, referred to as a standard reference vehicle. Any vehicle can be related to the reference vehicle by its EWL (equivalent wheel load) or ESAL (equivalent single-axle load). The deterioration of pavement over time is directly associated with traffic load expressed as equivalent single-axle loads. When a paved structure is designed, its predicted ESAL is accounted for and its lifetime is calculated. Once the end of its estimated lifetime arrives, the pavement is assumed to require rehabilitation in one form or another. Paved areas in cities must be engineered to tolerate loads in accordance with applicable standards. Heavy emergency vehicles, such as fire trucks, must be able to access properties without causing severe pavement failure. Where underground tree pits are utilized, they must be capable of withstanding extreme loads while providing sufficient volumes of uncompacted soil for root growth. In addition to direct vertical loads, paved surfaces are also subject to significant lateral forces. Continual vehicular traffic can also cause road pavement to deteriorate adjacent to tree pit areas unless this is prevented. It is vital that engineered spaces for tree root systems are capable of supporting this kind of lateral force. Interlocking structural soil cells are a strongly recommended method of ensuring paved surfaces and their subgrades receive the foundational support they require to withstand the loads placed on them. These root cells have been designed to support enormous vertical and lateral loads while providing uncompacted soil for tree roots, allowing tree root systems to be brought closer to the pavement surface. Structural soil cell modules lock together to form a skeletal structure with excellent vertical and lateral modular strength. StrataCells support pavement loads by dispersing pressure throughout the matrix in the same way as an engineered base course. An assembled StrataCell system has been FEA-tested to a 550 kPa vertical load. Engineers have even calculated that with only 300mm of granular pavement, the StrataCell matrix can support maximum vehicular traffic loads, while providing 94% void space for tree root growth or stormwater harvesting.
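To make the ESAL idea more tangible, here is a small Python sketch using the widely quoted AASHTO "fourth-power" approximation, in which the damage done by one pass of an axle grows roughly with the fourth power of its load relative to a standard 80 kN (18,000 lb) single axle. The axle loads, traffic volume, and design life in the example are made-up illustrative values; real pavement design uses more detailed load equivalency factors.

# Rough ESAL estimate using the AASHTO fourth-power approximation.
# All axle loads and traffic figures below are illustrative assumptions only.

STANDARD_AXLE_KN = 80.0  # 18,000 lb reference single axle

def load_equivalency_factor(axle_load_kn: float) -> float:
    """Approximate damage of one axle pass relative to the standard axle."""
    return (axle_load_kn / STANDARD_AXLE_KN) ** 4

def design_esals(axle_loads_kn, passes_per_day, design_years):
    """Total equivalent single-axle loads accumulated over the design life."""
    daily = sum(load_equivalency_factor(load) * passes_per_day for load in axle_loads_kn)
    return daily * 365 * design_years

# Example: a delivery truck with a 60 kN steering axle and a 90 kN drive axle,
# 200 passes per day, over a 20-year design life.
truck_axles = [60.0, 90.0]
print(f"{design_esals(truck_axles, 200, 20):,.0f} ESALs")

The fourth-power relationship is why a modest increase in axle load produces a disproportionate increase in pavement damage, and why heavy vehicles dominate the design ESAL total even when they are a small share of traffic.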
https://www.greenblue.com/gb/pavement-trees-the-space-between/
NOTICE IS HEREBY GIVEN that to enable KP Arborists to carry out tree felling works (“the Works”), the County Council of Cumbria has made an Order the effect of which is to prohibit any vehicle from proceeding along that section of the C4025 Irton, Santon from its junction with the C4030, extending in a northerly direction for approx. 300m. A way for pedestrians and dismounted cyclists will be maintained at all times and a suitable alternative route for vehicles will be signed and available via the unrestricted sections of the C4025, C4026 Santon Bridge, C4023 and the A595 Holmrook. Nothing in the Order to which this notice relates shall: 1. Apply to emergency service vehicles, or vehicles being used by statutory undertakers in the performance of their duties; and 2. Apply to anything done with the permission or at the direction of a police constable in uniform; and 3. Have the effect of preventing at any time access for pedestrians to any premises situated on or adjacent to the road, or to any other premises accessible for pedestrians from, and only from, the road; and 4. Apply to vehicles being used in connection with the Works. Any queries to the Highways Hotline 0300 303 2992 or email [email protected], quoting the reference C02I -203. The Order will come into operation on 20 September 2021 and may continue in force for a maximum duration of eighteen months. However, please note that it is anticipated that the restriction will only be required for 5 days and only as and when the relevant traffic signs are displayed.
https://www.nwemail.co.uk/announcements/public_notices/notice/176927.THE_COUNTY_OF_CUMBRIA__C4025_IRTON__SANTON___TEMPORARY_TRAFFIC_REGULATION__ORDER_2021/
Calls for robust plan for Leamington Lift Bridge

The community has banded together to urge Scottish Canals to save one of the last remaining icons of Fountainbridge's industrial heritage. Deterioration has hit the Leamington Lift Bridge, which crosses the Union Canal from Leamington Road to Gilmore Park, forcing canal bosses to close the bridge. Local groups, including the Fountainbridge Canalside Initiative (FCI), Re-Union Canal Boats, and Tollcross Community Council, have put pressure on the Scottish Government funded organisation to find a solution that will safeguard the future of the historic bridge. The temporary closure – expected to last until Christmas – means that no boats can pass under the bridge, although access for pedestrians and cyclists has been maintained. Engineers who recently visited the site for a heritage assessment said there was significant corrosion within the lifting gear inside the towers. Scottish Canals reassured the community that they were looking at all options to enable the safe re-opening of the bridge, with an anticipated cost of £250,000. “I think the point that FCI would like to stress is that the deterioration of the lift bridge illustrates a wider problem with the maintenance of Scotland’s canals,” said chairman Simon Braunholtz. “Volunteers locally have been trying to engage with Scottish Canals and with the Parliament to ensure a long-term commitment to our canal network – not simply for historic interest, but as an important asset. “We are concerned that insufficient attention has been given to this, which is resulting in expensive and disruptive closure of sections of the canals.” Owners of The Counter coffee boat on the canal said the sporadic opening and closing of the footbridge was also affecting their business as it “significantly” diminished the amount of passing trade. Andrew Brough, chair of Tollcross Community Council, said timing was also of the essence. “This vital and historic part of the canal, a local landmark, is just too precious to let ongoing deterioration go un-repaired. We urge Scottish Canals to make good repairs now before it gets any worse.” Scottish Canals said that they are continuing to work with contractors to develop and cost concepts for potential solutions, ranging from replacing the bridge deck with a lighter alternative to incorporating a new lifting bridge within the current structure. Local Green councillor and city canal champion Gavin Corbett said continued communication with the local community was vital. He said: “The Lift Bridge is one of the most distinctive features of the Union Canal so there is real passion for it to continue as a working bridge. At the same time, boaters are understandably frustrated that the canal basin is blocked off. “My own preference would be to see if we can upgrade the bridge for its 21st century use. “That would mean keeping the Victorian structure but replacing the 40 tonne deck with something much lighter, given that it no longer needs to take the weight of heavy goods vehicles. “That in turn would allow much lighter and more usable internal lifting equipment. “It’s essential that Scottish Canals has regular communication with affected canal users until a solution is agreed and a timescale outlined. I’ll be working closely with them on that.” The bridge was installed in 1893 to allow the flow of steam trains on top and the barges below, but it became unpopular with pedestrians who had long waits while the bridge was raised and lowered. 
In 1907 the decorative lattice footbridge was added to allow people to cross when the lift was in operation.
https://www.edinburghnews.scotsman.com/news/traffic-and-travel/calls-robust-plan-leamington-lift-bridge-248168
1) Analyzes and calculates weight data of structural assemblies, components, and loads for purposes of weight, balance, loading, and operational functions of ships, aircraft, space vehicles, missiles, research instrumentation, and commercial and industrial products systems.
2) Studies weight factors involved in new designs or modifications, utilizing computer techniques for analysis and simulation.
3) Analyzes data and prepares reports of weight distribution estimates for use in design studies.
4) Confers with design engineering personnel in such departments as preliminary design, structures, aerodynamics, and sub-systems to ensure coordination of weight, balance, and load specifications with other phases of product development.
5) Weighs parts, assemblies, or completed products, estimates the weight of parts from engineering drawings, and calculates weight distribution to define balance.
6) Prepares reports or graphic data for designers when weight balance requires engineering changes.
7) Prepares technical reports on mass moments of inertia, static and dynamic balance, dead weight distributions, cargo fuselage compartments, and fuel center of gravity travel.
8) May prepare cargo and equipment loading sequences to maintain balance of aircraft or space vehicle within specified load limits.
9) May analyze various systems, structures, and support equipment designs to obtain information on the most efficient compromise between weight, operations, and cost.
10) May conduct research analysis to develop new techniques for weight estimating criteria.
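The weight-distribution and balance calculation in item 5 is essentially a moment sum. The short Python sketch below illustrates the idea: total weight and centre of gravity are computed from (weight, arm) pairs measured from a reference datum. The component names, weights, and arms are hypothetical placeholders, not figures for any real vehicle.

def center_of_gravity(components):
    """Return (total_weight, cg_position) for a list of (weight, arm) pairs.

    weight: component weight (e.g. kg); arm: distance of that component's own
    centre of gravity from a chosen reference datum (e.g. metres).
    """
    total_weight = sum(weight for weight, _ in components)
    total_moment = sum(weight * arm for weight, arm in components)
    return total_weight, total_moment / total_weight

# Hypothetical example: three assemblies measured aft of a nose datum.
parts = [
    (1200.0, 2.0),  # structure
    (400.0, 5.5),   # payload
    (250.0, 6.8),   # fuel
]
weight, cg = center_of_gravity(parts)
print(f"total weight = {weight:.0f} kg, centre of gravity at {cg:.2f} m aft of datum")

Checking that the computed centre of gravity stays within specified limits as payload and fuel change is the basic form of the balance and loading analyses listed above.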
https://jobdescriptionsandduties.com/job-description/33/weight-engineer-job100231/
The highly anticipated Queensferry Crossing opened to traffic in the early hours of Wednesday, kicking off a week of celebrations for the new bridge. Its £1.35bn price tag makes the publicly funded, 1.7-mile crossing the biggest infrastructure project in Scotland for a generation. The new bridge is the longest three-tower cable-stayed bridge in the world and will take a large proportion of the traffic that currently uses the 53-year-old Forth Road Bridge, becoming the main through route between Edinburgh and Fife. The Queensferry Crossing’s opening will come as a welcome addition for locals and tourists alike, helping to ease pressure on the Forth Road Bridge, which has been dogged by maintenance problems for more than a decade. The Forth Road Bridge will remain, though, serving as a crossing for pedestrians, cyclists and, eventually, buses. This will leave the new bridge free to accommodate an estimated 24 million vehicles a year. Plans for the construction of the Queensferry Crossing were first announced in 2007 as a way of easing traffic and congestion in the area, with work beginning in 2011. The new bridge has benefited from key improvements in engineering, enabling it to be fitted with wind barriers which can withstand heavy gusts, allowing it to stay open in all weathers. Additionally, it has been fitted with around 1,000 sensors, which will give advance warning of any problems, so that maintenance teams can pre-empt potential issues. A week of planned events will give locals the chance to walk across the bridge before it is officially opened to motorists on Thursday 7th September.
https://www.hotels-more.com/en/news/scotland-s-state-of-the-art-queensferry-crossing-is-the-longest-cable-stayed-bridge-in-the-world
This Locally Administered Federal Aid (LAFA) project addressed structural deterioration of the bridge carrying southbound traffic on JJ Audubon Parkway over Ellicott Creek in Amherst, New York. The bridge had not been rehabilitated since being constructed in 1982, and a yellow structural flag had been issued for deteriorating concrete and exposed steel reinforcement every year since 2012. Similar – but less severe – deterioration was also occurring on the adjacent northbound bridge, which was built at the same time. The final project revolved around a Complete Streets approach to accommodate vehicles and pedestrians because the project site was adjacent to an intersection that was between the University at Buffalo’s North Campus’s primary dormitory complex and its educational buildings. The approach entailed implementing a road diet throughout the project limits that reduced the number of through travel lanes to one each for northbound and southbound vehicles on JJ Audubon Parkway, in order to better suit the current and projected traffic levels. Rather than have two separate structures, the road was realigned to put both directions of traffic on a single bridge, allowing for the replacement of the northbound bridge superstructure with a prefabricated steel truss for pedestrians and bicyclists. The former northbound travel lanes were redeveloped with significantly less impervious pavement to accommodate the pedestrians and bicyclists. The project also improved safety at the intersection with Frontier Road, as the result of the construction of a modern roundabout to replace the signalized intersection. The project work included full-depth pavement reconstruction, milling and overlay, drainage modifications, and realignment of segments of the Ellicott Creek Trailway (a popular multiuse path). Erdman Anthony was responsible for project management, ROW incidentals – including producing acquisition maps – and all design phases. The project required significant coordination with NYSDOT and the University at Buffalo.
https://www.erdmananthony.com/Our-Projects/project/523
Abnormal, or indivisible, loads may be carried on vehicles on the public roads provided the carriage is undertaken in line with a section 44 permit (section 44 of the Road Traffic Act 1988). These rules allow the carriage of ‘abnormal indivisible loads’ which exceed the weight and/or dimension limits contained in the Road Vehicles (Construction and Use) Regulations 1986, the Road Vehicles (Authorised Weight) Regulations 1998 and also a variety of unusual vehicles, such as items of engineering plant or military vehicles, whose design and function prevents compliance with construction and use regulations, to be used on public roads in certain circumstances.
https://fta.co.uk/compliance-and-advice/road/abnormal-loads/abnormal-loads
Transport for London (TfL) is helping to reduce disruption during the City of London's vital maintenance work on Tower Bridge. The three-month closure, which started yesterday (1 Oct), means vehicles are unable to use the bridge. TfL is advising road users of disruption to the surrounding area and has announced a ban of all non-emergency roadworks on surrounding roads to help reduce the impact. Additionally TfL has also put in place plans to swiftly remove any vehicles blocking key surrounding routes to help reduce disruption. During the closure of the bridge, drivers, cyclists and bus users will need to use an alternative route and should allow extra time for their journey. A signed diversion is in place taking drivers northbound over London Bridge and southbound over Southwark Bridge. As Tower Bridge is outside of the Congestion Charge zone, drivers will not be liable for the Congestion Charge if they do not deviate from the signed alternative routes. Three London Bus routes that use Tower Bridge (42, 78 and RV1) are affected. Signed alternative routes for cyclists are also in place and pedestrians will still be able to cross the bridge, although there will be three weekends when the bridge is closed entirely and pedestrians will be required to use alternative routes. To help support the City of London, TfL has carried out a range of activity to alert local residents, businesses and road users of this closure. More than 900,000 emails have been sent out to road, bus and other public transport users about the work, ensuring local people, businesses and those making deliveries in the surrounding areas are aware of the closure. Signage is in place on Tower Bridge and the approaching roads to warn of the closure. Leon Daniels, Managing Director of Surface Transport at TfL, said: 'The City of London's decision to close Tower Bridge for this work will avoid the risk of any further unplanned closures for emergency repairs. We've been working closely with the City of London to minimise the impact of this vital refurbishment and to ensure that Londoners have the travel advice they need. We have also banned local roadworks during the closure to reduce the impact further. 'We do understand concerns about this taking place at the same time as Network Rail's work on Tooley Street. However, our analysis has shown that combining the works will make only a small difference in disruption, whereas to do both separately would see road users face disruption continuously until late 2018. 'Our advice to those travelling in the area is to check before they travel and to plan an alternative route or allow more time for their journeys as roads in the area will be busier than usual. Our website shows all the closures and available diversions.' The 122-year-old Tower Bridge was last refurbished in the 1970s and the iconic structure now requires major maintenance, including re-decking of the lifting bascules, new expansion joints, waterproofing of the viaduct arches and resurfacing. Chris Hayward, Chairman of the Planning and Transport Committee at the City of London Corporation, said: 'This decision to close Tower Bridge to vehicles has not been taken lightly. This course of action has been taken after years of extensive consultation and planning in conjunction with numerous stakeholders. We will use this time to repair, refurbish, and upgrade London's most iconic bridge, which has gone without significant engineering works for more than thirty-five years. 
'We are working hard to minimise disruption to both pedestrians and motor vehicles. The bridge will remain open to pedestrians for the entirety of the works, apart from three weekends. We recognise that these works may cause some frustration to residents and commuters, but these vital works really do need to take place.' Variable messaging signs are now in place, advising drivers of the closure, and TfL will be providing up-to-date information through the @tfltrafficnews Twitter feed and helping those who normally use the bridge plan their journey with the webpage tfl.gov.uk/tower-bridge-closure. Notes to Editors: - The pedestrian closures of the bridge will be between 08:00 - 22:00 on the weekends of 26-27 November, 3-4 December and 10-11 December - The Tower Bridge exhibition will remain open at all times - Tower Bridge opens for river traffic at 24 hours' notice around 1,000 times a year and this will be maintained - Images from within the bridge are available on request from the TfL Press Office About the City of London Corporation: The City of London Corporation provides local government and policing services for the financial and commercial heart of Britain, the 'Square Mile'. In addition, the City Corporation has three roles: - We support London's communities by working in partnership with neighbouring boroughs on economic regeneration, education and skills projects. In addition, the City of London Corporation's charity City Bridge Trust makes grants of around £20 million annually to charitable projects across London and we also support education with three independent schools, three City Academies, a primary school and the world-renowned Guildhall School of Music and Drama. - We also help look after key London heritage sites and green spaces including Tower Bridge, Museum of London, Barbican Arts Centre, City gardens, Hampstead Heath, Epping Forest, Burnham Beeches, and important 'commons' in south London. - We also support and promote the 'City' as a world-leading financial and business hub, with outward and inward business delegations, high-profile civic events and research-driven policies all reflecting a long-term approach.
https://tfl.gov.uk/info-for/media/press-releases/2016/october/reminder-minimising-disruption-during-tower-bridge-closure
Bridge in Rainham to be temporarily closed to vehicles

As a result of an inspection of expansion joints on a major road bridge in Rainham, remedial work must be carried out. The bridge over the railway track in Marsh Way links the A13 with the A1306. The joints on the bridge were found to be defective by Havering Council's Highways inspection team, who decided that the joints needed to be replaced. Failure to replace the joints urgently could lead to further deterioration of the structure, and may increase the length of time required to complete any repairs. In order to carry out this work safely and expediently, it will be necessary to temporarily close Marsh Way to traffic between its junctions with the A1306 (New Road) and the A13 for the duration of these essential works. During the work, traffic will be diverted via either Ferry Lane/Lamson Road or Ripple Road. It is anticipated that the work will take 5 days to complete, and Marsh Way will therefore be closed to vehicles between 10am on Monday 16 January and 4pm on Friday 20 January 2017. Pedestrians will be able to continue to cross the bridge during the works.
https://www.havering.gov.uk/news/article/86/bridge_in_rainham_to_be_temporarily_closed_to_vehicles
Dr Morgan Edwards (MBBS, BSc Hons) is a qualified Medical Doctor working as an Anaesthetic Trainee in Auckland, having returned home recently after 6 years of training in Australia. Her experience in the Australian healthcare sector in both primary and tertiary medicine has enabled her to broaden her skill set to include all areas of medicine, from primary preventative health to trauma and surgery to Intensive Care and Anaesthetics. This journey has led Morgan to become passionate about holistic healthcare and about her patients making informed decisions about their own health. Morgan advocates for her patients to seek as much information as they can about issues surrounding their health and their families’ health, and believes in her patients consulting a wide array of specialists – including Naturopaths, Medical Herbalists, and Medical Practitioners. Prior to studying Medicine, Morgan undertook a Bachelor of Science where she studied pharmacology, physiology and environmental science – giving her an understanding of the way in which all medicines (pharmaceutical and natural) interact with the body, and also a marked appreciation of the earth’s environment. Morgan is driven by her passion for organic living and, in an attempt to minimise her ecological footprint, she and her husband are proud local consumers – sourcing as many of the products in their home as possible from within 100km of where they live, as well as cultivating their own fruit and vegetables. With an undying love for cooking (preferably with a glass of local, organic wine in hand) and travel, this down-to-earth doc lives and breathes balance.
http://www.reneenaturally.com/blog/contributors/dr-morgan-kelly-edwards
As health-care delivery continues to evolve, moving away from the “paternalistic” model of medicine, the role of the advanced practice professional (APP) in caring for patients with hematologic malignancies is becoming more complex. We are in the era of shared decision-making (SDM), in which APPs act as patient advocates, patient educators, and liaisons between the health-care team and the patient (and his or her caregivers and support team). Although it is a relatively new concept, the collaborative, patient-centered SDM model has been widely adopted by clinicians and patients.1,2 The goal of this model is to make treatment decisions with the patient to achieve outcomes that matter most to the patient. Whether or not that goal is met can be determined by many factors, including the patients’ willingness to participate in treatment decisions and the medical institution’s approach to SDM. While SDM is now considered to be the dominant model of health-care delivery,2 its development is not yet complete. The Rise of the SDM Model In the 1960s, the prevailing model of health-care delivery was the paternalistic model – patients would defer to doctors regarding any treatment decisions. Later, health-care consumerism and health-care expenditures increased, and the patient-consumer started to say, “If I’m spending this much money for my health care, I should have a say in how I’m being treated.” That idea gave rise to the informed model in the 1970s. Though the physician was still making the treatment decisions, he or she spent more time educating the patient about the treatment. Adequately informed, the patient felt like part of the decision-making process. By the early 2000s, the health-care system changed to create “the perfect storm” in which SDM could take hold: the number of treatment options expanded, physicians found themselves having to explain what those options were, and patients were spending more on their health care. SDM also was bolstered by the implementation of the Affordable Care Act (ACA), which included a provision that encourages greater use of SDM “to facilitate collaborative processes between patients, caregivers or authorized representatives, and clinicians … and the incorporation of patient preferences and values into the medical plan.”3 The ACA’s SDM provision recognizes the value of patient decision aids and of involving informed patient preference when there is no clear clinical evidence to support one treatment option over another. The SDM Building Blocks The SDM model, published by Charles, et al in Social Science and Medicine in 1997, has four essential elements:4 - There are at least two participants: a health-care provider and a patient. - Both parties share information. - Both parties take steps to build consensus about the preferred treatment (medications, symptom management, side-effect management, etc.). - The health-care provider and the patient must reach a mutual agreement about which treatment to implement. The first three essential elements are easier to attain than the last, but that mutual agreement is central to the SDM philosophy. In fact, preliminary evidence has shown that when patients are actively involved in the decision-making process and are able to reach mutual agreement with their health-care providers, their treatment adherence, satisfaction levels, psychological well-being, and outcomes all improve.5 Unfortunately, the last piece of the SDM puzzle is often overlooked. 
APPs and physicians may make the mistake of assuming that the patient has automatically agreed to the treatment plan once he or she leaves the clinic. We need to be deliberate, though, in explicitly asking the patient whether he or she really understands and agrees with the treatment plan. SDM and the APP The number of treatment options available for our patients with hematologic malignancies may overwhelm them. Patients want to discuss those different options as they weigh what works best for them. That responsibility often lies with APPs. Hematology/oncology APPs participating in SDM continuously strive to reach patient-driven treatment decisions, helping patients navigate the complexities of cancer treatment decisions and their own personal values. These types of decisions require active participation from both parties and preservation of patient autonomy. In my experience, that can often be as simple as allowing enough time for discussion and deliberation about the treatment choices. (Although, given time constraints, this is often easier said than done.) The APP also needs to set aside time to evaluate patients’ outcomes throughout the treatment course, asking, “Is the patient satisfied with the decisions we made? Are there any regrets?” Gathering information ahead of time to share in a multidisciplinary team meeting and providing the patient with patient decision aids or other educational materials can also be helpful. It is difficult to consistently achieve that fourth element of SDM – more so than many health-care providers might think. Agreement should not be assumed – it should be explicitly verbalized. Personally, I like to ask a simple, direct question like, “Could you tell me the treatment regimen that we have agreed on?” Or “What are the different chemotherapies that are part of the combination therapy we are going to initiate?” If the patient is able to verbalize and repeat the treatment regimen, I feel confident that he or she has a good understanding of the treatment and – the key to SDM – has agreed to the treatment plan. SDM Roadblocks Research from a number of government agencies, including the Agency for Healthcare Research and Quality (AHRQ), points to better short- and long-term outcomes when clinicians and patients engage in SDM.6 According to data from the AHRQ, SDM has multiple short-term benefits; chief among these is the increase in patients’ confidence in treatment decisions and trust in the health-care team. In addition, the patient empowerment inherent in the SDM process leads to a decrease in patient stress and anxiety related to cancer treatment decisions. In the long term, research also has shown that SDM leads to better treatment adherence, better quality of life, and longer-term remissions. (The AHRQ has also developed a five-step process, called the SHARE Approach, for SDM. See the SIDEBAR for more information.) So, given the apparent benefits, why hasn’t SDM been adopted by health-care organizations on a larger scale? Colleagues and I conducted a systematic review of 30 inpatient and outpatient oncology settings to help answer this question.7 Our interviews with APPs revealed several barriers to participation in SDM, which we categorized into seven main themes: - Practice barriers: There is no standard, uniform approach to SDM, and the model varies within each institution. - Patient barriers: Patients may not be emotionally or mentally ready to participate in treatment decision-making. 
- Institutional policy barriers: Institutions may have enacted policies requiring physician supervision, as opposed to collaboration between the physician and the APP. Having undefined roles for APPs could also result in a lack of direction. - Professional barriers: APPs may lack the professional training and experience to fully participate in SDM, and the professional culture they practice in may be non-conducive to participation. - Scope of practice barriers: Regulations by state or federal laws may prohibit APPs from initiating new cancer therapy or practicing independently in cancer SDM. - Insurance coverage as a barrier: When insurance payment for service is low, the APP is required to see more patients; time constraints and increased patient volume can limit SDM participation. - Administration as a barrier: Full participation in SDM requires time, training, and resources that administration may not provide. On the other hand, we found several promoters of SDM among the APPs we interviewed: - Multidisciplinary team approach: APPs have increased participation in SDM when there is a consistent multidisciplinary or team approach in the practice. - APPs having a voice during cancer SDM: When APPs perceive that their input is valued, they feel they are more likely to participate in cancer SDM. - Increased knowledge level: APPs feel they can better participate in the SDM process when they know more about the disease and its treatment. - Personal values: APPs who personally value participation in cancer SDM are more actively involved in the treatment decision-making process. Embracing SDM In our review of the centers’ relationships with SDM, a central theme emerged: To be truly effective, SDM has to be implemented and supported from the top down. It may require a culture change. It is not just the physician or the APP who needs to embrace SDM, but everybody in the health-care team – from administrators to practitioners. Are we there yet? Not quite, but we are on the right track to continue our efforts to implement SDM. From what I’ve observed in practice, health-care organizations are participating in SDM to an extent, but it is not explicitly part of the organizations’ policies, culture, or – perhaps, most importantly – budget. Many organizations are advocating for SDM, including the AHRQ, the Institute of Medicine, and the Department of Health and Human Services. We still have a way to go, though. Hopefully, more education, training, and research will change the naysayers’ opinions. References - Legare F, Ratte S, Gravel K, Graham ID. Barriers and facilitators to implementing shared decision-making in clinical practice: update of a systematic review of health professionals’ perceptions. Patient Educ Couns. 2008;73:526-35. - Kane HL, Halperin MT, Squiers LB, et al. Implementing and evaluating shared decision making in oncology practice. CA Cancer J Clin. 2014;64:377-88. - Affordable Care Act, Section 5306. - Charles C, Gafni A, Whelan T. Shared decision-making in the medical encounter: what does it mean? (or it takes at least two to tango). Soc Sci Med. 1997;44:681-92. - Sandman L, Granger BB, Ekman I, Munthe C. Adherence, shared decision-making and patient autonomy. 2012;15:115-27. - Agency for Healthcare Research and Quality. “The SHARE Approach.” Accessed June 7, 2016, from http://www.ahrq.gov/professionals/education/curriculum-tools/shareddecisionmaking/index.html. - McCarter SP, Tariman JD, Spawn N, et al. 
Barriers and promoters to participation in the era of shared treatment decision-making. West J Nurs Res. 2016 May 18. [Epub ahead of print] The Agency for Healthcare Research and Quality's SHARE Approach is a five-step process for shared decision-making that includes exploring and comparing the benefits, harms, and risks of each option through meaningful dialogue about what matters most to the patient. Step 1 Seek your patient's participation. Step 2 Help your patient explore and compare treatment options. Step 3 Assess your patient's values and preferences. Step 4 Reach a decision with your patient. Step 5 Evaluate your patient's decision. Source: Agency for Healthcare Research and Quality. "The SHARE Approach." Accessed June 7, 2016, from www.ahrq.gov/professionals/education/curriculum-tools/shareddecisionmaking/index.html.
https://www.ashclinicalnews.org/perspectives/advanced-practice-voices/welcome-to-the-era-of-shared-decision-making/
Advocates for shared decision making say that patients have neither the information nor the opportunities necessary to participate fully in clinical decisions. The survey of 3,000 U.S. adults focused on nine common medical decisions and found that patients don't have sufficient information to make the best healthcare choices. For example, only 20% of patients considering breast cancer screening and 49% of patients considering blood pressure medications reported receiving information about drawbacks. "Decisions are pretty much made top-down with physicians making recommendations," says Lyn Paget, director of policy and outreach at the Foundation for Informed Medical Decision Making (FIMDM) in Boston. "They're just telling us to do it without providing a balanced representation of risk. Therefore, our knowledge when we make these decisions is quite poor." ONE STEP TO ACCOUNTABLE CARE With leaders increasingly focused on patient-centered care, shared decision making is gaining traction. Healthcare reform not only encourages patient-centered care through accountable care organizations (ACOs), it also specifically calls for the creation of Shared Decision Making Resource Centers and standards for decision aids. The Department of Health and Human Services will provide grants to fund the centers and will coordinate efforts with the Agency for Healthcare Research and Quality. In addition, shared decision making falls in line with consumer-directed healthcare and value-based purchasing that plan sponsors are exploring to reduce costs. "With shared decision making, you have an educated and informed consumer who is making choices and also has some financial stake in what's going on," says Helen Darling, president of the National Business Group on Health. "But the key is to understand that the individual patient, and not the doctor, is the capstone of the healthcare team." Among the rationale for shared decision making is that patients are more likely to choose lower risk interventions and that, in turn, will lead to lower use of unwarranted services. "If a health plan is interested in making sure patients are not overtreated or undertreated, the solution is to provide patient with full information when they are in the window of time for making medical decisions," says Paget. There are several studies showing the impact shared decision making can indeed have on patient care, outcomes and costs. A 2010 study of 174,120 patients conducted by Health Dialog Services found that patients using telephone-based support, including shared decision making, averaged monthly medical and pharmacy costs per person 3.6% lower than patients without that level of support. The patients with enhanced support also saw a 10.1% reduction in hospital admissions. A 2009 review of 55 randomized controlled trials conducted by the Cochrane Collaboration found patients exposed to decision making aids are more likely to forgo elective invasive surgery in favor of more conservative treatment options, less likely to use menopausal hormones and more likely to forgo prostate-specific antigen (PSA) screening. "The healthcare system is not ready for large-scale implementation but the soil is getting very fertile," says David Wennberg, chief science and products officer with Health Dialog in Boston.
https://www.managedhealthcareexecutive.com/view/shared-decision-making-has-value-providers-need-more-support
At Anjuna Medicine, we believe in the paradigm shift our world is currently experiencing surrounding its healthcare options. Our goal at Anjuna is to make healthcare easy, and that includes making sure our patients are equipped with knowledge, choices of treatments, and a sense of empowerment for their well-being. In other words, we support healthcare that values help vs. treatment. To better understand this idea of a paradigm shift from mere treatment to a true, helping healthcare, we must first understand the basic idea of our current healthcare paradigm. Paradigm- A framework containing the basic assumptions, way of thinking, and methodology that are commonly accepted by members of a discipline or group. Paradigms are, by their nature, persistent and hard to change. But, change they must if we are to live a lifestyle aimed at preventing problems by optimizing health and embracing a state of wellness. Albert Einstein put it this way, “The significant problems we face cannot be solved at the same level of thinking we were at when we created them.” Healthcare, and those who heal, has been around for as long as man has been in existence. Shamans, healers, sorcerers, witch doctors, priests, medicine men, physicians, alchemist, doctors, are all names for healers throughout the ages. All healers treat illness, but what if healers could prevent an illness or alter a lifestyle? Therein lies the problem with paradigms. Like tradition, they die hard. Flawed paradigms persist for centuries until a better one is discovered. As we move forward to a new era we must discover a new paradigm. Treatment vs. Help We need to shift focus from treating chronic disease to helping patients understand how they can help prevent chronic disease. When 7 out 10 United States deaths are the result of chronic disease, it is imperative we look to other methods of healthcare when treating a disease caused by a breakdown inside the body. A disease is always secondary; it has a cause and that cause is what needs to be treated. The fundamental goal of health care is to increase longevity and optimize physical, psychological, and social well-being at the individual, community and society levels. Our current healthcare is not healthcare; it is disease care where a doctor’s visit is 5 minutes long and concludes with a prescription for medication. This is one of the key flaws in Western Medicine. Prevention, instead of prescription, is the key to a paradigm shift, and this entails wellness education. Even the CDC understood this impending shift when, in 1992, it changed its name from the Center for Disease Control to the Center for Disease Control and Prevention. Holistic Healthcare Education in wellness focuses on the physical, emotional, intellectual, occupational, social and spiritual dimensions of life. It is understanding that an individual is a whole, made up of interdependent parts, and when one part is not working at its best, it impacts all of the other parts. This is the holistic approach to healthcare, and we can be an active participant in our healthcare by making changes in lifestyle to promote good health. We can choose from a plethora of integrated health care professionals such as psychologists, acupuncturists, chiropractors, massage therapists, dieticians, or naturopaths, all who come from a diverse background and have their own unique modalities. 
Complementary approaches can also be included such as meditation, Qi Gong, guided imagery, homeopathy, energy therapies, traditional healers such as Ayurvedic medicine and Chinese medicine. Calculating your Health If you want to calculate your own health, take these figures into consideration: - Quality of medical care = 10% of overall health - Heredity factors = 18% of overall health - Environmental factors = 19% of overall health - Everyday lifestyle choices = 53% of overall health The decisions people make about their life and habits are, by far, the largest factor in determining their state of wellness. These seemingly insignificant choices we make on a daily basis, like whether to have a salad or burger; taking the stairs vs. the elevator; or making sure we laugh and smile more than we frown, can truly determine the quality of our life. We must take responsibility for our own health by being conscious of the food we eat, the thoughts we think, the situations in which we place ourselves, our stress level, environment, sleep patterns, and social/spiritual input. If we can shift our perspective and expand our consciousness to a new way of thinking about the body as an entire system and addressing its needs as a whole, then healthcare’s goal becomes one of wellness, not treatment.
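As a rough illustration of the figures above, here is a minimal sketch (in Python) that treats the four percentages as weights over self-rated factors; the factor ratings and the 0-100 scale are illustrative assumptions, not part of the original article.

```python
# Minimal sketch: weighted wellness estimate using the four factors listed above.
# Assumption: each factor is self-rated 0-100; the weights are the percentages
# cited in the article (10%, 18%, 19%, 53%). The example ratings are illustrative.

WEIGHTS = {
    "medical_care": 0.10,
    "heredity": 0.18,
    "environment": 0.19,
    "lifestyle": 0.53,
}

def wellness_score(ratings: dict[str, float]) -> float:
    """Return a 0-100 weighted estimate from per-factor ratings (each 0-100)."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

if __name__ == "__main__":
    example = {"medical_care": 80, "heredity": 60, "environment": 70, "lifestyle": 40}
    print(f"Estimated wellness score: {wellness_score(example):.1f} / 100")
```

Because everyday lifestyle choices carry more than half the weight, a change in that single rating moves the estimate far more than an equivalent change in the quality of medical care, which is the article's central point.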
https://anjunamedicine.com/paradigm-shift-a-new-way-of-looking-at-healthcare/
As healthcare providers search for ways to set their office apart from others and better serve their patients, one concept that often comes up is patient engagement. Simply put, this entails encouraging patients to get more involved in their own healthcare over their lifetime. So, why is patient engagement important, and how can you try it out at your own medical practice? Here’s what you need to know before you start making patient engagement a priority. Understanding Patient Engagement in Healthcare The point of patient engagement is to motivate patients to participate more in their own healthcare, working with their doctor to improve their health over time. This means pushing them to be more informed about their diagnoses, treatment options, and steps they can take to stay healthy as they get older. Patient engagement can be considered a partnership between patients and their doctors, and it’s become a more common approach in recent years than ever before. After all, according to the World Health Organization (WHO), engaged patients are better at making informed healthcare decisions. This can lead to better health outcomes for them while driving down healthcare costs overall. As a result, patient engagement can benefit patients, providers, and the public as a whole. That’s why it’s become an increasingly common goal for providers everywhere, going from an extra perk to a necessity for quality healthcare. So if you’re not already focusing on patient engagement in your office, it’s time to find out how to get started and what benefits to expect from it. What Are the Benefits of Patient Engagement in Healthcare? Patient engagement has advantages for both providers and patients. One primary example of this is that patients who are more involved in their healthcare than average tend to have better health outcomes. In fact, one study concluded that people who aren’t engaged in their healthcare are twice as likely to delay needed medical care than engaged patients, and they’re three times as likely to suffer from unmet medical needs. Other studies have drawn similar conclusions, reporting that when patients are heavily involved in their healthcare decisions, they have fewer visits to the ER or hospital and a lower rate of surgery. This may be because engaged patients, who are educated about health risks that they face, can take preventive measures to avoid developing or worsening certain health conditions. And when they’re aware of symptoms and risk factors to look out for—thanks to their doctor engaging and educating them—they’re likely to get medical treatment sooner than average so they can expect better health outcomes. Of course, these perks of patient engagement in healthcare don’t just benefit patients. They also have a significant effect on providers. To start, patients who have better health outcomes tend to be more satisfied with their care, which can increase patient retention for doctor offices. Being able to keep the same patients for years means your office is more efficient, as you can spend less time marketing and more time providing care to the patients you already have. Plus, the ability to craft a patient-provider relationship that’s better than average makes it likely that your office will get good reviews from current patients, which can attract more patients if you have the availability to take on new ones. 
And when patients are more involved in their healthcare decisions, they’re more likely than average to make it to their appointments, reducing the rate of no-shows for your office. Clearly, encouraging providers to collaborate and engage with patients can lead to better healthcare outcomes, as well as more efficiency and profitability for the office. Patient Engagement Strategies to Use Now that you know the importance of patient engagement, it’s time to determine the best ways to try it out at your own office. Patient engagement strategies vary quite a bit, but the general rule of thumb is to do everything you can to connect your providers to your patients so they can easily collaborate. This encourages patients to stay updated and involved in their healthcare, from wellness checks to long-term treatment plans. One example of a patient engagement strategy is the use of an online health care record that’s easy to share between providers and patients. While providers are familiar with the EHR, studies show that many patients still aren’t. In fact, one study reported that as little as 10% of patients with access to their EHR actually looked at it. Considering the wealth of information that the EHR contains—from medical history and x-ray images to immunization dates and lab results—it’s crucial for doctors to persuade their patients to view their electronic health record data in a patient portal regularly. Doing so will keep them informed on their treatment plans and will also streamline the workflow for providers since patients can update information as needed. For instance, they can add notes about allergies and reactions to medications, saving time and improving efficiency in the provider/patient relationship. Another patient engagement strategy is to encourage them to join wellness programs in which they’re rewarded for taking steps toward good health. For instance, patients might use wearable fitness trackers to record their steps or hours of exercise every week, which providers can track and reward over time. They can even create competitions out of it, pushing patients to do better when it comes to their health and fitness whenever possible. Offering health courses and even simply communicating with patients regularly on social media pages are other ways to improve patient engagement. Get Help with Patient Engagement As you can see, there are a variety of patient engagement strategies to try and several reasons to do so. If you need help getting started, contact TempDev here or at 888.TEMP.DEV today. Our team of EHR and practice management experts can advise you on ways to get patients more involved in their healthcare options. From helping set up patient portals for a more engaged patient community or assistance with EHR reports, we’re happy to provide the guidance you need. Contact us today to schedule a consultation! Interested? Agree with our point of view? Become our client! Please submit your business information and a TempDev representative will follow up with you within 24 hours.
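As a minimal sketch of the wellness-program idea described above, the snippet below aggregates weekly step counts and flags patients who qualify for a reward; the 70,000-step goal, the data layout, and the patient identifiers are illustrative assumptions, not features of any particular tracker, portal, or EHR.

```python
# Minimal sketch: flag patients who hit an illustrative weekly step goal so the
# practice can award wellness-program points. The threshold and data layout are
# assumptions for illustration, not part of any specific tracker or EHR API.

WEEKLY_STEP_GOAL = 70_000  # assumed goal: roughly 10,000 steps per day

def weekly_totals(daily_steps: dict[str, list[int]]) -> dict[str, int]:
    """Sum each patient's daily step counts for the week."""
    return {patient: sum(days) for patient, days in daily_steps.items()}

def reward_eligible(daily_steps: dict[str, list[int]]) -> list[str]:
    """Return patients whose weekly total meets or exceeds the goal."""
    return [p for p, total in weekly_totals(daily_steps).items() if total >= WEEKLY_STEP_GOAL]

if __name__ == "__main__":
    week = {
        "patient_a": [9500, 11000, 8000, 12000, 10500, 9000, 11500],
        "patient_b": [4000, 5000, 6000, 3000, 4500, 5000, 4000],
    }
    print("Reward-eligible this week:", reward_eligible(week))
```

In practice the step data would come from a patient portal or tracker feed rather than hand-entered dictionaries, but the reward logic itself can stay this simple.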
https://www.tempdev.com/blog/2021/10/02/what-is-patient-engagement/
The ability to share patient data across disparate networks is the key to raising the standard of patient care delivery. To understand the concept of integrated patient care, picture a utopian healthcare system that co-locates all of a patient's healthcare providers in one building. The primary care physician (PCP) examines the patient and sends her down the hall to have blood drawn and tested. The patient is informed of the results and then sent to a specialist further down the hall for evaluation. Soon after, the provider team confers to determine a treatment plan and subsequently creates and schedules appointments based on that plan. Throughout the process, the team collaboratively tracks the patient's progress on a commonly shared Electronic Health Record (EHR) system. The integrated, interoperable data provided by the system enables the patient's providers to act quickly and provide consistent care with limited gaps in coverage.

In reality, a typical patient's care providers are geographically distributed and employed by different organizations that use separate EHR systems. Therefore, a fully integrated and comprehensive health record does not exist for a typical patient, since many providers lack simple means to electronically share data. The lack of efficient data sharing leads to a number of suboptimal effects such as delays in diagnosis, gaps in care, increases in co-morbidity rates, increased costs to providers, and other problems that work against the patient's best interests. With no means to share complete patient data, providers simply lack the capability to build an optimal healthcare plan for the patient. Practitioners cannot be expected to create the best patient care plans when they lack comprehensive access to their patients' healthcare information from across the spectrum of care. To remedy this problem, integrated healthcare systems provide clinicians with the ability to collaboratively build and evolve longitudinal records, which are essentially a timeline of patient data for observations, labs, and vitals. Ultimately, the goal of interoperability is to produce a longitudinal record for every patient and make that data accessible to clinicians at the time of care. Furthermore, interoperable data yields both individual and population-level benefits.

Integrated healthcare systems offer a wealth of benefits on both the patient and provider sides of the healthcare quality equation. Some of the key benefits to patients and providers include:

Patient Access on Demand: First and foremost among these benefits is the ability for patients to access their complete healthcare information on demand. Many EHRs give the patient control over who can access their data and under what circumstances. Giving patients the right tools to actually engage and participate in their own care process, whether it is through a mobile application or a member portal website, helps them to improve the overall quality of their own healthcare and the healthcare of others for whom they are caring. In some cases, a person may be caring for a dependent child, an elderly parent, or another form of medical proxy. Point-of-care tools can therefore enable an individual to ensure that they (or those they are caring for) receive the best care possible.

Provider Access and Collaboration: Integrated health records enable providers to update the most recent patient data in real time and enable virtual collaboration across the patient's entire provider team.
Although it's unlikely that a patient's provider team will be co-located, integrated systems help to simulate this experience by channeling a patient's data into the same system, thereby building a more comprehensive record that can be viewed by all team members at the point of care. In this way, clinicians are able to view a more complete health record and gain a more holistic understanding of the patient's health. This also serves to improve capabilities for holistic clinical decision-making and helps to avoid duplicative or unnecessary lab work and testing.

Holistic Treatment Plans: Integrated systems encourage and enable integrated and holistic treatment plans. When virtual collaboration is possible within a patient's circle of care, it provides better and more holistic information with which to make patient care decisions. This includes information that provides a more in-depth, 360-degree view of the patient, including non-medical social determinants that affect the patient's health. For example, issues such as unemployment, poverty, divorce, and lack of healthcare access (to name a few) can have a significant impact on a patient's overall health. The holistic treatment approach goes beyond just treating symptoms and instead enables providers to understand the root of the problem, thus creating better and more manageable solutions for the patient.

Integrating health records benefits both patients and providers: it gives the provider a holistic view of the patient and his or her medical history, and it gives patients access to their own medical records, empowering them to actively participate in their own treatment plans. Doctors often look for simple solutions to try to explain and solve their patients' problems. Appropriately, an EHR is just that – a relatively simple solution with a multitude of benefits for patients and providers. By pulling together integrated and interoperable data, these systems offer more accurate, faster, and more complete insights for individuals and populations – truly the future of healthcare. Learn how Ready Computing can help.

About the Author: Michael LaRocca is the CEO of the New York-based health technology firm Ready Computing, which offers innovative services that improve patient care. Mike's expertise includes a thorough understanding of the standards, protocols, technologies, and architectures required to successfully integrate healthcare data. He is passionate about holistic patient treatment plans and is very active in consortiums and working groups focused on healthcare data integration.
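As a minimal sketch of the longitudinal-record idea discussed in this piece, the snippet below merges observations arriving from two separate provider systems into a single, date-ordered timeline per patient; the field names and sample feeds are illustrative assumptions rather than an actual EHR schema or Ready Computing's data model.

```python
# Minimal sketch: merge observations from separate provider systems into a single
# longitudinal timeline per patient. Field names and sample data are illustrative
# assumptions, not a real EHR schema or a specific interoperability standard.

from datetime import date

def merge_longitudinal_record(*sources: list[dict]) -> dict[str, list[dict]]:
    """Group observations from all source systems by patient, sorted by date."""
    timeline: dict[str, list[dict]] = {}
    for source in sources:
        for obs in source:
            timeline.setdefault(obs["patient_id"], []).append(obs)
    for observations in timeline.values():
        observations.sort(key=lambda obs: obs["date"])
    return timeline

if __name__ == "__main__":
    primary_care = [
        {"patient_id": "p1", "date": date(2021, 3, 2), "kind": "vital", "name": "BP", "value": "128/82"},
    ]
    lab_system = [
        {"patient_id": "p1", "date": date(2021, 2, 10), "kind": "lab", "name": "HbA1c", "value": "6.9%"},
    ]
    for entry in merge_longitudinal_record(primary_care, lab_system)["p1"]:
        print(entry["date"], entry["kind"], entry["name"], entry["value"])
```

Real-world integration is typically done over standards such as HL7 FHIR rather than ad hoc dictionaries, but the shape of the problem is the same: many feeds merged into one patient-ordered timeline.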
https://readycomputing.com/what-is-the-progressive-web-app-pwa-and-how-it-works/
With this wealth of knowledge and experience in various fields of science and medical systems, she recognized that her patients needed more education about their own health and the various medical systems available in the world. She also recognized the beneficial effects of combining or integrating various therapeutic modalities. Most people need the support of many different types of therapy derived from various medical systems, not just one. An in-depth understanding of how the body and brain (mind and emotions) work together has given Dr. Riccio the foundation she needs to recognize that an individual's pattern of healing, which is synonymous with the disease process, is what directs how she helps a person move through their illness or pain. Dr. Riccio actively integrates various medical modalities in her practice. She strongly believes that providing a patient with various treatment options empowers the patient to discuss and make knowledgeable decisions about their own health. She has a private practice in NYC and Saratoga Springs, NY, and specializes in family wellness with an integrative and holistic approach. Dr. Riccio uses both Pulsed Electromagnetic Fields (PEMF) and Classical Homeopathy for clients with injury, pain, or any type of mental, emotional, or physical imbalance. Using homeopathy, she has successfully treated clients with ADD, allergies, asthma, autism, GI disturbances, urinary tract infections, migraines, skin ailments, emotional problems, hormonal imbalances, psychological issues (anxiety, depression, OCD), pain, and acute illnesses (viral and bacterial infections). PEMF has recently become an integrative modality of her practice to help repair musculoskeletal injury and relieve pain. Dr. Riccio welcomes anyone to her practice and encourages parents to participate in the healing and wellness of their children. Treating the entire family truly helps to paint the clearest and most descriptive picture of the health imbalances and healing patterns of an individual. Dr. Riccio also offers nutritional/diet counseling, stress reduction guidance, and lifestyle management, helping clients integrate the essential elements their cells need to detoxify, rebuild, and restore. Dr. Alexandra Riccio received her Ph.D. in Physiology and Neurobiology from the University of Connecticut in 1999 with a specialization in biological rhythms and reproductive biology. Shortly after, she began studying Homeopathic Medicine at the New England School of Homeopathy in Amherst, MA, with Drs. Paul Herscu, ND, and Amy Rothenberg, ND. To support a practice in Homeopathy, she went on to study conventional Western medicine at Stony Brook University, NY, where she received a B.S. in Nursing and then a Master's degree as a Family Nurse Practitioner (FNP-C) with national certification.
http://centerforbioenergeticintegration.com/bio.html
Decalogue for Doctors 1. When the patient enters your office – stand up, shake his hand, introduce yourself and follow the same at the bedside. Show respect for the patient – the patient is often older than you and has other, sometimes richer, life and professional experience. The fact of the disease does not entitle you to look down on them – they just have an unhealthy liver or bone marrow, but did not cease being a human being and, therefore, deserve compassion (which established medicine). Show your sympathy and respect due to every newly met person. Remember that you owe your medical knowledge to your teachers and studies but also patients from whom you learn the symptomatology of diseases and reactions to drugs (most of which you have never used). 2. Treat the patient politely, like a guest at your home. Ask about a job, career achievements and family. Ask what you can do to help and then listen patiently and explain doubts. 3. Do not show impatience, haste or nervousness. Show the patient your faith in therapeutic success, mobilise the patient to actively participate in the fight against disease under your guidance, convince the patient that you will not leave them alone, offer – figuratively and in reality – your support in the time of health crisis. The patient has to follow you in faith and conviction in your skills, competence, kindness and selflessness. 4. Make the patient feel the most important to you at that moment, convince them that you are interested in them and that their illness is a challenge for you and a mystery to solve. 5. Show respect for the patient, lean over them and show your understanding of their problems and concerns. Let your patient be your partner – it will help you to gain their trust and cooperation in the healing process and will help you to convince him about the validity of your medical actions. 6. Remember that at the time of health crisis each person feels fear, uncertainty about the future, and expects the worst. Your patient is in a new situation, afraid of diagnostic procedures and expects your interest, warmth, as well as composure, concentration and confidence in making decisions. Also remember that any decision must be approved by the patient who must be convinced about its validity. 7. When entering a sick room, coming to the patient – leave your home and professional troubles behind as well as your own health problems – they should not affect your behaviour, speed and accuracy of making decisions. Your gloomy face, bad mood, the lack of smile can be read incorrectly by the patient as a lack of hope and adversely affect the well-being and condition of the patient. Be responsible also for the atmosphere in your medical team, kindness and mutual respect are needed by the healthy and the sick. 8. Remember that “the doctor should like their patients and feel responsible for them” (Antoni Kępiński). Treat your patients as you would like your loved ones to be treated in illness. 9. “Do not take away the hope of thy neighbour” (Julian Aleksandrowicz). “Not bringing a man hope is worse than making him blind or killing him” (Marek Hłasko). Bring hope and create a chance to make it real through the improvement of treatment conditions and a holistic approach to the patient in combination with his surrounding, profession, personal habits and interpersonal relations. Take into account the patient’s psychosomatic unity and unique individuality. “Nothing that can affect the health of my patient will be indifferent to me” (Hippocrates).
http://szpiczak.org/decalogue-for-doctors/
Jammu: Lieutenant Governor Manoj Sinha on Wednesday said the healthcare services in the union territory are witnessing a "revolutionary transformation" with his administration committed to ensuring quality and affordable services to the people. With an aim of promoting good health and expanding the outreach of comprehensive primary healthcare services to the people of Jammu and Kashmir, Sinha e-inaugurated 73 AYUSH (Ayurveda, Yoga and Naturopathy, Unani, Siddha and Homoeopathy) Health and Wellness Centres under the Ayushman Bharat scheme across the union territory. "The healthcare services in Jammu and Kashmir witnessed a revolutionary transformation in the past several months. Unprecedented work is being done for advancement and upgradation of medical facilities for ensuring quality and affordable healthcare services in the UT," he said. The LG remarked that the government was making committed efforts to fully integrate AYUSH with the healthcare delivery system, besides promoting good health through preventive, rehabilitative, mitigative and curative interventions of the Indian Systems of Medicine and Homoeopathy (ISM&H). Speaking on the significance of the AYUSH system of treatment and medicine, he said this system of medicine focuses on the overall wellness of a person. "It not only treats a patient but also guides people for adopting a healthy lifestyle along with teaching yoga and other natural, healthy practices," he said. Sinha observed that AYUSH Health and Wellness Centres would be a "game-changer" in Jammu and Kashmir, especially in promoting the AYUSH sector, so that comprehensive primary healthcare based on AYUSH principles and practices is provided to the community, achieving the basic objective of a holistic wellness model by advocating self-care and home remedies within the community. He laid special emphasis on creating awareness and sensitising all stakeholders and health service providers about the strengths of AYUSH systems, for optimum utilisation of their potential and revival of the traditional systems of medicine. Sinha also expressed gratitude towards Prime Minister Narendra Modi for bringing reforms in the healthcare sector by establishing two All India Institutes of Medical Sciences (AIIMS) in Jammu and Kashmir, besides sanctioning several medical colleges and hospitals for the overall revamping of the healthcare sector here. The LG also interacted with the staff of various Health and Wellness Centres and enquired about their functioning, daily and monthly patient intake, and the free-of-cost healthcare services being provided to patients in these centres. Sinha also launched "Arogya Siddhi", a compendium of clinical outcomes of AYUSH interventions across the union territory. A video presentation giving detailed information about the AYUSH Health & Wellness Centres, and the facilities being provided to people in these centres, particularly in far-flung areas, was also displayed. Some of the features of AYUSH Health & Wellness Centres include a medicinal plants garden, a yoga space, training and suggestions for home treatment, the promotion of 'Dincharya' (the art of healthy living) and 'Ritucharya' (a wellness calendar), and a personalized healthcare approach. Pertinently, out of the 73 AYUSH Health & Wellness Centres, nine are in Udhampur; seven in Rajouri; six in Reasi; five in Baramulla; four in Budgam, Bandipora, Doda and Ramban; three in Jammu, Srinagar, Kathua, Kupwara, Poonch, Anantnag and Pulwama; two in Kulgam, Ganderbal, Shopian and Samba; and one in Kishtwar.
Out of the 571 approved AYUSH Health & Wellness Centres, 94 have been completed in the first phase and 100 others will be completed in the second, it was informed.
https://thekashmirimages.com/2021/03/25/healthcare-services-witnessing-revolutionary-transformation-in-jk-lg/
Derrick C. Whiting, DO is board certified in family medicine and osteopathic manipulative medicine by the American Osteopathic Board of Family Physicians. Dr. Whiting earned a Bachelor of Science from Metropolitan State University of Denver in Denver, Colorado. Furthering his education, Dr. Whiting earned his medical degree from the Edward Via College of Osteopathic Medicine in Blacksburg, Virginia. He then completed his Family Medicine residency at Reid Health in Richmond, Indiana. Dr. Whiting joined TPMG Tidewater Family Medicine in 2022. Dr. Whiting was drawn to the medical field by his passion for learning who, what, how, and why we are as human beings. As he came to understand more about humanity, he gained greater respect, empathy, compassion, and love for people, which invoked a desire to help others as a physician. Dr. Whiting strives to build strong relationships with his patients and colleagues, creating bonds established in mutual trust and respect, as he works with patients and providers to promote healthy lifestyles within the Hampton Roads community. As an osteopathic family physician, he believes in providing holistic medical care, considering the whole person’s mental, physical, and spiritual needs when developing a healthcare strategy. Dr. Whiting is committed to empowering his patients to make informed decisions about their healthcare, offering up-to-date and evidence-based treatment options that address the root causes of their health concerns. Dr. Whiting provides comprehensive family medicine care to all ages, addressing wellness visits, urgent care issues, minor surgical procedures, and chronic conditions. He also has a particular medical interest in Osteopathic Manipulative Therapy (OMT), which is a hands-on approach to restoring the body to a more balanced state. OMT can be used to treat a variety of conditions including muscle/joint pain, low back pain, neck pain, headaches/migraines, pelvic pain, sciatica, and more. Dr. Whiting is also interested in treating athletic injuries and preventative medicine by way of health maintenance, fitness, and nutrition. He is a member of multiple professional organizations, including the American Osteopathic Association, the American Medical Association, and the American College of Osteopathic Family Physicians. Though Dr. Whiting is initially from central Virginia, Hampton Roads became home for him and his wife during their service in the US Air Force at Langley Air Force Base. He started a family in this community and built great relationships with amazing people in the area. Dr. Whiting enjoys spending time with his wife and three children, spending many weekends at their sporting and scholastic events and watching them grow into amazing human beings. He loves spending time outdoors, fishing, hunting, golfing, snowboarding, hiking, and participating in other sports and activities. Dr. Whiting warmly welcomes patients to TPMG Tidewater Family Medicine.
https://www.mytpmg.com/physician/derrick-whiting-do/
Nutrition, defined by the World Health Organization as "the intake of food, considered in relation to the body's dietary needs"1, has been the object of numerous studies correlating food consumption with the development or prevention of various chronic and non-communicable diseases. These include the relationship between high meat consumption and the increased risk of colon cancer2,3; increased fiber consumption and the reduced risk of cardiovascular disease4; increased DHA intake (a polyunsaturated fat) and a reduction in the risk of glucose intolerance, a decrease in tissue inflammation, and the promotion of memory improvement5,6; and the consumption of probiotics, which helps balance the intestinal flora and has been known to benefit some autistic children7. According to a review published by Myles in 2014, scientific studies have emphasized the role of diet in the immune system. That author suggests that high intakes of sodium, refined sugar and omega-6 fatty acids, and low consumption of omega-3, associated with Western dietary patterns, can damage the immune system, compromising the health of individuals8. Despite the fact that nutrition is one of the most significant aspects of good health and wellbeing, playing a part in the prevention of many diseases, as well as being one of the main factors in reducing premature death and disability9, most medical school curricula still do not offer in-depth coverage of the subject, devoting only twenty to thirty hours of study to it, on average, throughout the whole six-year course. Furthermore, the course content tends to be presented mainly in the first semester, with little or no practice to support the theory10,11,12. The insufficient representation of the subject in the curricula can result in poor training of medical professionals, who often lack consistent knowledge of this subject, which is so important in today's global epidemiological scenario, with the increased prevalence of chronic diseases, where a healthy diet is essential not only for prevention but also for successful treatment of illnesses13. Some deterrents to introducing the subject of Nutrition in the medical curriculum are the common belief among health professionals that nutritional guidance should be the role of dietitians rather than medical practitioners, and the claim that there is not enough scientific basis for the treatment/prevention of diseases through nutrition. There is also a prevailing welfare and medical paradigm that disregards the disease prevention approach; and finally, with the indecision over which are the most relevant topics to be covered in the medical curricula, limitations on teaching time, and stretched financial resources, teaching on nutrition is often low on the list of priorities14,11. The poor eating habits often practiced by students and doctors themselves are, in many cases, contrary to the guidelines of the WHO and the Brazilian Ministry of Health15,16. A 2010 study among Greek medical students found that 36.9% of males consumed fast food more than three times a week, in spite of 82.4% of them knowing the implications of the long-term practice of poor eating habits17. Other studies have shown similar results within the medical student population.
A research study conducted in Lithuania pointed to the fact that eating habits were irregular among first and third year students, and that only 20% of the study population consumed the World Health Organization’s recommended daily intake for fruits and vegetables 18 . In view of this issue, the current study interviews medical students in their first to sixth years of university, seeking to understand what they consider a healthy diet to consist of, and whether they consider themselves capable of guiding future patients in practicing healthy eating habits. METHODS The data for this qualitative study was collected through semi-structured interviews with medical students at a public university in the state of São Paulo, Brazil, who were taking part as volunteers in another study on meditation (Fapesp – Process 2015/10854-2). We interviewed students in their first to sixth years. The group was randomly selected. Out of every three students who came to collect data for the meditation study, one underwent an interview with the researcher, in which two initial key questions were asked: 1) What do you consider a healthy diet?; and 2) How would you help your patients to change harmful eating habits? The whole process took place at the university research unit facilities. The interviews were recorded with a digital audio recorder and were conducted in a way that allowed the interviewee to bring out their own ideas, without interruptions, ensuring that that no questions or comments from the interviewer could influence the interviewee’s reply. After 28 interviews, the researcher judged that the replies were saturated, and ended the data collection. All the recorded material was then transcribed for analysis. To assess the collected data, Content Analysis of inductive thematic type was used, as proposed by Bardin 19 , with a thematic representational approach. This method was chosen because it aims to understand both the manifested/explicit and the latent/implicit meanings within the replies, allowing the respondents to dictate the themes to be discussed. Content analysis relates to words and their meanings, with the purpose of understanding what is implied in the subject’s statements. Firstly, a reading of the transcribed data was carried out, to identify key excerpts to be analyzed. The quotes were chosen based on their proximity or distance to the concepts discussed in the recent literature. This was followed by three stages: 1) pre-analysis (organization and systematization of the initial ideas, hypothesis formulation and objectives), 2) exploration of the material (coding, classification and categorization) and 3) treatment of the results, inference and interpretation for the final analysis. The final data was revised by three other researchers with experience in qualitative studies 19 . RESULTS AND DISCUSSION A total of 28 students were interviewed, comprising 28.57% first-year students, 14.28% second-year students, 28.57% third-year students, 14.28% fourth-year students, 10.71% fifth-year students and 3.57% sixth-year students. The mean age was 25.53 years of age, ranging from 18 to 28 years old. Of the group, 39,29% were male and 60,71% were female students. Based on the analysis of the interview, the following themes and subthemes emerged: Understanding medical students’ limited knowledge about nutrition Through the interviewees’ replies, some major shortcomings were identified regarding what medical students consider to be a healthy diet. 
In our results, four subthemes were representative of the group’s knowledge about the subject: the need to have a balance of several sources of nutrients; eating at regular intervals; eating more natural foods, avoiding processed products; purchasing low fat/low sugar products. The need for balancing several sources of nutrients In this subtheme, a healthy diet was considered to be one in which there is a balance between macro and micronutrients, with an adequate intake of fruits and vegetables. I think it is keeping a balance between vegetables, fruits and carbohydrates and proteins. I think that’s it basically. (E17, fourth year) Although these concepts are well-known and widespread even among the general public, we noted that on various occasions, the respondents appeared to have a superficial knowledge on the subject: I don’t know. A balanced diet in the sense that you get all the nutrients you ... I have a very vague idea, you know? But ... but I think it’s something like that. (E11, third year) Despite the fact that a balanced nutrient intake is important to maintaining good health, the concept of healthy eating goes beyond the boundaries of chemistry and biology, and relates also to cultural habits, traditional values and the environment. According to the Brazilian National Policy for Food and Nutrition (PNAN), a healthy diet is defined as: An adequate eating pattern for both the biological and sociocultural aspects of individuals and the sustainable use of the environment. That is, it must be in alignment with age requirements as well as specific dietary needs; referenced by traditional food practices and the aspects of gender, race and ethnicity; available both within the physical and financial perspectives; balanced in quantity and quality; based on an adequate and sustainable production system, with as little physical, chemical and biological contaminants as possible [...] considering that there are other purposes to food than merely supplying for biological needs, since it has unique cultural, behavioral and emotional meanings that cannot be ignored.20 Studies have shown that medical students may present limited knowledge in the field of nutrition 21,22 . A research study evaluating students’ knowledge of general and clinical nutrition showed an average accuracy of 60 and 52 percent in the replies, respectively. However, when specific food categories were assessed individually, the margin of correct answers ranged from 17.35% to 77%, a clear indication of superficial and uneven knowledge on the subject 21 . From this perspective, our results highlight the limitations and lack of knowledge about healthy diet among respondents. The limited knowledge on the subject observed in our study may be related to the fact that the majority of the participants of this research (71.42%) were in their first, second or third years of the medical course. However, it is worth highlighting that in the university in our study, students do not take the discipline in Nutrition and Public Health until their third year, with a total of 58 hours, and they take other disciplines in the internship (from the 4 th to 6 th year) with 176 hours (2.37 % of the total training time as described in the Course Plan). The contents related to nutrition are distributed across different disciplines, such as Public Health, Internal Medicine, Pediatrics and Gastroenterology. Feeding at more regular intervals The theme of meal frequency was also present in the students’ replies. 
Eating at frequent intervals, not skipping meals, and avoiding going for long periods without food were considered essential by the group for maintaining a healthy diet, as shown by the following excerpts: It's knowing how to eat at the appropriate intervals, not going through long periods without eating, like, every two hours you should be eating something... (E8, fifth year) Doctors and health care professionals usually advise their patients to eat at shorter intervals. The scientific evidence corroborates this idea, showing a correlation between eating at shorter intervals and better glucose metabolism23, as well as benefits for people with diabetes24 and obesity25. There is, however, an important factor that must be acknowledged when eating more frequently: the availability of healthy snacks, since most snack products on the market can be considered low in nutritional value, being produced primarily with low-quality fat and simple carbohydrates26. In Brazil, frequent consumption of highly processed food increased by 300% in metropolitan areas, with 28% of total energy intake per household coming from ultra-processed products27. Eating more frequently can be beneficial to health, provided food choices are made consciously, especially snacks taken between meals. Barnes (2014) has shown that consumption of vegetables between meals is associated with a lower body mass index (BMI). However, only 1.4% of the participants in this study displayed this eating habit26. Therefore, when advising patients to eat at shorter intervals, it must be ensured that the patient is able to choose what they eat, and to eat sensibly, especially when it comes to snacking. Eating more natural foods, avoiding processed products The interviewees stressed the importance of a diet consisting predominantly of whole, fresh foods, with a low intake of processed and ultra-processed foods. They also mentioned the consumption of organic food for maintaining good health. Trying to avoid eating stuff with lots of preservative, too processed... I think healthy eating has to do with fresh, unprocessed food. (E10, fourth year) It's the most unprocessed, freshest foods as possible, erm […] without too much sugar, without many colorings, without lots of preservatives, as wholesome and with as many organic vegetables as possible. (E16, fifth year) We are experiencing rapid changes in dietary patterns around the globe, as traditional diets become obsolete due to broad access to commercially appealing, low-cost foods with high palatability but little or no nutritional value. Long-term overconsumption of ultra-processed food could damage the body in several ways, including the development of chronic illnesses through various pathways, such as systemic inflammation, causing cardiovascular diseases28, asthma, and allergies29,30. The increased intake of ultra-processed food has also been linked to the development of dysbiosis, increased intestinal permeability, and Crohn's disease in mice31. The discussion regarding the consumption of highly processed foods is essential in the healthcare context, since these products contain substances that could be potentially harmful to the health of individuals. This is a thoroughly discussed topic in the Dietary Guidelines for the Brazilian Population, which recommends that whole or minimally processed foods, with no additives, should be the basis of the Brazilian diet16.
Therefore, a more thorough discussion of the topic is needed in medical degree courses. Purchasing low fat/low sugar products Our results showed a trend among students, who perceived low fat/low sugar products as being healthy. This can be seen in the following statement: [...] low fat goods erm [...] buying everything low fat, you know? Low fat cream cheese, erm… skimmed milk. (E8, fifth year) Low fat/low sugar products are those with a reduced content of at least 25% of a particular nutrient, mainly sugar or fat. Another category of goods with low or zero nutrient content is diet products, such as sweeteners used to replace sugar 32 . In relation to the consumption of diet and low fat/low sugar products, some major points should be taken into account when purchasing these items. For example, the lack of regulations on marketing and labeling may lead the consumer to make mistaken choices. A US study showed that 23% of food labels claiming nutritional benefits had high levels of saturated fat and simple sugars. This may be due to misleading advertising that highlights the presence or absence of a particular nutrient in order to downplay components that might lead consumers to reject the product 33 . With regards to diet products and sweeteners, there is extensive scientific material available. However, the evidence supporting the use or misuse of these substances is often contradictory and inconclusive 34 . In his review, Wiebe et al . 35 questions the effectiveness of dietary sweeteners in glucose control, even for patients with diabetes. There is, however, a growing body of evidence linking long-term consumption of non-calorie sweeteners with the development of obesity 36 , type 2 diabetes mellitus, hypertension, cardiovascular disease, glucose intolerance and metabolic syndrome 37 possibly mediated by the gut microbiota 38 . Faced with the controversial debate of these products’ beneficial or harmful effects on the body, health care professionals should be up-to-date with the scientific evidence and prepared to point out the potential advantages and disadvantages of consuming low fat/low sugar and diet products, as well as their recommended maximum intakes, as well as being able to advertise the potential harmful effects to patients of over consuming these products, which unfortunately was not pointed out by our interviewees. Difficulty in helping patients change harmful eating behaviors From our interviews, we noted that students showed little knowledge or experience of helping patients change harmful habits. A total of four representative subthemes of the group’s knowledge on the subject were considered: knowing what to say but not being able to do it, the difficulty of changing patient’s habits, it being the dietitian’s job, and the importance of patients participating in their treatment process. Knowing what to say but not being able to do it When asked about how they would help their patients change their eating habits, the respondents acknowledged that they did not feel able to apply their advice in their own lives, as evidenced in the following excerpts: Oh, I think I could, even though I don’t eat properly myself, but I think it is to make them believe what we’re doing [...] I mean, telling them to do it, right? [...] Well, I hope I’ll be able to change their life habits even though I can’t change my own, I hope. 
(E4, fourth year) It was also noted in some replies that the ability or inability to practice healthy eating habits themselves could influence their conviction at the time of prescription. And then sometimes we blame the patient: well, you weren’t doing what I told you to… but we don’t do it either... I think that is not quite the right way, we don’t have a lot…, we don’t apply it to our own lives, you know? (e8, fifth year) First of all because I already made these changes to my own life, so maybe I have a true understanding, not just… I’m not… I won’t be just talking the talk because I have experienced this process and because I think… I know there is ... the patient is more likely to believe the doctor and maybe follow his instructions, so perhaps I can help like that. (E9, sixth year) Studies show that the prescription of healthy behaviors by physicians who practice a healthy lifestyle themselves may engender greater trust among patients and encourage them to stick to the advised treatment. This may be because these physicians serve as role models, demonstrating that it is possible to adopt healthy lifestyle habits. A doctor who shares his own experience with a patient, even if the outcome was not positive, may increase the chances of a change in habits. On the other hand, physicians who do not seek to practice healthy, balanced lifestyle habits in their own lives report difficulties in prescribing them 39,40 . It is known that many college students, including medical students, have inadequate eating and lifestyle habits that can potentially damage health, turning them into a vulnerable group for the development of chronic and non-communicable diseases 17,18,41 . However, doctors with healthy lifestyles are more likely to practice preventive medicine with their patients, and to do so more confidently 40 . For this reason, some studies propose the inclusion of health promotion practices throughout the university or college period, on the basis that it could encourage students to adopt healthier lifestyles, motivating them to share this with their patients 42,43 . The difficulty of changing patient habits The students participating in this study reported difficulty in helping their patients change their lifestyle habits, showing a lack of skill, confidence and knowledge at times when this type of approach is needed: We keep saying what they have to do and not really helping them to actually change, which I think is the hardest bit when it comes to changing lifestyle habits and which is a major part in treating diseases, isn’t it? In drugless treatments that is, which I believe interferes much more than just prescribing drugs […] to be honest we don’t have a strategy to help patients, so I can’t really tell you: ok, let’s come up with a strategy to help patients to change their bad habits, because there isn’t one and that makes me really sad. (E8, fifth year) We noted that students felt out of their depth when addressing the need to change behaviors. In our study, we evaluated the ability to change eating behaviors, but it could be applied to various other lifestyle changes, leading to health problems such as drug and tobacco addiction, alcoholism and sedentarism. Here, behavioral theories could be useful tools, if students were given more knowledge about them during graduation. Behavioral tools have proven to be effective in cases of addiction, improving eating habits, and following an exercise regime 44 . 
In addition to these theories, practitioners may teach their patients various strategies, such as self-monitoring, problem solving, goal setting, cognitive restructuring, stress management techniques, developing self-efficacy, and mindfulness techniques, among others 45,46,47. Currently, about 50% of deaths in the US are due to poor lifestyle habits. Studies in primary health care services show that 97% of patients have at least one harmful lifestyle habit and 80% have two. Although these habits are potentially reversible, less than 5% of the US national health budget goes towards preventive medicine 48,49. There is significant evidence confirming that physician counseling can be effective in changing patients’ habits, but for various reasons, such as the excessive demands on healthcare professionals, a lack of time, the current drug prescription paradigm in hospitals and healthcare centers, and a shortage of available personnel, preventive medicine and health promotion actions are not prioritized. Therefore, it is imperative to insert more tools into medical education to help students develop communication skills, enhanced listening, and empathy, so that they can more effectively help their patients change harmful behaviors 48,49.

It being the dietitian’s job

Respondents attributed the role of advising patients on changing eating habits to dietitians. While this could favor a multidisciplinary approach, it could also exempt doctors from having a broader vision of patients’ health care. Oh, I don’t know... because it has a lot to do with the dietitian’s work and… I wouldn’t know how. (E6, first year) Depending on the case ... I would probably recommend a dietitian or perhaps he himself could try and create an eating diary, but I think the dietitian would be best suited, right? I don’t know. (E13, third year)

It is a fact that professional dietitians are important when it comes to optimizing changes in eating habits. However, with growing concern over food-related diseases, all healthcare professionals should be better prepared to address these matters. There are currently fewer than 100,000 registered dietitians in the USA, compared to 841,000 registered doctors. In Australia, the dietitian-to-doctor ratio is 3 to 1000; in the United Kingdom, the ratio is 3 to 100 12,50. The United States Department of Agriculture (USDA) recognizes that effective change in the eating habits of the American population would require a joint effort from all sectors of society, including individuals, families, communities, healthcare professionals, retailers and farmers, given the scale of food-related issues 51. Every healthcare professional should be able to guide patients towards healthy food choices that are sustainable and do not present risks to human or environmental health 16,52.

The importance of patients participating in their treatment process

Finally, students recalled the importance of patients’ participation in their own treatment and healing process, as seen in this excerpt: I believe in a self-healing medicine, where the doctor is just facilitating health promotion, where health comes from within the person and this is reflected in their own body systems [...] one should always make them believe that they have the power to heal themselves, if they are ill they are the ones who can cure the disease, it’s their attitudes that will make the difference, and not the drugs I prescribe. That’s what I believe.
(E12, second year)

Patient Centered Care (PCC) was recommended by the US Institute of Medicine as one of six measures to be adopted to improve the quality of healthcare in the 21st century. Since then, there has been growing interest in this approach among professionals in the field 53. One of the premises of PCC is to treat the patient as a unique individual, taking into account their wishes and points of view, and encouraging them to participate in the decision-making process. In PCC the patient is seen in a holistic way, as a complex human being, and not merely as fragmented organs and systems separate from each other. In addition to the benefits of the individual’s empowerment in their own healing process, the adoption of the PCC method could also result in lower healthcare costs, especially in the long term 54. However, the patient’s wishes should be taken into account even when deciding whether to adopt PCC or a prescriptive approach, as some patients prefer not to take an active part in the decision-making process 53. Despite the promise of a more humane and efficient healthcare service, the transition to this new paradigm would require considerable effort from professionals, who would have to work in unfamiliar scenarios and learn new skills, such as listening and talking to their patients more effectively and helping them to overcome unhealthy habits. It is known that PCC can improve the quality of healthcare services; its concepts are well founded, and its methodology has already been proven effective. The challenge that remains is to transform a system that has, for some time, claimed to advocate change. Patient Centered Care is an approach that requires humility, dedication and openness to change among the healthcare team, which is not always realistic 55.

This study points to the need to include new approaches and tools in medical curricula, so that future doctors can be better equipped to help their patients change their eating habits. This study has some limitations: we interviewed only a small number of students, who were participating in another study, and we used a qualitative methodology to understand the problem. More research, with different methodologies, is suggested in order to ensure a representative sample, including a more balanced distribution of participants across the different years of the course. More qualitative studies are needed on medical students’ perceptions of their ability to counsel patients on changing unhealthy eating habits, given how important this is to an individual’s health. This could lead to further research on potential interventions focused on teaching students effective techniques for working with patients and helping them achieve a better quality of life, resulting in less disease.

FINAL CONSIDERATIONS

This study reveals that the interviewed medical students have apparently limited knowledge of what constitutes a healthy diet, reflected both in their personal lives, through their own poor eating habits, and in their professional lives, in which they feel insecure about how to guide future patients to adopt new habits for a healthier life. However, most students in the investigated group were in the first to third years of medical school. Similar studies of students during their internship years are suggested to verify whether these findings persist.
The need for medical schools to promote students’ physical and mental health, in response to the high demands of the course, is also noted. This could include health promotion practices aimed at the students themselves, encouraging them to adopt healthier lifestyles, especially healthier eating habits, so that they can share their own experience with future patients. This may benefit their professional practice, giving them greater confidence when giving nutrition guidance to their patients, as they will have already experienced and applied the principles in their own lives. Patient-centered care can be a way to address these issues and help patients effectively switch to healthier habits, thereby reducing suffering and increasing quality of life. Empowerment through activities that welcome and support the student and the patient is therefore an essential tool for promoting behavioral change.
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-55022019000100126&lng=en&nrm=iso
Family Medicine encompasses the entire spectrum of health care for patients of all ages through a holistic approach to each individual patient. The specialty believes that its role is to advocate for its patients' health care needs and to utilize the entire health care team to advance the wellness of its patients. The profession places emphasis on wellness, disease prevention and health promotion, education of the patient so they may be active participants in their own care, and respect for the individual regardless of cultural or social differences. The members of the department participate in the development and instruction of all academic courses of the COM, especially those of a clinical nature. The department provides preclinical and clinical training for medical students and family medicine residents at the college and its affiliated clinical sites. Members of the Family Medicine department actively practice osteopathic medicine at university-owned clinics and affiliated hospitals to deliver high quality patient care and to support the clinical education of our students and residents. Members of the Department of Family Medicine are actively involved in the College and interact with all the students of the College of Osteopathic Medicine. They play an active role in student mentoring and development, are prominent in sponsoring educational programs, medical outreach events and philanthropic activities, as well as student organizations and activities. The members of the department provide classroom instruction and small group facilitation, and are active in curriculum development and student evaluation. Its members are active in professional organizations that advance the science and practice of osteopathic medicine and advocate for the profession, the specialty and its patients. The Family Medicine department strives to improve the delivery of health care for its patients of today and tomorrow through research and scholarly participation with other members of the medical and educational community. In addition, many faculty members are active in community, social, religious and government programs. It is the mission of the department to provide outstanding osteopathic medical education for its students and residents, to refine and advance the skills and knowledge of its faculty, and to advance and promote the specialty of Family Medicine through the teaching of the essential competencies and skills needed to provide high quality health care for all our patients. The Department of Family Medicine strives to improve the health of all patients and to assist in the training of highly qualified and compassionate osteopathic family physicians. It achieves this goal through its participation in educational activities as well as mentoring and role modeling for osteopathic medical students and resident physicians. The Department strongly supports participation in faculty development events and provides opportunities for professional advancement, supports research and scholarly activities of its members, and encourages participation in domestic and international medical outreach events and other philanthropic activities.
https://osteopathic.nova.edu/do/departments/familymedicine.html
Life lessons include choosing your own battles, fighting the good fight and never allowing others to fight your battles for you. As an emergency physician, I often care for patients who are beside themselves rather than on top of their medical conditions. Patients’ capacity to make wise decisions generally depends on how well they understand the disease process, in addition to potential risk factors, informed consent, family expectations and their ability to handle stress. Attempting to combat these divergent forces reasonably and wisely requires preparation and the experience gained from a few hard knocks. Oftentimes, making wise healthcare decisions is delegated to the person with the medical degree. While National Healthcare Decisions Day prompts people to fight healthcare battles by declaring personal wishes, surprisingly few people realize which wishes are most important to them. In most battles few people will surrender when their lives are at stake. This quandary leaves patients susceptible to false hope, high expectations and wishful thinking rather than making wise healthcare decisions. When patients allow others to fight their battles, I often hear family members assert, “He wants everything done.” Cognizant of Christ’s words of compassion, I am reminded to forgive them; for they know not what they do. Oddly, compassion tends to be geared towards perpetrators of undying intervention rather than those dying. Most combat fear with passion, enlisting a type of competitive edge through attaining grace under fire. Passion and grace are central elements inherent to both living well and making wise healthcare decisions. These armaments are necessary to engage in battles and determine exit strategies. Passion and grace exemplify personal empowerment and contribute to patient empowerment. This enterprise emboldens patients to make reasonable wishes, allowing them to maintain control over their life and death, dignity and destiny. Reasonable wishes are both heartfelt and wise. To adjudicate and guide the process of making wise healthcare decisions, I wrote a book of heart-centric wishes titled WISHES TO DIE FOR. As wisdom and grace often come with age, many people tend to wait until later in life to make very tough decisions regarding their end-of-life care. I mostly witness elderly people become set in their ways and fearful of death. By having their wishes ingrained as convictions in the prime of life, people are more empowered to make meaningful healthcare decisions at the end of life. WISHES TO DIE FOR expands upon doing less in Advance Care Directives, but more importantly it encourages people to lead purposeful lives, reflected in their making wise healthcare decisions.
https://kevinhaselhorst.com/making-wise-healthcare-decisions/
Like any relationship, a provider-patient relationship requires good communication and trust in order to work towards a common goal. In this case, that goal is to ensure better health outcomes. This is why patient empowerment is increasingly becoming a necessity in healthcare today. In fact, health policies in several countries around the world are implementing strategies to increase patient empowerment in order to get patients more involved in their health care.

What is patient empowerment?

The best way to define patient empowerment is as an inclusive practice that encourages patients to be actively involved in their providers’ health services. The aim of empowering patients is to help them develop self-awareness and self-care, and to promote the understanding that patients can be equal partners in their healthcare decisions. In a way, patient empowerment puts patients at the heart of health services so that they are able to derive the maximum benefit from them.

Why is it important for providers?

We know that healthcare evaluation relies on patient outcomes. But lately, the benchmark for this evaluation has undergone a transformation. It has become equally important for providers to consider non-health outcomes, such as patient satisfaction and engagement, to accurately rate the efficacy of healthcare delivery. What this means is that patients are looking to their providers to give them better accessibility and knowledge to learn and understand the decisions that go into planning their care. If we examine this more closely, it is evident that in some respects the health outcomes and non-health parameters are closely linked. Meaningful engagement with patients helps them understand and participate in their care more proactively. This, in turn, improves treatment compliance and adherence, meaning patients end up getting healthier and, as a result, are more satisfied with their health provider. Many argue that there are pitfalls to empowering patients, as it promotes self-care and reduces dependence on healthcare providers. While that may be true in some respects, it is still debatable whether empowerment is detrimental to better healthcare. On the contrary, it can also be argued that empowering patients will ensure that they engage better with their providers. This means they are likely to trust their providers more and perhaps even use tools like patient portals and other engagement platforms offered by their providers.

The success of engagement lies in empowerment

It isn’t just about health outcomes; patient empowerment also improves engagement between providers and patients, encouraging better communication and limiting the chances of misdiagnosis. The success of a patient engagement tool such as a patient portal, for instance, is determined solely by patients using it. For them to do that, there needs to be transparency in the information conveyed to them, so that they can make informed decisions with their physicians and health providers. Hence, empowering patients with the right information about their health, ways to care for themselves and details of their treatment will go a long way in ensuring continued engagement with health providers. Doing this is likely to bring about meaningful use of your patient engagement portal. The Virtual Practice offers a web patient portal that supports patient engagement services like telemedicine, text consultations and remote patient monitoring.
Here’s what providers can do

If you’re wondering how you can get the ball rolling to change the way you interact with your patients and get them to play an active role in their care, here are a few ideas.
- Encourage patients to update and share their medication details. Being able to share their medication history, including an account of medications being taken, previous prescription details and any contraindications, is helpful for you while also helping patients stay conscientious about their prescriptions. Apps like Dosecast even remind patients about their medications and dosage with timely reminders, enabling them to be more diligent with their health care.
- Encourage patients to update their allergies and previous health concerns. Patients may not always remember to provide you with a complete picture of their health, including information about all their allergies and previous health concerns. Encourage them to update these details to help you make informed decisions about their care and prevent health complications.
- Tell patients about sharing information from their wearable devices. You can’t keep a watch on your patients all day, but they already have a smartwatch or activity tracker doing that every day. Apps like ContinuousCare allow patients to sync data from their wearable devices so that they can share these vital parameters with their physicians. Most home health monitoring devices, like iHealth devices, also allow users to view and track their measured vitals on their mobile, helping them stay on track with their health goals.
- Promote remote care to get patients to take control of their health. One of the other ways to empower patients is to promote self-management of chronic diseases. It has been hypothesized that self-reporting of health parameters, self-care interventions and the use of healthcare services can be improved significantly by empowering patients. Services such as the Virtual Practice’s Remote Monitoring allow health providers to engage and empower their patients by defining home care plans for patients with chronic illnesses. Patients update their health parameters periodically for review by their care providers, thereby managing their health at home and reducing the risk of unhealthy inconsistencies in their condition. In addition to getting providers and patients to actively communicate with one another between hospital visits, remote monitoring can also prevent health complications through continuous care and review, a boon for those with chronic illnesses.
- Allow your patients to consult with you online. Permitting patients to ask questions online provides them with the right information about their health, as opposed to relying on information from unverified online sources. This empowers them to make the right decisions for their care and adhere to medical advice from their doctors. The growth of online consultation services like HealthTap has allowed patients to get answers to their health queries without having to wait for their next appointment, providing quicker and easier access to healthcare services.
- Encourage patients to connect through video consultations. Telemedicine has indeed been a boon in terms of increasing the accessibility of healthcare. For one, it has allowed patients to consult with their health providers irrespective of geographical location.
Setting up a telemedicine service helps patients with chronic illnesses, senior citizens, and patients who may not be able to make it to their appointments. Providing these patients with an alternative means of connecting with their providers improves their health outcomes while ensuring patient satisfaction. Unlike Skype, Video Consultations allow doctors and patients to securely share and edit health records.
- Educate to empower your patients. Patient education can go a long way in helping patients understand important aspects of their health and care. Debunking myths, demystifying complex procedures and treatments, and describing health-related concerns will allow patients to get a better grasp of the options for their health care. Platforms like the Health Network and WebMD seek to create a community of physicians and patients to discuss relevant topics about health and well-being. Instead of relying on unverified external sources, patients on the Health Network can rest assured that the information and health tips available to them on the network are reliable. With specific mobile apps for doctors and patients, the Health Network can be accessed on your smartphone, making it even easier to educate and empower patients. Health providers being involved in educating their patients also has the added advantage of limiting self-diagnosis and self-treatment, which can have devastating consequences for patient well-being.
- Allow patients to schedule appointments online. Online appointment scheduling, offered by the Virtual Practice™ and similar solutions, allows patients to be more proactive in their care. Instead of waiting to connect with the doctor’s office by telephone, patients can book an appointment at a time convenient to them, at any time of the day. Automated reminders also ensure that the chances of no-shows are greatly reduced, helping providers make the most of their day.

The Virtual Practice™ from ContinuousCare offers health providers and healthcare organizations the necessary tools to facilitate patient empowerment through patient engagement services like video consultation, telehealth services and remote care, and practice management services like appointment scheduling and revenue management.
https://www.continuouscare.io/blog/why-empowering-patients-is-important/
Aims: To explore how patients experience the information exchange with healthcare organizations and how this relates to the six areas that constitute good quality care. Method: A qualitative approach inspired by Grounded Theory was adopted. Seven interviews with patients were carried out in patients' homes. Conclusion: Healthcare does not always meet the requirements of the Health and Medical Services Act with regard to good quality care. An effective exchange of information between health professionals and patients was found to be a key issue in creating the conditions for good quality care. Delivering good quality care is a complex endeavor that is highly dependent on patient information and medical knowledge. When decisions about the care of a patient are made, they must, as far as possible, be based on research-derived evidence rather than on clinical skills and experience alone. Evidence based medicine (EBM) is the conscientious and judicious use of current best evidence in conjunction with clinical expertise as well as patient values and preferences to guide healthcare decisions. Following the principles of EBM, healthcare practitioners are required to formulate questions based on patients' current clinical status, medical history, values and preferences, search the literature for answers, evaluate the evidence for its validity and usefulness, and finally apply the information to the patient. Information systems play a crucial role in the practice of evidence based medicine by allowing healthcare practitioners to access clinical evidence and information about patients' health as they formulate their patient-care strategies. However, current information system solutions are far from this perspective for various reasons. One of these reasons is that existing information systems do not support a seamless flow of patient information along the patient process. Due to interoperability issues, healthcare practitioners cannot easily exchange patient information from one information system to another and from one healthcare practitioner to another. Consequently, vital information that is stored in separate information systems and that could present a clear and complete picture of the patient cannot be easily accessed. All too often, units have to operate without knowledge of the problems addressed by healthcare practitioners in other units, the services provided, the medications prescribed, or the preferences expressed in those previous situations. The practice of EBM is further complicated by current information systems that do not support practitioners in their search for and evaluation of current evidence in everyday clinical care. Based on a qualitative approach, this work aims to find solutions for how future healthcare information systems can support the practice of EBM. By combining existing research on process orientation, knowledge management and evidence based medicine with empirical data, a number of recommendations have been formulated. These recommendations aim to support healthcare managers, IT managers and system developers in the development of future healthcare information systems from a process-oriented and knowledge management perspective. By following these recommendations, it is possible to develop information systems that facilitate the practice of evidence based medicine and improve patient engagement.
Practicing evidence-based medicine (EBM) and shared decision-making (SDM) along the patient process is important in today's healthcare environment, as these models of care offer a way to improve quality and safety of care and patient satisfaction, and to reduce costs. EBM is the conscientious and judicious use of current best medical evidence in conjunction with clinical expertise. It also includes taking into account patient values and preferences to guide decisions about the care of individual patients. SDM offers a process that guides how a healthcare professional (e.g., a physician or a nurse) and a patient can jointly participate in a decision after incorporating the body of evidence (the options, benefits and harms) and considering the patient's values and preferences. The degree to which healthcare professionals can practice EBM and SDM is dependent upon the availability of information about the patient (e.g., medical diagnoses, therapies, as well as laboratory and administrative information) and medical evidence (such as medical guidelines). Patient information is a prerequisite for making decisions about the care of individual patients, and it is evidence-based medical knowledge, clinical expertise and patient values and preferences that guide these decisions. Moreover, for patients to be able to communicate values and preferences and participate effectively in their own care, they need a basic understanding of their condition, the treatment options, and the consequences of each. Hence, they need access to the same information streams, in "patient-accessible" form, as their physician(s) and care team throughout their journey (process) in healthcare. However, making the right decisions about the care of individual patients at the right time and place is a challenge for healthcare professionals. Due to interoperability issues, existing information systems do not support a seamless flow of patient information along the patient process. Healthcare professionals are therefore unable to easily access up-to-date information about the patient at the right time and place. The situation is complicated further by the fragmentation of medical evidence across different repositories and its presentation by diverse providers, each with unique ideas about how information should be organized and how search engines should function. Limited or no access to relevant patient information and to the best medical evidence about the benefits and risks of treatment options can result in flawed decisions and, more seriously, the suffering of patients. The situation also affects SDM. If patients are not informed about their health condition, treatment options, benefits and risks, or are not given high quality information, e.g., because healthcare professionals do not have access to the best evidence, patients will be unable to assess what is important to them, or they will make inadequate decisions about key issues. Consequently, it is almost impossible to practice EBM and SDM in everyday clinical care. For EBM and SDM to serve their purpose, healthcare professionals and patients need information systems that provide quick and trouble-free access to all-round information. They also need information systems that can influence the patient/physician relationship and facilitate their pursuit of shared goals in the healthcare process, taking into account both illness and personal experience.
Hence, based on a qualitative approach, this thesis proposes recommendations regarding the redesign of future healthcare information systems in ways that will facilitate, rather than hinder, access to relevant information. One important recommendation identified is that future healthcare information systems must support the core characteristics of EBM and SDM in an integrated manner; using the one without the other is not enough. However, such support requires the adoption of a process view of information system development based on the patient's process. A process-oriented approach with supporting information systems is thus vital for the support of an evidence-based practice where the patient is an important and active collaborator. Moreover, the challenges identified with regard to information system support are not exclusively technical. Organizational culture, and the attitudes of healthcare professionals to patient involvement, are some of the biggest challenges facing healthcare organizations.
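To make the recommendation that EBM and SDM be supported in an integrated manner more concrete, the following is a minimal, purely illustrative sketch of a patient summary record that keeps clinical data, patient values and preferences, and decision-relevant evidence together as they travel along the patient process. All class and field names are hypothetical and are not drawn from the thesis.

```python
# Purely illustrative sketch: a minimal patient summary that keeps the clinical
# facts needed for EBM together with the values and preferences needed for SDM,
# so that both travel along the patient's care process. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EvidenceLink:
    """Pointer to a guideline or study relevant to a pending decision."""
    source: str   # e.g. a guideline repository or journal reference
    summary: str  # plain-language summary, in "patient-accessible" form


@dataclass
class PatientSummary:
    patient_id: str
    diagnoses: List[str] = field(default_factory=list)
    medications: List[str] = field(default_factory=list)
    lab_results: Dict[str, float] = field(default_factory=dict)
    # SDM side: what matters to the patient, in their own words
    values_and_preferences: List[str] = field(default_factory=list)
    # Evidence attached to open decisions, shared with the patient
    decision_evidence: List[EvidenceLink] = field(default_factory=list)


# A record like this would also need to be exchangeable across systems
# (for instance via a standard such as HL7 FHIR) to support the "seamless
# flow" of patient information the recommendations call for.
summary = PatientSummary(
    patient_id="example-001",
    diagnoses=["type 2 diabetes"],
    values_and_preferences=["prefers oral treatment over injections"],
)
print(summary.patient_id, summary.diagnoses, summary.values_and_preferences)
```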
http://his.diva-portal.org/smash/record.jsf?pid=diva2%3A513995
1. Advancing Healthcare Quality and Patient Safety - PatientCareLink (PCL) assists participating healthcare organizations to monitor and report progress on their efforts to continuously improve quality of care and patient safety. - PCL provides proven patient safety strategies and best practices to guide healthcare organizations to improve their processes and outcomes of care. - Our care providers share a broad range of patient care Success Stories that recognize best practices and demonstrate visible improvements in patient outcomes. - PCL affords healthcare organizations the ability to monitor improvement trends over time on specific endeavors. 2. Providing Hospital Staffing that Meets Patient Needs - Massachusetts and HARI member hospitals voluntarily make staffing plans available to patients and the public by posting them on the PatientCareLink (PCL) website. These annual plans describe staffing in each hospital clinical unit (adult critical care, adult step-down, adult medical, adult surgical, adult medical/surgical combined, behavioral health, rehabilitation and emergency department) by shift. - The plans also describe the factors that nurse leaders must consider in determining how to care for each patient, and offer information on care provided on an "hours per patient day" basis. Nurse leaders who are responsible for putting their facility’s staffing plan together gather information and advice from nurses and other members of the patient care team. - Hospitals also submit annual aggregate staffing data, producing a planned versus actual staffing report. These reports also provide the hospitals an opportunity to include explanations for variations between staffing plans versus actual staffing that may occur for a variety of reasons. 3. Making Healthcare Data and Performance Measures Transparent and Publicly Available - PCL-participating hospitals and home care agencies are committed to a common framework to assess and report healthcare quality. Hospitals have been publicly reporting the same nursing-sensitive, evidence-based measures selected from the National Quality Forum (NQF) since 2007. - Performance measures from the Centers for Medicare and Medicaid Services (CMS) Hospital Compare and Home Health Compare websites are also available on PCL. This provides consumers with additional hospital and home care performance data to assist in making informed healthcare decisions. - The PCL initiative supports and encourages partnerships among healthcare organizations and leaders of business, government, consumer groups, and others to promote access to high-quality, safe care. Such efforts include expanding health insurance coverage, sustaining the capacity of the healthcare system to deliver care, and identifying ways to assist providers of care to obtain and deploy new technologies to advance patient safety. 4. Empowering Patients and Families in their Healthcare Choices - PCL places important healthcare information in the hands of consumers. It is a transparent resource for staffing, quality and safety data from hospitals, home care agencies, government agencies and other independent sources. - PCL serves as a resource for information on a myriad of healthcare topics, all designed to encourage and assist individuals to participate in their care and make healthcare decisions that are right for them. 
Tools that can be found on the PCL website include advance care planning and healthcare proxies, information on preventing infections and/or hospital readmissions, and patient fact sheets on opioid risks. - These tools and resources are updated regularly and can be aids to promote health throughout one’s lifespan. 5. Promoting Development/Advancement of the Healthcare Workforce in a Safe, Respectful & Supportive Work Environment - PCL-affiliated organizations create hospital, home health and community-based initiatives and strategic partnerships to tackle the workforce shortages of nurses and other care professionals. Efforts include innovative "career ladders," residency programs, mentoring and preceptor opportunities, joint funding of nurse faculty positions in educational institutions, and employer initiatives to increase workforce diversity. Highlights of the MHA and ONL annual survey results for hospital nurse staffing in Massachusetts are also available through PatientCareLink. - PCL supports the work of the Massachusetts Action Coalition (MAAC), created following the Institute of Medicine’s Future of Nursing report and subsequent Campaign for Action. Goals of MAAC include fostering academic progression in nursing programs by creating accelerated pathways for nurses to achieve baccalaureate (or higher) degrees and promoting the integration of Nurse of the Future Core Competencies (NOFNCC©) in academic and practice settings partnerships. - PCL supports the sharing and adoption of recognized programs, practices and innovations that support performance excellence and a healthy and safe workplace. Examples include workforce health, wellness & recognition programs; safe patient handling guidelines and team communication tools such as TeamSTEPPS (Team Strategies and Tools to Enhance Performance and Patient Safety). - Our sponsors and participants support legislation and guidelines to promote workplace safety efforts and protect all patients and hospital, home health, and other provider employees from workplace violence. Participating caregivers monitor progress of efforts to improve the work environment. Examples of such 'practice environment assessments' include surveying caregivers and measuring improvement over time on specific employee satisfaction endeavors.
https://www.patientcarelink.org/the-five-guiding-principles/
This article is an important reminder that health literacy is crucially important, as it enables individuals and communities to make informed, healthy decisions. Most health systems and institutions fail to deliver reliable and quality services to all, and the resources allocated are generally inadequate. This places more responsibility on people to manage their own health. Lopes points out how health is closely related to lifestyle, and how risky behaviour may jeopardise not just the wellbeing of individuals but entire population groups. Therefore, the call for adult education as a ‘core healthcare tool’ is important, as a broader, more holistic education could contribute substantively to a shift from predominantly reactive, responsive health care to preventive health care in which all people are supported when they take responsibility for health and wellbeing. Framing health education as adult education, Lopes suggests that ‘management of knowledge’ has to happen at the right time. Useful, here, is the allusion to relevance. However, taking his example and speaking from the South, many children are tasked with looking after older people, and geriatric care is very much part of their daily lives. Furthermore, child-to-child and mother-to-child programmes have shown how intergenerational and family literacy is extremely successful because people are learning together, with and from each other. One often-cited example is the child who advises her mother how to deal with the baby sibling suffering from diarrhoea by making up an oral-rehydration drink. Further, I am concerned that the focus of this article is primarily on individuals. Acting for health usually involves more than one person, and similarly, educating and learning for health should target collectives (family, household or communities). An encouraging example of integrated, holistic health education comes from the South in the form of Community Health Clubs (CHCs), pioneered in Zimbabwe, Sierra Leone and elsewhere. CHCs are formed bottom-up by members who share social, economic and physical conditions; they also have common experiences of gangs and drugs, unemployment and violence, sickness and disease. Yet, if these experiences are common, they are not shared. Across and even within households there is often distrust and suspicion as people compete for scarce resources. The first task, therefore, is to build a basis of trust and respect, and weekly meetings attended by young and old, women and men interested in health and wellbeing offer opportunities to meet, to learn, to construct useful knowledge together, and to collectively make decisions about how to address particular issues, identified together. Sessions also offer welcome intellectual stimulation and boost confidence, as all participants realise they are knowing subjects with contributions to make in the process of exploring, analysing, understanding and applying new lessons. In the process, CHC members enter into a social contract and establish a system of mutuality, accountability and transparency through dialogue and common projects. From my experience of working in health education, I would suggest that three factors must come together so that education can, indeed, contribute to saving lives: Firstly, health education must be holistic, considering the conditions of time and place not just of the individual, but of him or her within the context of daily life in the community.
Secondly, any education (adult, child or community) must build on existing knowledge, habits and livelihood strategies, and be radically participatory and bottom-up – that is, ensure strong participation in horizontal relationships through dialogue. Medical hierarchies block ordinary people’s access to health practitioners through top-down attitudes and one-way communications that intimidate patients by treating them as victims and objects. Giving ‘lifestyle’ directives that are grossly out of touch with the social, economic, political, environmental and cultural circumstances of patients is not helpful. For example, the poor nutritional status of many women in Bangladesh is directly tied to patriarchal relations: if a woman is not in a decision-making position to choose what to grow or what to eat, and how much of each nutrient each household member is allocated, how can we assume she needs information about nutrition and then blame her for her underweight baby? Thirdly, and in the final instance, education will always only contribute to saving lives, as it cannot replace changes to the structural material conditions that must be in place so that people can act on their informed decisions. Addressing the root causes of poverty and inequality must be a first priority. Community-based education can contribute substantially towards this by modelling relations and processes.
http://virtualseminar.icae.global/?p=556
Advances in chronic and genetic disease management and technology create new challenges for healthcare professionals and patients in making informed decisions. The growing interest in children's involvement in their own healthcare decisions and a rebalancing of child and adolescent rights and responsibilities compounds these challenges. This article presents an overview of research and standards of practice regarding children's participation in research and healthcare decisions. Further research on children's competence to participate in healthcare decisions is recommended. Reasons for and against children's increased involvement in healthcare decisions are included. There is a preponderance of support for involving children in the process, and a dearth of well-articulated reasons to exclude them.
https://www.ncbi.nlm.nih.gov/pubmed/12795064
There is no one-size-fits-all answer to the question of how to deliver high quality patient centred care in a cost effective way. However, there are a number of key principles that can guide healthcare organisations in their efforts to provide high quality care at a reasonable cost. The first principle is to focus on the needs of the patient. This means that care should be tailored to the individual, rather than being delivered in a one-size-fits-all manner. By understanding the unique needs of each patient, healthcare organisations can ensure that they are providing the most appropriate care, which will ultimately lead to improved outcomes and reduced costs. Another key principle is to focus on prevention. This means that efforts should be made to prevent illness and injury before they occur. By investing in prevention, healthcare organisations can avoid the need for more costly interventions later down the line. Finally, it is important to remember that quality and cost are not always mutually exclusive. It is possible to deliver high quality care in a cost effective way, but this requires a concerted effort from all involved. By working together to focus on the needs of the patient and investing in prevention, healthcare organisations can deliver high quality care in a cost effective way.

Primary care is frequently erratic in the way it handles chronic health conditions. Researchers investigated the cost-effectiveness of the 3D intervention, which was developed to improve the way care is delivered. The economic evaluation assessed the cost per quality-adjusted life year (QALY) gained from the perspective of the National Health Service and personal social services. As the population ages, the number of people living with multiple chronic health conditions (multimorbidity) is increasing in developed countries. Even with a large sample size, estimates of healthcare costs for this type of intervention carry considerable uncertainty. A randomized controlled trial was conducted as part of this economic evaluation to compare different ways of managing multimorbid patients in primary care. The name 3D refers to whole-person care and to the intervention’s focus on dimensions of health, depression, and drugs. Although there was no difference in health-related quality of life between the 3D intervention and usual care at 15 months, the intervention improved patient-centered care. The pragmatic 3D cluster randomized trial compared the effectiveness and cost-effectiveness of a complex intervention with usual care in 33 general practices across Scotland and England. Patients with multimorbidity, defined as those who have three or more chronic conditions, were the study’s primary target group. Any use of resources related to the participant’s health condition was considered relevant. Data on prescribed medications, tests and investigations were retrieved from the records of the patient’s primary care physician. To track the resources used to deliver training programs to doctors, nurses, and receptionists, attendance records were kept for each staff member. The costs of transportation to and from doctor’s appointments and of over-the-counter medications were also included. The cost of prescription drugs was calculated directly from GP records and British National Formulary prices. From the patient perspective, prescription charges were used as an alternative estimate of medication costs.
All patients reported significant costs for over-the-counter medications, as well as therapies and treatments. Costs were adjusted for inflation where necessary so that all costs could be reported in 2015 prices. The data were analyzed using Stata 14.2. The trial’s overall mean costs and standard errors (SEs) for both arms were calculated from the NHS/PSS perspective. Cost and QALY data were combined to calculate an incremental cost-effectiveness ratio (ICER) and a net monetary benefit (NMB). Established NICE thresholds of £20 000 and £30 000 per QALY gained were used to estimate whether the 3D intervention was cost-effective. The cost-consequences analysis was based on available cases, which differed in number depending on which type of healthcare resource or outcome was used. A complete-case analysis was also conducted to assess the impact of the imputation process. The evaluation results are reported in accordance with the Consolidated Health Economic Evaluation Reporting Standards statement. Not all participants in the 3D trial answered the resource-use questions; those who had not provided data had significantly worse health at baseline, with a mean EQ-5D-5L score of 0.453 (95% CI 0.422 to 0.485). The intervention arm was more expensive in terms of total health care costs, with NHS/PSS figures showing an additional cost per patient of £126. A cost-effectiveness acceptability curve was used to indicate how likely the intervention is to be cost-effective across a range of willingness-to-pay values. In a sensitivity analysis, undiscounted costs and outcomes were used to calculate the likelihood that the 3D approach was cost-effective at £20 000 per QALY. According to NHS and PSS estimates, overall costs were similar in the usual care and intervention arms, and no cost category was significantly different between them. The net monetary benefit was modest, but positive at conventional estimates of what society is willing to pay for the benefits obtained. As a result of the study, clinicians who care for patients with multimorbidity have more evidence about the effectiveness of such interventions. The use of care homes, which can be extremely expensive and contribute significantly to the cost of social services, was not included in the economic evaluation. This was the largest randomized trial ever conducted to evaluate an intervention for multimorbidity. The findings, however, are subject to significant uncertainty. Patients’ health-related quality of life is usually unaffected by organizational changes in primary healthcare. The cost of secondary care is skewed by a small number of patients with very high costs. Despite predictions of reduced appointment attendance, this was not achieved, primarily due to patients attending 3D reviews rather than single-condition reviews. The trial did not show that reducing the number of prescriptions issued led to lower costs. It is possible that measuring outcomes beyond health would be preferable. There is no solid evidence that the 3D intervention produces cost savings. The cost differences and outcomes are consistent with chance, and the uncertainty is large. No single factor determines whether or not the intervention is effective. Researchers will need to consider including alternative economic outcome measures alongside the EQ-5D in future work.
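The ICER and NMB referred to above are straightforward to compute once the between-arm differences in costs and QALYs are known. The following is a minimal illustrative sketch: the cost difference loosely echoes the per-patient figure quoted above, while the QALY gain is an assumed value rather than a result reported by the 3D trial.

```python
# Minimal, illustrative sketch of the incremental cost-effectiveness arithmetic
# described above. Figures are for illustration only: the cost difference loosely
# echoes the per-patient figure quoted in the text, and the QALY gain is an
# assumed value, not a trial result.

def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return delta_cost / delta_qaly


def net_monetary_benefit(delta_cost: float, delta_qaly: float, threshold: float) -> float:
    """NMB = (willingness-to-pay threshold * QALY gain) - extra cost.

    A positive NMB at a given threshold suggests the intervention is
    cost-effective at that willingness to pay.
    """
    return threshold * delta_qaly - delta_cost


# Hypothetical between-arm differences: £126 extra cost per patient and
# 0.01 additional QALYs compared with usual care.
delta_cost, delta_qaly = 126.0, 0.01

print(f"ICER: £{icer(delta_cost, delta_qaly):,.0f} per QALY gained")
for wtp in (20_000, 30_000):  # NICE willingness-to-pay thresholds (£ per QALY)
    nmb = net_monetary_benefit(delta_cost, delta_qaly, wtp)
    print(f"NMB at £{wtp:,} per QALY: £{nmb:,.2f}")
```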
How Does Patient-centered Care Reduce Cost?

Patient-centered care is a healthcare delivery model that focuses on providing care that is tailored to the individual patient’s needs and preferences. This approach has been shown to improve patient satisfaction and health outcomes while also reducing healthcare costs. One of the ways patient-centered care reduces costs is by improving communication between patients and their care providers. When patients are actively involved in their own care, they are more likely to adhere to treatment plans and take their medications as prescribed. This can lead to fewer hospitalizations and emergency room visits, which can save both the patient and the healthcare system money. Another way patient-centered care reduces costs is by helping patients manage their chronic conditions more effectively. When patients have a good understanding of their condition and are engaged in their own care, they are more likely to make lifestyle changes that can help improve their health and prevent exacerbations of their condition. This can lead to fewer doctor’s visits, hospitalizations, and emergency room visits, as well as lower overall healthcare costs. According to the Institute of Medicine, patient-centered care is a significant step toward improving U.S. health. Patients who received patient-centered care experienced significant reductions in service use and costs, according to the findings of the study. In determining whether a patient received patient-centered care, it was important to look for factors such as family and social history, nutrition, exercise, and health beliefs. For a long time, the paradigm for high-quality interpersonal care has been based on patient-centered care. It is common for physicians to elicit and seek to understand patient symptoms in order to implement the patient-centered approach. According to a UC Davis study, a single visit to the hospital during a year of care was not associated with a higher rate of illness.

Is Patient-centered Care Cost Effective?

Person-centered care is the preferred method of providing healthcare from the perspective of healthcare providers. The probability that person-centered care is cost-effective was estimated at 93%.

Does Patient Engagement Reduce Healthcare Costs?

The involvement of people in their health and health care is associated with better outcomes and lower healthcare costs, according to a study conducted by George Washington University, the University of Oregon, and Fairview Medical Group.

The High Demand For Physicians And Nurses Abroad

Overcapacity in the US healthcare system has not been a factor in the rapid growth of international medicine. Because of the high demand for physicians and nurses, a growing number of international medical schools are opening around the world. By providing more affordable tuition rates and diverse student populations, these schools are better positioned to compete with traditional US schools. Furthermore, as more international patients seek medical care, hospitals and clinics must become more efficient in order to provide the best possible care.

What Are The 5 Key Elements To Patient-centered Care?

The Picker Institute has identified eight dimensions of patient-centered care, as stated in its research: 1) respect for the patient’s values, preferences, and expressed needs; 2) information and education; 3) access to care; 4) emotional support to alleviate fear and anxiety; 5) family and friends involvement. Care should be individualized to each patient, not optional.
Quality of patient care is affected by a variety of factors. Instead of attempting to treat each patient in a one-size-fits-all fashion, an ideal treatment model emphasizes patient involvement. The most important qualities of a good practice are a friendly environment, a trained staff, and evidence-based medicine. Insurance companies may be unable to cover a patient who has been incorrectly classified as a chronic patient due to paperwork errors. A patient-centered model does not simply end once a patient leaves a facility. Monitoring a patient’s recovery after initial treatment is critical. It is possible for a recovery to be slower than expected or for an unexpected issue to arise. Continued support is critical as the patient recovers from illness. A patient is followed closely by a facility so that everything goes smoothly. Patients deserve excellent medical care, and it should not stop until their quality of life is as high as modern medicine can provide. To show empathy, one must explain the situation clearly and avoid creating confusion.

Providing The Best Possible Care For Individual Patients

A person with cognitive impairments or who is bed-ridden is particularly vulnerable. In these situations, complete reliance on the patient’s abilities may be detrimental, as the patient may be unable to make informed decisions or communicate effectively with caregivers. Furthermore, the caregiver should be capable of performing a wide range of tasks so that the patient’s care can be tailored to their needs. Caregivers should also attend to the patient’s emotional wellbeing and facilitate communication with him or her; active listening is the term for this type of care. It is essential for caregivers to learn this skill because it allows them to better understand the patient’s needs. A patient-centered care system aims to provide the best possible care for each patient. Engaging the patient in the decision-making process and ensuring that their needs are met are two examples of this. Meeting this goal depends on a variety of factors. It is critical that the patient is involved in all aspects of his or her care, that emotional support is provided, and that communication is open and simple.

High Quality Patient-centered Care

Patient-centered care is a type of healthcare that is focused on the individual patient and their needs. This type of care is based on the belief that the patient is the best source of information about their own health and that they should be involved in all decisions regarding their care. This approach to healthcare emphasizes communication and collaboration between the patient and their healthcare team in order to ensure that the patient’s needs are met. Patient-centered care has been shown to improve patient satisfaction, communication, and overall health outcomes. When patients feel that they are involved in their own care and are able to communicate openly with their healthcare team, they are more likely to be satisfied with their care and to follow treatment plans. This type of care can also lead to better health outcomes, as patients are more likely to adhere to their treatment plan and to take an active role in their own health. For a patient to be cared for in a patient-centered manner, health care providers and professionals must actively work to understand what the patient values. That understanding can be gained through a variety of methods, some more widely used than others.
Patients are frequently the only ones who can assess the quality and effectiveness of many aspects of healthcare. Quality of care is jeopardized when the patient is unable to understand or remember what is being given to them. It is generally agreed that the quality of care in the United States and other countries is poor, in part because people believe it to be so. To create a patient-centered health care system, you must prioritize leadership values and human resources policies. The same foundational strategies that create successful organizations are used to create workplaces that are safe, excellent, and financially stable. Physicians believe they are capable of understanding the symptoms of illness, but understanding how to treat a chronic disease or how to deal with a sick body is not the same as knowing how to diagnose or treat an illness. Patients’ experiences are valuable, as is their knowledge of how well the healthcare system functions to meet their needs. The “teach back” technique is not yet common practice, but it is advantageous. Not understanding the importance of taking prescribed medications is one of the most significant reasons people do not take them. In ambulatory care, there is a significant gap in understanding between patients and doctors about why medications have been prescribed. The concept of transferring trust is critical to the wellbeing of every patient, regardless of his or her status. The cost of in vitro fertilization (IVF) treatment in Israel is very low because the government provides health insurance coverage. Medina-Artom and Adashi’s research involved interviews with IVF patients and providers in eight of Israel’s 25 IVF treatment units. According to the researchers, providers tend to underestimate the needs of fertility treatment patients. A study conducted by the Foundation for Informed Medical Decision Making suggests that physicians’ and patients’ priorities in breast cancer treatment are not always concordant. Patients were more likely than clinicians to want to avoid wearing a prosthesis after mastectomy and reconstructive surgery, and the majority of patients were less likely than providers to consider ‘keep your breast’ a top priority in deciding on surgery. Arab patients’ satisfaction with their physicians is lower because of concerns about communication skills, manners, and the time spent with them. According to Hayek et al., improving doctors’ communication skills leads to better patient outcomes. Conducting focus groups or interviews with Arab patients can provide a better understanding of the specific actions doctors must take. Meaningful communication that achieves its intended goal, rather than being ignored or discarded, has saved thousands of dollars. So far, 49 countries have signed on to the International Patient-Centered Care Initiative. We look forward to the day when the clinical paradigm shifts from asking what is the matter with the patient to asking what matters to the patient. The author’s research appears in Health Res 2015;50(6):1850-67 and in a Commonwealth Fund publication (pub. no. 969). Sperling D and Pikkel RB discuss the use of accreditation to promote patients’ rights at hospitals in Isr J Health Policy Res 2020;9(1):47.
Do we really understand the lab test results accessible via the patient portals? The Isr J Health Policy Res. is a scholarly journal that focuses on health policies. The article was published in the Journal of Applied Linguistics9(1):58. Do patients and providers agree about the most important facts and goals for breast reconstruction decisions? The Ann Plast Surg. journal. In 2010, 64(5):563-606. Schoenbaum is the Special Advisor to the President of the Josiah Macy Jr. Foundation, a grantmaking organization in the United States that works to improve health professions education. He is an associate editor of the Israel Journal of Health Policy Research and has previously worked in medical practice, medical management, epidemiology, and health services research. Describe What Delivering High-quality Patient Care Looks Like To You To achieve high-quality health care, we must provide it in an effective, safe, patient-centered, timely, efficient, equitable, and transparent manner, in which professionals are respectful, communicate clearly, and involve patients in decision-making. In my opinion, every patient should be treated as an individual, with his or her own distinct preferences and needs. To make the best possible care of my patients, I strive to get to know them as well as to get to know them well. A patient should be well-cared for in order to feel at ease and happy. Students at Care Hope College will learn about a variety of patient care technician jobs. Quality patient care can have a significant impact on the health of patients. This type of exercise may improve the health of people suffering from serious illnesses like cancer. They must be able to administer immunizations and take blood pressure while also communicating effectively and compassionately with patients as part of their job. A patient-centered experience entails creating a safe, comfortable, and stress-free environment for patients and their families. Patients who have a positive experience are more likely to achieve positive clinical outcomes. Understanding and measuring the patient experience can help improve an organization’s overall performance. We want to provide exceptional patient experiences through iPro Healthcare’s CARES program. Make certain that your hospital is clean, comfortable, and tailored to your patients’ needs. When a patient experiences a positive experience with a medical facility, he or she will become more trusting of the organization. A social worker’s visit can improve the mental health of a patient while also motivating them to continue treatment. In the United States, approximately two out of every three older people rely on caregivers who are not paid. Visiting a doctor with family members can assist you in gaining a better understanding of the factors that influence your loved ones’ health. It is critical for a doctor to maintain a close relationship with his or her patient’s family in order for them to be healthy. It is critical to provide elder patients with trusted family members who can assist them in maintaining good health, eating well, exercising, and taking their medication as prescribed. ChenMed physicians work closely with patients to ensure that they are healthy and well enough to leave the hospital. The Importance Of Quality Healthcare Quality healthcare is required for everyone, regardless of economic status. Governments, health care providers, and the people they serve must collaborate to provide high-quality healthcare. 
We can improve the health system if we all work together to ensure that it is safe, effective, people-centered, timely, equitable, and efficient. How To Promote Patient-centred Care There is no one-size-fits-all answer to this question, as the best way to promote patient-centred care will vary depending on the individual needs of the patients and the care setting. However, some general tips on how to promote patient-centred care include: -Encouraging patients to be actively involved in their own care, including making decisions about their treatment and care plan -Fostering open communication and collaboration between patients and their care providers -Respecting patients’ autonomy and preferences -Tailoring care to each individual patient’s needs and preferences -Ensuring that patients have access to the information and support they need to make informed decisions about their care Who Patient-centred Care Patient-centred care is a healthcare delivery model that puts patients at the centre of their own care. This means that patients are actively involved in making decisions about their treatment and are involved in their own care plans. This model of care has been shown to improve patient satisfaction, improve health outcomes, and reduce healthcare costs. Primary Care Consultations Primary care consultations are an important part of healthcare. They provide a way for patients to get to know their doctor and to discuss their health concerns. They also allow doctors to get to know their patients and to develop a plan of care.
https://www.excel-medical.com/how-to-deliver-high-quality-patient-centred-cost-effective-care-2/
Karya Kares is looking for a Full Time Nurse Practitioner for our community wellness clinic located in Houston, Texas. This position will be expected to work 1-2 weekends a month. The Karya Kares clinic provides 100% free basic healthcare check ups while educating patients about healthy eating habits, healthy recipes, and education on prevention versus treatment. Our goal is to redefine healthcare by offering free basic healthcare and simultaneously educate our community about nutrition to prevent disease. This clinic is registered with the Texas Medical Board, we have 3 physicians on the board who are available at any time for support, and we have medical malpractice insurance to protect our staff. And since we are not prescribing medicine and offering preventative care, we are in accordance with the Texas medical rules. Responsibilities: Cultivate a climate of trust and compassion for the patients. Comply strictly with medical care regulations and safety standards. Records physical findings, and formulates plan and prognosis, based on patient’s condition. Provides written home-going instructions for healthy living. Collaborates with Physician and Nurses to prepare comprehensive wellness patient care plan as necessary. Educates and coaches nursing staff on best nursing practices. Educate patients on healthy behaviors and decisions. Annual wellness visits and health risk assessments, which require a holistic view of health and a focus on thoughtful, accurate, and specific documentation Requirements: Master’s Degree required OR commensurate experience and satisfactory completion of NP licensure. Current RN and NP licensure in state of practice to include prescription authority or the ability to obtain prescriptive authority.
https://network.symplicity.com/houston-tx/nurse-practitioner-karya-kares-clinic/BC8329E91AC945B99BFA15901E4B13EA/job/
THE PSYCHOLOGICAL HISTORY - A MYTHICAL NARRATIVE? Albert M. Drukteinis, M.D., J.D. Much has been written in the legal and psychiatric literature recently regarding the repressed memory syndrome, its validity and reliability, and its use in by-passing the statute of limitations. (see MAR, Vol. 5 No.5, 21/26/1997) However, there has been less inquiry into the reliability of memories that have not been repressed and that form the basis for psychological histories as told by a patient or client. Every day in thousands of psychologist offices, patients relate psychological histories - detailing the background of their life, significant events and traumas. They also relate their reactions to those events, as well as their opinions regarding the most relevant and influential aspects of their histories. The relative contribution of their own behavior and that of others is apportioned, and blame is often assigned. But, how accurate are those accounts and interpretations? How reliable are the memories that we think we have, and that we believe have not been forgotten? Every psychological history is a review of early development, familial dynamics, important relationships, work history and adjustment, personal stressors in crises, marital conflicts, etc., and, at least in the first instance, relies on the account of the narrator. But, every divorce lawyer, for example, knows how varied the accounts of the spouses in a marital contest are about what happened in the household and in the marriage. Similarly, every employment lawyer knows that the account of an employee, or employer for that matter, must be weighed in the context of more complex organizational dynamics. And every psychiatrist, psychologist, and mental health provider should know that there can be another side to the story that is presented by their patients. Indeed, over the course of therapy, alternative interpretations and impressions of events may be brought out in order to help patients grow in a deeper understanding of themselves and their life. Unfortunately, however, the therapeutic alliance that is formed and the empathy which is a natural phenomenon of good therapy, leads to identification with the patient's account. With time, the narrative of the patient is reconstructed or reinforced by the narrative of the psychologist, and takes on a life of its own which may be far removed from historical events. A few specific examples may be necessary here. Patients describing their premorbid home life as happy and content, may be ignoring or denying secrets within the household which all the members mutually held. The secret of having an alcoholic parent is a common example of this. On the other hand, descriptions of traumatic incidents at the hands of an alcoholic parent may overshadow in the patient's memory a myriad of other incidents and experiences throughout development and in relationships which were even more instrumental in personality formation. The traumas from the parent become a natural and convenient focus for blame. The psychoanalytic stereotype that everyone's problems stem from their mother, illustrates this in jest but is practically not too different from what occurs in the construction of psychological narratives. Another example might be when patients describe an oppressive boss who is too demanding, unfairly critical and who creates intolerable stress at the job. 
While possible, the patient may be unwilling to face a pattern of his or her own poor work performance and personality disturbance which was present not only in this but in other employment settings. Often, psychologists will not adequately scrutinize generalized statements such as: "I was abused throughout childhood...my parents neglected me...my wife is controlling and demeaning", or in the alternative, "I've always been a respected worker...I never had problems before the accident...my home life was happy." Although such statements may reflect a general impression which could be substantially accurate, they are too readily accepted without more detailed inquiry and become part of the psychological narrative, which continues to be retold as fact. Inaccurate generalized statements may have nothing to do with conscious fabrication or deliberate misrepresentation, but may only be due to memories which are vulnerable to distortion. There's a popular adage that if you have ten eyewitnesses to an event, you will have ten different accounts. While this may be an exaggeration, most criminal prosecutors who have accumulated dozens of witness statements will attest to its near truth. Those statements invariably have significant contradictions, not only in the factual accounts, but even more in the subjective impressions of motive, malice, temperament and predisposition, and blame. This might appear as if memories have no validity. This is not so; only that they are subject to distortion by time and various influences. Memory can be generally divided into two steps, that of recording and of retrieving This complex process can be outlined, from one perspective, as follows: (a) The recording of events perceived to create memory is never a pure step, but inevitably involves some interpretation of the event being recorded. That interpretation comes from previous memories that have been recorded and now are retrieved simultaneously to help in the interpretation. So the recorded information is automatically altered as it enters. (b) When retrieving information which has been previously recorded, once again, it will not be just the original perceived events, but will include the earlier interpretation. Plus, the current retrieval involves a selection of only portions of recorded information based on current needs, feelings, and context. The newly retrieved information is, therefore, altered even further from that originally perceived. (c) The process of retrieving, with its multiple layers of influence, now itself rerecords the memory, and at some future time this re-recording may appear to represent the original recorded information when in fact it has been subject to significant alteration. This process repeats itself adding new layers of influence each time. Once the tale has been told dozens of times, the final product may be a distant shadow of the original perceived event. Researchers have shown that memories are influenced by decay over time as well as by interference. Biological processes, of course, play a major role and a number of brain conditions are known to affect memory. For example, in senile dementia, the failure of memory retrieval, especially of recent events and experiences, causes patients to "fill in the blanks" or to evade a subject so as not to appear stupid. Traumatic brain injured patients, similarly, may learn to guess or approximate responses out of embarrassment for their deficit. 
A dramatic example of brain injury and memory distortion is a condition known as Korsakoffs psychosis, caused by chronic alcoholism. Here, patients will confabulate detailed and often colorful accounts subject to suggestion, and accept their own confabulation as reality. Psychological processes also distort memory. This can be divided into two broad categories, personal myth and memory constructionl. The personal myth is a fundamentally distorted narrative of a person which has been accepted as reality as a theme that defines the individual to himself or herself. Personal myths are how we want to see ourselves or how we have learned to see ourselves over time. This can be an idealized inflated self view, or a self deprecating one. It can involve heroes and villains and mythical struggles. Often it leads to rich detail in the recollection of events that are consistent with the myth. Where memories are faulty, they are supplemented by association with memories that are retained in order to reinforce the theme. For example, in the highly charged time of adolescence, good or bad actual memories may, by association, lead to correspondingly good or bad false memories of events that are not recalled, but appear to be correct and consistent with the theme. Now, one's adolescent period is represented in memory by "numerous" events and feelings of a particular nature which are forever etched in the same theme. Memory construction takes place with or without a theme and is influenced by numerous factors. Elizabeth Loftus and colleagues have shown how leading questions can significantly alter memory reports, and post-event misinformation can distort the memory of an original event. She has shown through her research the dramatic influence of suggestibility to eyewitness testimony. This is particularly prominent when the source of a memory has been forgotten, i.e. was it seen, heard, or just imagined? Here, post-event misinformation is a powerful generator of erroneous memory of the events. In addition, there are numerous biases that occur through retrospection, when an individual's current attitude and information now available influence how things are recalled. The environment in which the individual retrieves the memory must also be scrutinized. A significant example of this is when hypnosis or a hypnotic setting is used for memory recollection. Although widely claimed to have a role in retrieving forgotten events, hypnosis also has a significant potential for inducing false memories The person's current mood can also exert a significant biasing effect on memory retrieval, so that information that is consistent with thecurrent mood tends to be well remembered, but information that is not consistent is poorly remembered. Finally, though memories are so susceptible to distortion, people often have a great deal of confidence that their false memory is accurate. Even with highly emotional events such as the assassination of President Kennedy or the Challenger disaster, studies have shown that memories are subject to distortion while people express assuredness that they are recalling correctly the event, where they were at the time and their reactions to it. Other studies have shown that both children and adults who are suggested false memories, will later be convinced that the events surrounding the false memories actually occurred. 
It follows, therefore, that when a particular memory is necessary or when an individual is invested in what that memory represents to them, they may stick tenaciously to the truth of their assertion. But, that may not represent an accurate account of events or their history. The importance of memory distortion and the creation of mythical narratives is clear if psychologists are to have a good understanding of their patients. It is even more important if that account is at issue in litigation. The circumstances of memory retrieval must always be scrutinized and generalized statements must always be dissected. This is a painstaking and time consuming process which is often met by resistance on the part of the patient or client. Yet, in the final analysis, both psychological treatment and litigation will be enhanced if it is done. (see Schacter, D. S.: Memory lDistortios1: How Minds, Brains, and Societies Reconstruct the Past. Cambridge, MA, Harvard University Press, 1995.) back to the top... Return to Online Library... ©2005-2012 New England Psychodiagnostics / Dr. Albert M. Drukteinis all rights reserved 1750 Elm Street - Manchester, NH 03104-2943 PHONE Manchester, NH (603) 668-6436 or (603) 668-1495 Woburn, MA (781) 933-7768 Portland,
http://psychlaw.com/LibraryFiles/PsychologicalHistory.html
When: |08 Feb 2020 through 09 Feb 2020| | | CFP Deadline: |15 Oct 2019| | | Where: |London, United Kingdom| | | Website URL: |https://oralhistory.lcir.co.uk/| | | Sponsoring organization: |London Centre for Interdisciplinary Research| | | Categories: |Arts & Humanities > History| Event description: For decades, oral history was considered less than scholarly, leading to its exclusion from several history books; thus valuable first-hand experiences and information that could alter historical truth were neglected and ultimately lost to oblivion. Our conference wishes to challenge the pervading view that oral testimony can lead to false representation of historical events and underline the significant support it can provide to historical research, especially in lieu of written documentation. The journey of a memory through time may change, transform or even become distorted from its primary form. Oral testimony requires a multilevel examination and verification so it can be considered legitimate and useful as historical information, but despite these difficulties, oral tradition can have the power to present an entirely new perspective on an event, future generations can then interpret it freely. The conference will focus on the connections between oral history, collective memory, and individual memory. Whether from a historical, social, or even psychological perspective, we wish to engage scholars in a multidimensional and interdisciplinary approach in order to deeply explore all aspects of this valuable and fascinating area. We are committed to creating a welcoming space for discussion, collaboration, and exploration of oral history’s potential as a tool for local, national and international projects that would enrich and even revise chapters of history. Conference presentations will be related, but not limited, to: Oral history throughout history Oral historian: a public historian? Oral history as a form of social and communal activity; Promoting oral history and engaging public awareness Conducting oral history research; advantages and disadvantages; limitations and ways to overcome them Archiving oral testimony; examples and presentation of valuable archives Methodologies, techniques and methods in conducting and writing oral history Theories of oral history Re-examining and re-writing history through the lens of oral history; Oral history in the global historical arena The absence of historical facts and the role of testimonies Epistemological and ethical dilemmas in oral history Use and abuse of oral history on the Internet Oral history and the law Cases in which oral testimony changed historical truth Oral history as a form of therapy Collective memory and oral tradition The role of individual memory in oral history Oral history as a revealing or misleading tool Manipulation of memory and the role of oral history Oral history and trauma Oral history in war Oral history in the hands of social scientists Oral history as a tool of revealing/reliving a dictatorship/suppressing regime Altering, exaggerating or forgetting memories; the psychology of a survivor Can individual and collective memory be manipulated in order to present a particular side of an incident? Iconic cases of oral history Why is the oral history project needed? 
Goals, steps and priorities Oral history in teaching and teaching oral history The conference will bring together scholars from different fields including history, philosophy, religion, sociology, international relations, literature, art, space studies, peace studies, cultural studies, minority studies, war and/or genocide studies, journalism, immigration studies, psychology and psychiatry, political and social studies, and those working in archives, museums and NGOs. We are particularly interested in inviting those with first-hand experiences, amateur archivists and memory collectors to participate in our newly established session “Share your memories and change history.” Submissions may propose various formats, including: *Individually submitted papers (organised into panels by the committee) * Panels (3-4 individual papers) * Roundtable discussions (led by one of the presenters) * Posters Paper proposals up to 250 words and a brief biographical note should be sent by 15 October 2019 to: [email protected]. Posting date: 07 August 2019 views | 1 subscribers | Be the first to rate this event Placement:
http://www.brownwalker.com/event/22548
AbstractThis dissertation addresses two basic questions: 1. Are people with highly superior autobiographical memory (HSAM) susceptible to memory distortions? 2. What is different about them that might offer clues that would help explain their ability? To answer the first question thoroughly, HSAM individuals and age match controls participated in a number of memory distortion tasks. In the DRM memory distortion word list paradigm we found that HSAM participants had comparably high rates of critical lure endorsement, indicating a vulnerability to false memories brought about by associations. They also participated in a classic misinformation experiment with photographic slides as the original event, and text narratives containing some pieces of misinformation. At the subsequent memory test HSAM individuals indicated more false memories than control participants, a finding that became non-significant when adjusting for individual differences in absorption. After a subsequent source test, HSAM and control participants had comparable numbers of false memories from misinformation. In semi-autobiographical memory distortion tasks, HSAM and control participants had fairly similar rates overall. For example, in a nonexistent news footage task using suggestion (also known as the “crashing memory” paradigm) 10% of HSAM individuals said they had seen the footage (a further 10% indicated maybe/unsure), whilst 18% of controls did (5% maybe; ns). A guided imagery task, with the same nonexistent footage as the target event, produced similarly increased rates of false report in HSAM (17% changed from “no” to “yes”) and control (10% from “no” to “yes”) participants. Memory for their emotions in the week after 9/11 was similarly inconsistent in HSAM and control participants. These results suggest that, relative to controls, HSAM individuals are as susceptible to both misinformation and reappraisals when the target events are semi-autobiographical. The second main research question asked what is different about HSAM individuals that might give us clues as to why they have their ability? To answer this we measured HSAM participants’ and age/gender matched controls’ on a number of behavioral measures to test three main hypotheses: imaginative absorption, emotional arousal, and sleep. HSAM participants were significantly higher than controls on two dispositions—absorption and fantasy proneness. These two dispositions were associated with a measure of HSAM ability within the superior memory participants. The emotional arousal hypothesis yielded only weak support. The sleep hypothesis was not supported in terms of quantity, but sleep quality may be a small factor worthy of further research. Other individual differences are also documented. Speculative pathways describing how absorption and fantasizing could lead to enhanced autobiographical memory are discussed.
https://researchportal.port.ac.uk/en/studentTheses/highly-superior-autobiographical-memory-hsam
(NaturalNews) Scientists with the Department of Psychology and Social Behavior at the University of California, Irvine, ventured into unexplored waters when their research unveiled a link between sleep deprivation and false memories. Led by Steven J. Frenda, the study found that sleeping five hours or less a night was linked to false memory formation, as reported by Medical News Today . While past research has examined the relationship between lack of sleep and memory loss, this experiment, published in the journal Psychological Science , suggested that sleep deprivation could increase one's susceptibility to false memories. "I was surprised to find that there were so few empirical studies connecting sleep deprivation with memory distortion in an eyewitness context," said Frenda. "The studies that do exist look mostly at sleep-deprived people's ability to accurately remember lists of words - not real people, places and events." Experiment asks over 100 college-age volunteers to review crime photos The research team developed a test in which they could investigate how getting no sleep at all affects the formation of false memories by studying a group of 104 college-age participants. The participants were divided into four groups. Two of the groups were shown photos of a crime taking place at a laboratory late at night. One of the groups was allowed to sleep, while the other stayed up all night. The other two groups, one slept while the other stayed awake, reviewed the crime photos the morning after rather than the night before. The participants were then required to read narratives of eyewitness statements that gave different information than what the photos showed. One example included an eyewitness stating that a thief put a stolen wallet in the pocket of his pants, when the photo showed that he placed it in his jacket. The volunteers were then asked to recall what was shown in the photos. Researchers found that the group who viewed the photos, read the narratives and attempted to recall the pictures after staying awake all night were more likely to say that the details in the eyewitness narratives were present in the photos, when in reality they weren't, an indication of false memory formation. Contrastingly, the groups that were allowed to sleep remembered what they saw in the pictures and were far less likely to report false memories. Getting enough sleep offers a better quality of life Sleep is incredibly important and crucial to maintaining overall health. Adequate sleep allows your brain to prepare for the next day by sharpening your learning and problem-solving abilities. A good night's rest also affects your physical health. Sleep helps your body heal and repair your heart and blood vessels, and maintain a healthy balance of hormones, in addition to supporting growth and development in children and teens. Sleep deprivation can quickly lead to a host of health complications including increased stress levels, weight gain and more serious illnesses like schizophrenia. Researchers previously believed that disrupted sleep was a symptom of schizophrenia, but experts are now suggesting that these disturbances could actually trigger the fatal hallucinogenic disease . The findings, published in the journal Neuron , examined the association between poor sleep and schizophrenia by measuring electrical activity in the brain during sleep. 
Led by researchers from the University of Bristol, the study's authors believe that prolonged sleep deprivation increases the occurrence of schizophrenia symptoms such as hallucinations, confusion and memory loss. "Decoupling of brain regions involved in memory formation and decision-making during wakefulness are already implicated in schizophrenia, but decoupling during sleep provides a new mechanistic explanation for the cognitive deficits observed in both the animal model and patients: sleep disturbances might be a cause, not just a consequence of schizophrenia," noted Dr. Matt Jones, the study's lead author. Experts admit that more research is needed; however, the information sheds light on new techniques for neurocognitive therapy in schizophrenia and other related psychiatric diseases. Additional sources:
https://www.naturalnews.com/z046319_sleep_deprivation_false_memories_schizophrenia.html
cultural trauma to analyze generational trajectories in identity formations. About the Author Thorsten Wilhelm is a doctoral student in the English Department at Heidelberg University and an Exchange Scholar at Yale University. He received his MA in History and English Literature and Linguistics from Heidelberg University. His research focuses on the ongoing effects of the Holocaust in Jewish-American fiction. He looks at how these intergenerational trauma narratives form identities and collective trauma. Apart from this, his interest in nineteenth-century literature has him working on bibliographical histories of engravings for the novels of Charles Dickens. Contemporaneity: Historical Presence in Visual Culture http://contemporaneity.pitt.edu Vol 6, No 1 “Boundless” (2017) | ISSN 2153 -5914 (online) | DOI 10.5195/contemp.2017.206 Generational Trauma Like other narratives, trauma narratives follow a process of “coding, weighting, and narrating,” 1 constituting a co(n)temporaneous past that exceeds the bare facts. The past becomes not just contemporaneous, but cotemporaneous in its experiential quality. For that, it is necessary to distinguish between stories that retell an individual trauma and cultural narratives that evolve from the continuous generational engagement with such traumas. Humans tell stories to interlink their identities with others and establish a contemporaneity: a contemporaneity that allows storytellers to approach as closely as possible a reality they may or may not have experienced themselves. 2 In this quest, the past is a vital source of—and a powerful force in—defining the present(s) and the future(s). 3 Contemporaneity is distinct for each generation: every generation, every individual seeks to create a unique version, which is invariably filled with their own pertinent questions for both present and future. One “generation feels keenly what another barely notices,” one “appreciates (or dreads)” what “another takes for granted.” 4 It is this drive for an understanding and need to experience something of an other’s past trauma that fuels the trauma narratives of those who have not lived through the Holocaust. Commemorative Narratives Related to this urge to re-present memory, history, and trauma is the attempt, in narrative, to build a whole self with a stable, independent identity. It is an attempt grounded temporally for both individual and collectivity. To achieve this goal, accounts of the (traumatic) past, present perceptions, and dreams and hopes for the future need to unify into a coherent narrative, making trauma narratives both a disruptive and a unifying factor for identity formation, which is why Ron Eyerman sees the role of memory in identity formation as the narrativized account of one’s past—the “part of the development of the self or personality.” 5 Memory, thus, is the key anchor in the tides of time, allowing us to 1 Jeffrey C. Alexander, “The Social Construction of Moral Universals,” in Remembering the Holocaust. A Debate, ed. Jeffrey C. Alexander (Oxford: Oxford University Press, 2009), 7. 2 Eugene Hollahan, “Saul Bellow: Vision and Revision by Daniel Fuchs; Saul Be llow and History by Judie Newman,” Studies in the Novel 20 (1988): 104. 3 Rachel M. Herweg, Die jüdische Mutter: Das verborgene Matriarchat (Darmstadt: Wissenschaftliche Buchgesellschaft, 1994), 6. 4 Hana Wirth-Nesher and Michael P. Kramer, “Introduction: Jewish American Literatures in the Making,” in The Cambridge Companion to Jewish American Literature, ed. 
Hana Wirth-Nesher and Michael P. Kramer (Cambridge: Cambridge University Press, 2003), 5. 5 Ron Eyerman, “Slavery and the Foundation of African American Identity,” in Cultural Trauma and Collective Identity, ed. Jeffrey C. Alexander et al. (Berkeley: Polity Press 2004), 64–65. Historical Contemporaneity and Contemporaneous Historicity Creation of Meaning and Identity in Postwar Trauma Narratives Thorsten Wilhelm 21 T h or s te n W i lh e lm Contemporaneity: Historical Presence in Visual Culture http://contemporaneity.pitt.edu Vol 6, No 1 “Boundless” (2017) | ISSN 2153 -5914 (online) | DOI 10.5195/contemp.2017.206 construct our individual and collective selves. 6 Remembering the past, then, is—as Proust describes in In Search of Lost Time—not an actualization of the past. In remembering, we do not recreate a Rankean past “as it actually was” but as we perceive it at the present moment: it is re-presented. 7 In remembering, one constructs a revivified narrative of a past in which the memory is formed by a present subjectivity. 8 This kind of remembrance, for those who have not experienced the Holocaust, is fraught with complications. Gary Weissman stresses the desire of children of Holocaust survivors “to become a prisoner, to actually feel the horror – in short, to witness the Holocaust as if one were there.” 9 This longing is not, as Weissman accurately puts it, “limited to those who are the children of Holocaust survivors,” but pertains to “many people who have no direct experience of the Holocaust but are deeply interested in studying, remembering, and memorializing it.” 10 As such, discrepancies among factual reality, survivor memory, and non-survivor imagination become complicated. “Finally, it was hearing stories […] at the very scene of the crime,” which allows those without a Holocaust experience “to come closest to something of the missing horror, however fleetingly.” 11 The closeness to and of the narrative is the means of connecting to a place and a time that otherwise could not be made co(n)temporary. Hearing the stories transforms hitherto historically significant places into those that are personally informed. Conceptually, only the survivors have a direct link to some aspects of the Holocaust. Their individual trauma is specific to their experiences and gives rise to individual stories and memories which, in and of themselves, create access points for the post-Holocaust generations who want to connect to the trauma of their ancestors. To achieve this connection, “the tendency to privilege and identify with those histories that resonate with one’s own sense of identity” is the vital touchstone to make the past trauma a contemporaneous identity. 12 The generations of children and grandchildren of Holocaust survivors implement their forebears’ memories into their own memories and identities. 13 By imagining the narratives they hear, or do not hear, they form memories of a possible story which are realized as actual memories of events they did not, in reality, live through. Nevertheless, the experiential quality is quite similar. Hearing their parents’ stories and 6 Jeffrey K. Olick, “Introduction,” in The Collective Memory Reader, ed. Jeffrey K. Olick, Vered Vinitzky-Seroussi, and Daniel Levy (New York: Oxford University Press, 2011), 37. 7 Christoph Münz, “Alles Was Ich Tun Kann Ist diese Geschichte zu Erzählen: Erinnerung und Gedächtnis im Judentum und Christentum,” in Die Gegenwart des Holocaust: “Erinnerung” als Religionspädagogische Herausforderung, ed. 
Michael Wermke (Münster: Lit, 1997), 73. 8 Alan Megill, “History, Memory, Identity.” In The Collective Memory Reader, ed. Jeffrey K. Olick, Vered Vinitzky-Seroussi, and Daniel Levy (New York: Oxford University Press, 2011), 196. 9 Gary Weissman, Fantasies of Witnessing: Postwar Efforts to Experience the Holocaust (Ithaca/London: Cornell University Press 2004), 4. 10 Weissman, Fantasies, 4. We remember also Lawrence Langer’s now-famous statement about the high standards and dangers of engaging scholarly with the Holocaust. 11 Weissman, Fantasies, 5. 12 Weissman, Fantasies, 7. 13 In the following, I use the term first generation for the generation of Holocaust survivors. The second generation are the children of the survivors. The third generation denotes the grandchildren of survivors. Although these distinctions are somet imes identical with literary or cultural generations, that is not always the case. 22 H i st o ri c a l Co n te m p o r a n ei t y a nd C o nt e mp o ra n eo u s H i s t o r ic i t y Contemporaneity: Historical Presence in Visual Culture http://contemporaneity.pitt.edu Vol 6, No 1 “Boundless” (2017) | ISSN 2153 -5914 (online) | DOI 10.5195/contemp .2017.206 imagining them—that means actualizing these narratives—in the present re-members 14 them within their pertaining contemporary identity. Memory, here, is not a remembrance of isolated incidents, but the construction of a meaningful narrative which allows for an inscription into others’ contemporaneous identities: the formation of a usable past, a livable present, and a wishable future. 15 Processing Trauma In this endeavor, writers become makers of meaning and architects of reality because their subject matter is the whole temporal continuum of which man tries to make sense by binding together past, present, and future. 16 Where does fact end and fiction begin? How factitious is fiction? How fictitious is fact? I inquire into the narrative collectivization of a historical event over three generations of literary engagement to analyze how, to borrow Jeffrey Alexander’s phrase, “a specific and situated historical event, an event marked by ethnic and racial hatred, violence, and war, becomes transformed into a generalized symbol of human suffering and moral evil.” 17 I will follow the theory of cultural trauma developed by Alexander, who sketches the history of the perception of the Shoah from Nazism-related war “atrocities” via a progressive narrative to a tragic narrative that went hand in hand with inaugurating the Holocaust’s status as unique and universal. 18 This development is essential in understanding the emergence of the Holocaust as an event of cultural trauma that is very much alive for later generations whose predecessors, although being closer to it, felt no or little connection to. 19 This phenomenon is in addition and contrast to what Marianne Hirsch called “postmemory,” 20 a form of nonmemory or absent memory of the later generations that spawned a varied and 14 Remembering here takes on two meanings: first, to bring them back to mind and hand them down from generation to generation and, second, to inscribe themselves, through this memorial act, as members of the collectivity of the traumatized. 15 Paul Connerton, How Societies Remember (Cambridge/New York: Cambridge University Press, 1989), 26. 16 See, in this context, the Heraclitian sentiment that “[e]verything changes and nothing remains still. You cannot step twice into the same strea m” (Rapp 2007, 67). 
According to Heraclitian philosophy, history is a constant temporal stream, which is never the same at two different points of time. See also, the Augustinian doctrine that “if the present time were always present and weren’t blending into the past, it wouldn’t be time anymore but eternity” (Me ijering, 1979, 59). Time is seen as a wh ole that cannot be put into human histo-temporal categories, but is instead an auxiliary construction to grasp reality (Harpham 1985, 81; Stein 1984, 7). It follows, that “beliefs in a historical past from which men might learn any simple, substantial truth” are false, as “there were as many ‘truths’ about the past as there were individual perspectives on it” (White 1973, 332). For Nietzsche (2010, 42), “[t]he unrestrained historical sense, pushed to its logical extreme, uproots the future, because it destroys illusions and robs existing things of the only atmosphere in which they can live” (Grass 1983, 128–29; Anchor 1987, 121–22). 17 Alexander, “Social Construction,” 3. 18 Alexander, “Social Construction,” 3–32. 19 Alexander, “Social Construction,” 3. 20 Marianne Hirsch, The Generation of Postmemory: Writing and Visual Culture after the Holocaust (New York: Columbia University Press, 2012). 23 T h or s te n W i lh e lm Contemporaneity: Historical Presence in Visual Culture http://contemporaneity.pitt.edu Vol 6, No 1 “Boundless” (2017) | ISSN 2153 -5914 (online) | DOI 10.5195/contemp.2017.206 complex literary output in the attempt to approach the trauma. 21 This “postmemory” deserves further analysis vis-à-vis Alexander’s theory. Whereas Alexander “explores the social creation of a cultural fact and the effects of this cultural fact on social and moral life,” 22 I explore the literary narratives produced both as cause and effect of this social construction and its metamorphoses over the generations. The “canon” 23 of Jewish American fiction establishes dialogues between individuals and groups and, thus, allows for interrelated and/or opposing representations of identity. As part of the literary tradition, the narratives form horizons on which we see how individuals and collectivities inscribe their identities as created by inclusion and exclusion of aspects of the collective identity. Individually, those who lived through the Holocaust are traumatized by the atrocities they experienced. Collectively, this trauma was transmitted to the following generations who are traumatized by its aftermath. The survivors’ tales foster a tradition of Holocaust narratives that perpetuates the trauma, transmitting it to the following generations—thereby engaging in a certain healing process. But these tales also constitute a connective collective experience 24 through the need of later generations to incorporate the traumatic contemporaneity into their own contemporaneity, that is the identities they narrate both of and for themselves. First-generation testimonies, especially Elie Wiesel’s Night, use narrative stylistic features to draw the reader into, and stir engagement with, the trauma stories by simultaneously propagating the incomprehensibility and incommunicability of the Holocaust. These individual renditions establish a collective narrative of the trauma that traumatizes even later generations. 
This can be seen, Alexander argues, “when members of a collectivity feel they have been subjected to a horrendous event that leaves indelible marks upon their group consciousness, making their memories forever and changing their future identity in fundamental and irrevocable ways.” 25 He asserts that a cultural trauma is not a priori in the world, that is events, on the cultural level, are not inherently traumatic. 26 Rather, what creates a cultural trauma are the individual and collective attributions to an event. 27 21 Ellen S. Fine, “Intergenerational Memories: Hidden Children and Second Generation,” in Remembering for the Future: The Holocaust in an Age of Genocide, vol. 3, Memory, ed. John K. Roth and Elizabeth Maxwell (Basingstoke: MacMillan, 2010), 187. 22 Alexander, “Social Construction,” 3. 23 I will not attempt to define a comprehensive and exhaustive canon of Jewish American fiction—let alone literature. This effort has been undertaken and is currently being worked on by eminent scholars such as Ruth R. Wisse and Justin Cammy, to name but two. Following Jan Assmann, I will merely use my individual “canon” as a principle of the interrelation of collective and individual identity formation through and in works of fiction. I hold that the works I analyze describe, form, change, and discard the normative consciousness of a whole population and the individuals who relate their identities to it (Cammy 2008; Wisse 2000). 24 See Joseph Soloveitchik’s “covenant of fate” in denoting such a formation of a collective identity along the narratives about the Holocaust trauma (Kaplan 2005, 5). 25 Jeffrey C. Alexander, “Toward a Theory of Cultural Trauma,” in Cultural Trauma and Collective Identity. ed. Jeffrey C. Alexander et al. (Berkeley: University of California Press, 2004), 1. 26 Jeffrey C. Alexander, Trauma. A Social Theory (Cambridge, MA: Polity. 2012), 13. 27 Alexander, “Toward a Theory,” 8. 24 H i st o ri c a l Co n te m p o r a n ei t y a nd C o nt e mp o ra n eo u s H i s t o r ic i t y Contemporaneity: Historical Presence in Visual Culture http://contemporaneity.pitt.edu Vol 6, No 1 “Boundless” (2017) | ISSN 2153 -5914 (online) | DOI 10.5195/contemp .2017.206 But trauma leaves ruptures “in the web of meaning,” thereby constricting the construction of “meaningful histories.” 28 That is why fictional traumas are sometimes no less traumatizing than factual ones. Trauma “becomes a thing by virtue of the context in which it is implanted.” 29 As such, trauma generates traumatic narratives that must be remembered in order to be worked through and spoken out. 30 Individual and collective traumatic memories continue through narratives, which, again, change over time as new memories are found or formed in the construction of coherent narratives by victims and following generations. Accordingly, the original trauma, while being worked through, is re-enacted by subsequent generations. Generationally, our present and past involvements are causally connected. On one hand, our perception of the past depends on present influences. On the other, the past influences our perception of the present. 31 Trauma narratives, hence, play an important role in studying post-Holocaust Jewish-American fiction, as highlighted by Yael Zerubavel, who contends that [e]ach act of commemoration produces a commemorative narrative, a story about a particular past that accounts for this ritualized remembrance and provides a moral message for the group members. […] collective memory clearly draws on historical sources. 
Yet it does so selectively and creatively. […] the commemorative narrative […] undergoes the process of narrativization. 32 Post-1945 Jewish-American fiction, in many respects, constitutes such “commemorative narratives.” In Alexander’s theory, trauma simultaneously constitutes a disruptive and a constructive force in identity formation. 33 Remembering becomes an act of putting memories into meaningful narrative sequences that invest one’s present with meaning, irrespective of whether the remembered entities are based on fact. There cannot, of course, be a single adequate and accurate memory; rather, there is a multiplicity of memories. 34 This highlights the basic principle: while “survivors persist in writing memoirs to bear witness to their encounter with death. […] Children of survivors are trying to come to terms with the wounds they have inherited.” 35 In this process, imaginative narratives are paramount because they create meaningful stories that allow for coherence and grounding by filling gaps either fictionally or by incorporating other people’s memories. Necessarily, this process, to a lesser extent, pertains also to survivor testimonies, since no 28 Bernhard Giesen, “The Trauma of Perpetrators: The Holocaust as the Traumatic Reference of German National Identity,” in Cultural Trauma and Collective Identity. ed. Jeffrey C. Alexander et al. (Berkley: University of California Press, 2004), 113. 29 Neil J. Smelser, “Psychological Trauma and Cultural Trauma,” in Cultural Trauma and Collective Identity, ed. Jeffrey C. Alexander et al. (Berkeley: University of California Press, 2004), 34. 30 Giesen, “Trauma of Perpetrators,” 113. 31 Connerton, How Societies, 2. 32 Yael Zerubavel, Recovered Roots: Collective Memory and the Making of Israeli National Tradition (Chicago: The University of Chicago Press, 1995), 237. 33 Cf. the Holocaust trauma and second-generation identification, not with the following generations’ own present reality but with their parents’ memories. It shows the need to incorporate the traumatic memories, which constitute a connective collective experience to interlink one’s identity with others. 34 Fine, “Intergenerational Memories,” 78. 35 Fine, “Intergenerational Memories,” 78. 25 T h or s te n W i lh e lm Contemporaneity: Historical Presence in Visual Culture http://contemporaneity.pitt.edu Vol 6, No 1 “Boundless” (2017) | ISSN 2153 -5914 (online) | DOI 10.5195/contemp.2017.206 survivor can represent the Holocaust or even one concentration camp experience in its entirety, 36 which is why works engaging with the tradition of Holocaust testimonies can be seen as “works born of belated trauma.” 37 There is not one Holocaust trauma, but a range of traumata born from the specifity of one’s experiences at a certain time and location within the event. Each trauma is specific to the context in which it occurs. Taken together as an evolved and evolving collective narrative, the respective experiences and traumata form a story of a collective trauma. At the same time, the Holocaust trauma constitutes several collective identities that must be differentiated: the identity of survivors, of the second and third generations, and of all those living in the aftermath of the Holocaust. 38 Each identity is marked by specific scripts, which the individual has and which she follows to ground herself. 
To analyze the use, function, and implications of these scripts, one must analyze the intergenerational “contingent, sociologically freighted nature of the trauma process[es].” 39 Such an intergenerational approach to the literary productions is significant because the effects of the cultural trauma have not yet been studied in depth. An analysis based on Alexander’s theory shows that a good many studies of Jewish-American fiction fail to see that there is an interpretive grid through which all “facts” about [the Holocaust] trauma are mediated … [It] has a supraindividual, cultural status; it is symbolically structured and sociologically determined. No trauma interprets itself. Before trauma can be experienced at the collective (not individual) level, there are essential questions that must be answered, and answers to these questions [and the questions themselves] change over time. 40 These changes are overtly and covertly reflected in the literary outputs of each generation. When we are interpreting the world and our perceptions of it, we do so by constructing texts that reflect our rootedness in our respective contemporaneities—social, historical, communal, regional, geographical, religious, temporal, and so forth. We should not forget that, although the Holocaust traumatized each survivor, spawning a plethora of narrative and creative engagements, Jewish-American fiction is more than a mere literature of the Holocaust. Third-generation works such as All Other Nights, in which Dara Horn explores another kind of Jewish trauma experience during the American Civil War of 1861–1865, and What We Talk About When We Talk About Anne Frank, in which Nathan Englander explores a wider range of questions about Jewish identity in a post- 36 Weissman, Fantasies, 1–5. 37 This is a form of individual, i.e., “psychological,” trauma and not one of cultural trauma as Alexander conceptualizes it. These two concepts, while compared, should not be intermixed or blurred. 38 Marita Grimwood, in her study Holocaust Literature of the Second Generation (New York: Palgrave Macmillan, 2007) underlines the inherent transnationality of second-generation literature—a fact easily expandable to both first- and third-generation writing as well. According to Grimwood, “[s]econd- generation writing is by its nature an international field. While not wishing to elide the cultural specificities of immigrant experiences and memorial traditions, it is clear that in this case drawing rigid boundaries between national literatures is both arbitrary and limiting. Owing to the nature of postwar migration, even pinning a given writer down to a single country can be difficult” (2). In the following, however, we contend that analyzing works written from within a specific cultural context–the United States–nevertheless accounts for certain features pertaining and originating specifically from this context. 39 Alexander, “Social Construction,” 7. 40 Alexander, “Social Construction,” 7. 26 H i st o ri c a l Co n te m p o r a n ei t y a nd C o nt e mp o ra n eo u s H i s t o r ic i t y Contemporaneity: Historical Presence in Visual Culture http://contemporaneity.pitt.edu Vol 6, No 1 “Boundless” (2017) | ISSN 2153 -5914 (online) | DOI 10.5195/contemp .2017.206 Holocaust world, producing particularly illuminating narratives in looking beyond the trauma by building upon it. First-Generation Testimony The 1970s saw the beginning of a tradition of sharing testimony of one’s experiences during the Holocaust with a more widely interested audience. 
Before that, stories of the traumatic experiences were not welcomed by the larger public. Perceptions of the survivor also changed. 41 Ellen S. Fine’s statement that “the afterlife of the Holocaust […] has expanded in the 1990s” reflects this growth of testimonial material. 42 Direct testimony fostered the emergence of survivors as a new concept in the trauma discourse because they provided a “tactile link with the tragic event. As their social and personal role was defined, they began to write books, give speeches to local and national communities, and record their memories of camp experiences on tape and video.” 43 The trajectory of silence and testimony can be seen in the example of Elie Wiesel, who at first imposed a pledge of silence upon himself: I knew that the role of the survivor was to testify. Only I did not know how. I lacked experience, I lacked a framework. I mistrusted the tools, the procedures. Should one say it all or hold it all back? Should one shout or whisper? […] And then, how can one be sure that the words, once uttered, will not betray, distort the message they bear? […] So heavy was my anguish that I made a vow: not to speak, not to touch upon the essential for at least ten years. Long enough to see clearly. Long enough to listen to the voices crying inside my own. Long enough to regain possession of my memory. Long enough to unite the languages of man with the silence of the dead. 44 41 The survivors’ testimonies emblematize the problems that arise on expressing and working through the traumatic events. Lived-through atrocities or witnessed atrocities together form the trauma that the survivors need to work through. Temporal, geographical, and, most importantly, linguistic distance is needed to be able to construct a meaningful narrative. Although clearly based on fact and actual events, each testimony is also a novel or, to use Claire Colebrook’s (1997, 16) expression, a “discursive event.” Hence, it incorporates historical facts into “a collection of new relations,” filling up gaps and establishing links to foster coherence. The consecutive literary traditions, then, are clearly and unavoidably works of fiction. The factual element in second- and third-generation literary productions is based on “historical evidence” and the accounts of “those who were there.” The fictional element is that they imaginatively connect to the factual events trying to grasp, or even experience, the trauma they have inherited as parts of their past. In connecting to their parents’ trauma, the second generation faces elemental difficulties. For one, they do not have actual memories of the trauma they have inherited from their parents’ stories and physical and psychological suffering. Also, their parents’ actual memories— emotional and bodily reactions—cannot be transformed into their own memories one to one. Nevertheless, their imaginative effort—as realized in their works—allows them to approximate their parents’ memories in their “affective and psychic effect” that Marianne H irsch (2012, 31) terms “postmemory.” The concept of postmemory is based on the premise that, for the children, who actually had to cope, if not with the trauma itself, then with its aftereffects, the Holocaust is “a memory not of theoretical abstraction or ideological strategies, but of proximity charged with feeling” (Hoffman 2004, 180). 42 Fine, “Intergenerational Memories,” 51. 43 Alexander, “Social Construction,” 66. 44 Elie Wiesel, A Jew Today (New York: Vintage, 1978), 18. 
Wiesel's Night and the implications of his "vow" deserve reevaluation in light of intergenerational trauma narratives, especially since Wiesel claims that his work is born from silence. "I entered literature through silence," Wiesel states in an interview,

I seek the role of witness, and I am duty bound to justify each moment of my life as a survivor. […] Words can never express the inexpressible; language is finally inadequate, but we do know of the beauty of literature. We must give truth a name, force man to look. The fear that man will forget, that I will forget, that is my obsession. Literature is the presence of the absence. Since I live, I must be faithful to the memory. […] I must be the emissary of the dead, even though the role is painful. If we study to forget, live to die, then why? The question is the answer; what I do, what I write, is the answer. I write to understand as much as to be understood. Literature is an act of conscience. It is up to us to rebuild with memories, with ruins, with efforts, and with moments of grace. 45

Caruth thought-provokingly approaches the theoretical stance of the unknowability, incomprehensibility, inexplicability, and inexpressibility of the Holocaust and its ensuing traumata with an interest "in the complex relation between knowing and not knowing," because it is at this intersection "that the language of literature and the psychoanalytic theory of traumatic experience precisely meet." 46 The nexus where knowing, not-knowing, and the narratives that emerge to imbue the events with meaning through language converge is vital in this context. Contrary to Caruth's theorem of the incomprehensibility of the initial trauma experience, however, Richard McNally contends that the trauma is available to the victim initially and fully. In the cases where there is silence, that silence is either voluntarily or involuntarily imposed on the victims as a result of personal needs or the public audiences' reaction to the trauma narratives. 47 This is not a form of repression, since the memory of trauma remains virulent in the victim, but a form of not talking about it. In the narrative quest for understanding and meaning creation (as highlighted in Alexander's theory of a coded, weighted, and narrated collective narrative in the face of destroyed meaning), the trauma narratives make the initial, individual experiences and memories of the trauma available to those who have not experienced it or who have experienced it differently. There, in the open space of public and literary discourse, is the "truth, in its delayed appearance and its belated address" merged with "what is known, but also [with] what remains unknown in our very actions and our language." 48 Moreover, the collective narrative starts to haunt those who have not experienced the horrors themselves but are drawn into the victims' traumatic narratives. Wiesel published the Yiddish …Un die velt hot geshvign with significant differences from the later Night. All English versions of Night are translations of the French version. Most importantly, Wiesel's work allows for an analysis of fictional versus historical truth and of how the narrativization of memory shapes the perception of the Holocaust.
Apart from that, Night lies in the middle of a significant transitional period in the social construction of the Holocaust. Not only is Wiesel's account located at a point in time when communal perception of the Holocaust began to change from a progressive to a tragic narrative, but he also helped bring about that transition. Night, as a paradigmatic narrative, has had a strong impact on subsequent discourses about Holocaust trauma and how it has been narrativized. Wiesel artistically and poetically recounts the unspeakable events and uses historical and fictional reappraisal to bridge the void of silence and memory loss through testimony and narrative. Wiesel thereby powerfully shows that inexplicability and unspeakability are no hindrance to speaking of the events and finding linguistic expressions to approach the subject. As Naomi Seidman emphasizes, the stylization of the unsayable in Wiesel's work—and in trauma discourses in general—refers not to what cannot be expressed at all but to "what cannot be spoken in French." 49 Wiesel's use of narratological techniques to tell, relate to, and mediate his experiences, as well as the way those techniques establish or reestablish a certain degree of identity after the trauma, helps us understand the literary productions of the following generations. These productions are, in themselves, new building blocks for new identities.

Seidman, in her comparative analysis of Night and …Un die Velt, holds the different endings to be two "entirely different account[s] of the experience of the survivor." 50 She sees the ending of Night as a projection of Eliezer, the surviving protagonist, into the post-Holocaust world, in which the witness becomes a torn mediator between the need to speak and the silence and death born of an unspeakable event. Seidman conceptualizes this as a change in the witness's identity, arguing that …Un die Velt portrays the enraged "Yiddish survivor [who] shatters that image as soon as he sees it, destroying the deathly existence the Nazis willed on him." This depiction would fit into the context of what Alexander calls the progressive narrative, an enraged but teleologically oriented outlook that underlines the need to shatter the trauma for life to go on. In Night, on the other hand, the narrative projects "liberated Eliezer's death-haunted face into the postwar years when Wiesel would become a familiar figure," and when the outlook had changed to a tragic narrative, which constituted the survivor as an eternally maimed figure who needs to be held in awe because of the message delivered by the witness of trauma. 51 An interesting counterpoint is Louis Begley's 1991 novel Wartime Lies.

45 Heidi Anne Walker and Elie Wiesel, "How and Why I Write: An Interview with Elie Wiesel," The Journal of Education 189, no. 3, Reflection and Renewal (2008/2009): 49–50.
46 Cathy Caruth, Unclaimed Experience: Trauma, Narrative, and History (Baltimore, MD: The Johns Hopkins University Press, 1996), 3.
47 Richard J. McNally, Remembering Trauma (Cambridge, MA: The Belknap Press of Harvard University Press, 2003).
48 Caruth, Unclaimed Experience, 4.
Begley, who spent the war years hiding in Poland with his mother under a fake identity, is—although not a survivor of a concentration camp—treated here as a witness to the traumatic events of the Holocaust. Begley, in contrast to Wiesel, never claims that his account is mnemonically or historically accurate (he writes, after all, nearly fifty years after the events). On the contrary, he offers the possibility of fictional freedom and narrative contortion. Begley claims "the freedom to invent, consistent with the profound moral and psychological truth of the story." 52 He holds that "the passage of time and exile" functions as "a psychic screen" that helped him grapple with the topic. 53 Like Wiesel, he needed temporal distance. However, in contrast to Wiesel, who stresses mnemonic and historical accuracy, Begley emphasizes that Wartime Lies represents one possible memory and one possible identity. This does not necessarily make his account a lie, but it allows for a deeper understanding of the process of fictionalization. Both Wiesel and Begley stress that temporal distance is crucial in understanding and narrating the experiences, but Begley writes that Wartime Lies is "quintessentially a work of fiction and not an autobiography or memoir, and that I had to write the story […] in the form of a novel. The form was no less necessary than the emotional distance from the events I was going to evoke conferred by exile and the passage of time." 54

A valuable excursus here is the Holocaust hoax Fragments: Memories of a Childhood, 1939–1948 by Binjamin Wilkomirski, alias Bruno Doesekker, alias Bruno Grosjean, because it raises questions about the possibility of an identification with the Holocaust so strong that one creates one's own account of it. 55 To be more precise, Wilkomirski identifies so deeply with the ex post facto collective narratives as to make them his new, lived identity. Fragments provides an interesting counterpoint to the survivor testimonies and bridges the gap not only to authors like Saul Bellow, 56 Philip Roth, 57 and Cynthia Ozick, 58 who are contemporaries of the Holocaust, although spatially and experientially removed from the events, but also to second- and third-generation writers. I do not intend to qualify one account as more genuine or less truthful than another. It is much more fruitful to analyze the implications that arise with this account and its reception—the need to identify with a traumatic event and a certain narrative about it, as well as the need to establish one's own account of a memory without it being one's own memory. It is a striking example of how a collective trauma can be used to inscribe one's identity.

49 Naomi Seidman, "Elie Wiesel and the Scandal of Jewish Rage," Jewish Social Studies: New Series 3 (1996): 8.
50 Seidman, "Scandal," 7.
51 Seidman, "Scandal," 7.
52 Louis Begley, Wartime Lies (New York: Ballantine Books, 2004), 201.
53 Begley, Wartime Lies, 201.

In a poetic narrative mode, the novels engage with the individual's universal problems of contemporaneity in a present infused with traumatic narratives. Tradition, community, and culture are paramount. In the historical nexus, the individuals need to inscribe themselves into their respective version of history by creating meaning from the past.
A whole self and a livable present are possible only by bringing past, present, and future into co(n)temporaneity.

54 Begley, Wartime Lies, 200.
55 Binjamin Wilkomirski, Fragments: Memories of a Childhood, 1939–1948 (London: Picador, 1997).
56 Saul Bellow, Mr. Sammler's Planet (New York: Penguin, 1995).
57 Philip Roth, The Ghost Writer (New York: The Library of America, 2007).
58 Cynthia Ozick, The Shawl (New York: Vintage, 1990).

Second Generation

The second generation's literary output is marked by the conflict arising from the inherited parental trauma, which its members feel as if it were their own but of which they have no personal memory. They must necessarily overcome past traumata: not only the Holocaust, but also forgetfulness, fragmentation, their personal fears, and unrealized hopes. Living in the present cannot be achieved if the individual is overwhelmed by the past. The protagonists in second-generation novels—mainly members of the second generation themselves—yearn for a present wholeness of self and a stable identity. The parents' trauma poses a tremendous legacy that counteracts this yearning. Not only do the children bear witness to the trauma, but they also must "never forget!" so that later generations will know what happened and that the trauma may be, at some point, healed. Second-generation accounts are marked by features like parental silence about their experiences, parental stories of the horrors without hope or contextualization, witnessing the parents' futile attempts to start over again, and the self-imposed role as witness to the results of atrocities as well as to the parents' survival. All of this results in a feeling of helplessness and rage, and a near incapability of establishing a stable and independent identity in the face of this legacy. While the second generation bears the brunt of the ongoing trauma of the Holocaust, it is an indispensable link in working through the collective trauma. As proxy to the trauma, the second generation is overwhelmed by the individual and collective obligations to create a future. In doing so, it is torn in the conflict between its inheritance and its own sense of self.

Thane Rosenbaum's Second Hand Smoke novelizes the helplessness, rage, and difficulties a child of survivors faces. Trained by his mother to be a Jewish nemesis who pursues all anti-Semites and finally fights back, Duncan Katz grows up as "a child of trauma. Not of love, or happiness, or exceptional wealth. Just trauma. And a nightmare, too." 59 Significantly, his parents' silence about their Holocaust past fills Duncan with fantasies about the events that allow him "to encounter them" imaginatively in the hope of feeling at least something, "to be swallowed up […], to become a prisoner, to actually feel the horror" his parents felt and thereby break the excruciating tension he experiences between their Holocaust suffering and his seemingly comfortable post-Holocaust world. 60 Duncan is a witness to "the damage that could never be undone. The true legacy of the Shoah. Lives that were supposed to start all over but couldn't. Halting first steps, then the stumbles.
The inexhaustible sorrow of the parents; the imminent recognition of the children." 61 For Duncan, children of survivors are "[c]hildren of smoke and skeletons," so that, inadvertently, the "Holocaust shaped those who were survivors of survivors. Inexorably, cruelly, and unfairly so." 62 The development of the narrative is further crafted around the dilemma of survivors who lost children during the Holocaust and had children again. They face the problem of whom to love and whom to care for when their post-Holocaust children are mere surrogates for the lost ones. Duncan, born after the Holocaust, lacks an identity without the possibility of a memory separate from his parents' trauma. He is indoctrinated by his survivor mother to "avenge our deaths." 63 The "ghosts of a robbed childhood" roam this novel of rage, hopelessness, and remembering without the ability to relive the real events. 64 Yet he visits Auschwitz on his trip to Poland, where he wants to find the brother he had never known he had until an uncle reveals his mother's darkest secret. In Birkenau, accompanied by his yogi brother, Duncan experiences a terrifying catharsis that enables him to come to terms with the imagined past and gnawing rage.

59 Thane Rosenbaum, Second Hand Smoke: A Novel (New York: St. Martin's Griffin, 2000), 1.
60 Weissman, Fantasies, 1.
61 Rosenbaum, Second Hand, 2.
62 Rosenbaum, Second Hand, 2.
63 Rosenbaum, Second Hand, 32.
64 Rosenbaum, Second Hand, 19.

It is this possibility of cathartic moments, which link first- and second-generation trauma narratives, that constitutes the importance of this generation. Duncan, raised and trained by his traumatized mother "to be alone, to do without," faces his greatest fears in the barracks of Birkenau. He is "paralyzed by the fear of being abandoned. Even my insides know it. My intestines are strangling each other." 65 Duncan, the Nazi hunter, bodybuilder, and martial artist, who fulfills his mother's dreams of an unquenchable flame against anti-Semitic atrocities, is incapable of crushing the fantasies of being selected, incarcerated, and tortured by the Nazis that surface from his (post-)memories. These memories are fantasies born from the countless testimonials he has read, which constitute the overall collective narrative of the Holocaust from which the post-generations draw their knowledge. Duncan needs his brother, Isaac, who, though born after the Holocaust in a DP camp, is held to be a death-camp survivor by his Polish neighbors and works as a yoga teacher in Warsaw, to make sense of these fears and come to terms with past and present traumata. "Your parents died too soon, and then your wife left you and took your child," Isaac reminds Duncan. "Your stomach is not wrong; you just don't know how to live with the grief." 66 Isaac highlights the close connectedness between the past trauma and the present grief. Duncan must separate the two to establish a contemporaneity instead of his state of a past overwriting his present. Duncan, Isaac points out, should acknowledge that his mother's experiences were of a different nature because she experienced the death camps.
Isaac explains to his collapsing brother that he "can't live a normal life with images of the Holocaust playing in [his] head" by showing Duncan the numbers that their mother branded on his forearm: "I am not afraid of them," Isaac says. "You have no numbers, and yet they terrify you. You are hiding from yourself. You are a stranger to yourself. We are locked in this barracks, but you are trapped in yourself even tighter. You buried yourself alive in your own tomb." 67 Duncan's problems originate in his struggle to live an identity that is not his but is created by the narratives he forms from his parents' silence and trauma, as well as from the seemingly infinite number of Holocaust books cluttering his apartment. At first, Duncan only realizes that he is

caught in a time warp, trapped in a cattle car. Everything is about loss. It feels like there is no difference between my life and what happened to our family during the war. […] My life is like one big atonement. Everything is Kaddish. Kristallnacht all over again, but this time the glass is not from broken storefronts, but families. 68

Duncan emblematizes the second-generation paradigm. He is excruciatingly aware of the difference between then and now, as indicated by his choice of words like "dress rehearsal," "time warp," and "what happened during the war." In fact, it is especially this awareness of a seemingly unbridgeable temporal gap that makes it so unbearable for him. At the same time, he feels the need to pass on the trauma and, in an everlasting process of mourning, violates the very principles of the Kaddish, because for him "[t]o mourn is to forget." 69 Duncan transforms the second-generation burden, that is, the inability to feel what his parents felt during the Holocaust, into his "birthright," which constitutes "a permanent scar." 70 To heal this scar, Duncan needs his brother, who symbolizes the connection to the actual trauma and helps Duncan see that he cannot continue to make his parents' past trauma his own contemporaneous identity. Isaac tells Duncan that "rage is all about holding on to something that you don't need but are afraid to let go. You have a life force inside you. It is time to use it for living, and not as a prison." 71 After living through his own personal—but imagined—Auschwitz nightmare, Duncan is finally able to heed this advice. He establishes his own narrative, in which he incorporates the narrative of his parents' past but which no longer overwrites his own present identity.

65 Rosenbaum, Second Hand, 262.
66 Rosenbaum, Second Hand, 262.
67 Rosenbaum, Second Hand, 262.
68 Rosenbaum, Second Hand, 262–63.
69 Rosenbaum, Second Hand, 264.

Third Generation

Focusing on continuing the legacy, second-generation literature faces the dilemma that it can only approach the trauma, memory, and history imaginatively. To come to terms with the trauma, second-generation writers incorporate historical (factual) accounts into their fictional narratives. With direct contact with survivors and witnesses quickly becoming less possible, such incorporation and transmission become even more pressing for the third generation. Unlike the children, the survivors' grandchildren have more distance from the actual accounts.
However, scholarship suggests that many survivors find it easier to recount their experiences to their grandchildren than to their children, or to revisit the Old Country accompanied by them. Paired with the third generation's curiosity about their ancestors' past, this combination of distance and closeness produces a plethora of new ways of engaging with the trauma. What is more, members of the third generation are more deeply ingrained in the American collective identity and are more likely to engage with the Holocaust in their own individual way.

Dara Horn's The World to Come highlights the difficulties the third generation faces: the need to bear witness to memories and traumata of the Holocaust in a world where true witnesses are dying out fast. However, The World to Come emphasizes that such a world to come is not as bleak as it seems. Even in the face of a history full of trauma, pogroms, and atrocities, the past is viewed as an inalienable part of a person's contemporaneity. "I believe," says Ben, the protagonist of The World to Come, "that when people die, they go to the same places as all the people who haven't yet been born. That's why it's called the world to come, because that's where they make the new souls for the future." 72 One's predecessors are bound to one's present identity because one is formed by their actions, decisions, choices, and lives. Simultaneously, the past intricately shapes and influences present and future, which is then bound back to a new generation arriving. Past, present, and future are inextricably linked in Horn's narrative conception of the world to come.

70 Rosenbaum, Second Hand, 264.
71 Rosenbaum, Second Hand, 264.
72 Dara Horn, The World to Come: A Novel (New York: Norton, 2006), 124.

The World to Come is filled with these intricate relapses into a past before the Holocaust, a past where trauma already existed in the cloth of narrative. But for Horn, this history full of memory and loss is precisely what constitutes the fascination with it. One character ties "himself up in ropes of memory, caged himself in with iron bars of memory, drew the curtains and hid himself in the dark tomb which he filled with an entire world of memory – until all that was missing was color and light." 73 It is a powerful play on the ambivalence between the actual memory and the belated writing. The characters in The World to Come are torn between preservation (the conservation of knowledge from all previous generations that is taught to the unborn new generations in the world to come) and destruction (the vital forgetting of being born into one's present), liberation (the possibility to start anew), and enslavement (the generational baggage that comes with the past). In that, they mirror the difficulties of a generation that can approach the trauma increasingly only through secondhand narratives. These stories of the past fill one's present because the deceased "give them all the raw material of their souls, like their talents and their brains and their potential." 74 There still is hope because "it's up to the new ones, once they're born, what they'll use and what they won't, but that's what everyone who dies is doing, I think.
They get to decide what kind of people the new ones might be able to become." 75 In this respect, Horn's work is typical of third-generation writing, as it powerfully shows that the third generation is also characterized by an urge to move on from, rather than merely continue, the trauma. The worlds of the past, present, and future are continuously shifting in and out of co(n)temporaneity. The world to come becomes the world before: the Yiddish world of Eastern Europe and its plethora of narrative possibilities becomes a foil for the present dreams for the future. Horn and other third-generation writers acknowledge the Holocaust trauma but try to connect to this pre-Holocaust world to create a new life in the post-Holocaust one. The world to come—a future reference in itself—is where present and future merge with the past through predecessorial instruction. In this process, imaginative narratives are paramount because they create meaningful stories that allow for coherence and grounding by filling gaps either fictionally or by incorporating memories of other people.

73 Horn, World to Come, 200.
74 Horn, World to Come, 124.
75 Horn, World to Come, 124.

Future Contemporaneity

The selected novels deploy a historicization of fiction and a fictionalization of history. Memory and history—as fact as well as fictional narrative—are used to produce meaning, to retell the past(s) in order to create, shape, or meet the present. The novels are bound by their need to bear witness to a traumatic past that entered the fabric of individual and collective identity, although, for the nonwitness generations, it is a past they can never experience themselves. First-generation writers such as Begley and second- and third-generation writers like Rosenbaum and Horn alike construct a conception of the past as one possibility among others. Survivors' identities are shaped by the co(n)temporaneity of memories of a trauma that shattered their identities and their past up to the Holocaust. For the following generations, a considerable part of their identity is constituted by the co(n)temporaneity of a past trauma that they have not experienced first-hand but that, as a collective trauma narrative, overwhelms them nevertheless. If the process of remembering and forgetting is disrupted, both survivors and following generations will suffer a loss of identity—a footing in their present—as the collective trauma narratives take over. The past, in this conception, is a histonarrativistic nexus constituted by individual and collective memories, facts and narratives, to which the individual feels an urge to connect. While for Wiesel the Auschwitz self, although shattered metaphorically as an image in a mirror, will always remain a present reality, Duncan Katz, in Second Hand Smoke, needs to develop an identity that is his alone and not a reenactment of other people's memories. In The World to Come, what Alexander calls a progressive narrative is more palpable again. The co(n)temporaneous identity consists of the past and moves into a future, which is causally connected to the past of the following generations.
In this nexus, past, present, and future constitute a conflux of time in the individuals' minds, enabling them to move intellectually backward and forward in time and to generationally reshape the collective trauma narratives. Here, trauma can be both cause and effect of disconnection. It might even be the case that reinvention is a new form of trauma. The past is a necessary factor in the establishment of an identity grounded contemporaneously. The lack of a past, or its traumatization, is a cause of fragmentation. Memory and tradition together form identity in the process of making history co(n)temporaneous. Likewise, the present can be used to create meaning in the past and shape identity. The particular version of a collective identity changes with each individual and each new generation, as everyone applies his or her personal history and views on collective history to create a new identity. Individual traumata are continuously retold in different narratives to work out and create meaning from various experiences. Collective traumata need to incorporate these stories as well as to meet the audiences' respective imaginative or real contemporaneities. Identity is always in dispute within the novels and their audiences. Identity, as presented and interpreted here, requires constant redefinition and reinterpretation by each individual as well as by entire communities. It must be created from our versions of our pasts—the histories we construct from them—and our present—the meaning we extract from our own worlds. Identity, like history, becomes something the individual creates and chooses. 76 The characters grapple with their real or invented origins and where they will lead them in the present and future. 77 The emphasis lies on the essentiality of the past and a meaningful connection to tradition—retelling stories of the past with ever-new meanings, stressing the need to link the past and the present. 78 The authors and their characters explore the origins of contemporaneous identities. They are united in their attempt to create a meaningful world worth living in, and in their search for meaning and legitimacy in a world that is shaped by the traumata, memories, aspirations, and choices of a nexus of generations of ancestors. 79

Memory and the past, however, are not just relative or arbitrary stories that can be reinvented according to the individual's pleasure. Certain historical landmarks are necessary, unalterable parts of each story, an underlying individual trauma that cannot and must not be negotiated. Facts like the birth and death of people, or the Shoah, cannot be invented away; one must take them into account while one reinvents one's own personal, meaningful story around or from them.

76 Bryan Cheyette, "On Being a Jewish Critic," in Anglophone Jewish Literature, ed. Axel Stähler (London: Routledge, 2007), 34.
77 Josh Lambert, American Jewish Fiction (Philadelphia: Jewish Publication Society, 2009), 5.
78 Melissa Friedling, "Feminisms and the Jewish Mother Syndrome: Identity, Autobiography, and the Rhetoric of Addiction," Discourse 19 (1996): 109.
79 Jeffrey Rubin-Dorsky, "Philip Roth and American Jewish Identity: The Question of Authenticity," American Literary History 13 (2001): 86.
The texts represent continued explorations in search of meaning in the history and histories they have inherited from their forebears. Moreover, they emblematize their urge to interpret these meanings in new terms for a new generation. 80 Nevertheless, the novels emphasize that memory is almost never accurate; what matters is the meaning and the possibility of creating a whole self in a contemporaneity. Historical truth is only the marginal framework within which the manifold possible realities are narrativized. These narratives create a special sense of identity and co(n)temporaneity: identity not only in the sense of identifying oneself with the incidents of history and tradition, but also in the sense that one's identity is constituted by these incidents as individual stories form collective narratives. 81 The novels allow this through their conception of "a unique identity which springs from [an] origin and [a] story." 82 Each individual has to take this basis of cultural identity to start on their path toward creating and exploring their origins, thereby creating and exploring themselves, and thereby creating collective narratives to which others can relate.

80 Yosef H. Yerushalmi, Zakhor: Jewish History and Jewish Memory (Seattle: Washington University Press, 1982), 18.
81 Max I. Dimont, The Jews in America: The Roots, History, and Destiny of American Jews (New York: Simon and Schuster, 1978), 216.
82 Ruth R. Wisse, "Jewish American Renaissance," in The Cambridge Companion to Jewish American Literature, ed. Hana Wirth-Nesher and Michael P. Kramer (Cambridge: Cambridge University Press, 2003), 199.
https://www.researchgate.net/publication/321403769_Historical_Contemporaneity_and_Contemporaneous_Historicity_Creation_of_Meaning_and_Identity_in_Postwar_Trauma_Narratives
Abstract

This dissertation addresses two basic questions: (1) Are people with highly superior autobiographical memory (HSAM) susceptible to memory distortions? (2) What is different about them that might offer clues that would help explain their ability?

To answer the first question thoroughly, HSAM individuals and age-matched controls participated in a number of memory distortion tasks. In the DRM memory distortion word-list paradigm, we found that HSAM participants had comparably high rates of critical lure endorsement, indicating a vulnerability to false memories brought about by associations. They also participated in a classic misinformation experiment with photographic slides as the original event and text narratives containing some pieces of misinformation. At the subsequent memory test, HSAM individuals indicated more false memories than control participants, a finding that became non-significant when adjusting for individual differences in absorption. After a subsequent source test, HSAM and control participants had comparable numbers of false memories from misinformation. In semi-autobiographical memory distortion tasks, HSAM and control participants had fairly similar rates overall. For example, in a nonexistent news footage task using suggestion (also known as the "crashing memory" paradigm), 10% of HSAM individuals said they had seen the footage (a further 10% indicated maybe/unsure), whilst 18% of controls did (5% maybe; ns). A guided imagery task, with the same nonexistent footage as the target event, produced similarly increased rates of false report in HSAM (17% changed from "no" to "yes") and control (10% from "no" to "yes") participants. Memory for their emotions in the week after 9/11 was similarly inconsistent in HSAM and control participants. These results suggest that, relative to controls, HSAM individuals are as susceptible to both misinformation and reappraisals when the target events are semi-autobiographical.

The second main research question asked what is different about HSAM individuals that might give us clues as to why they have their ability. To answer this, we measured HSAM participants and age/gender-matched controls on a number of behavioral measures to test three main hypotheses: imaginative absorption, emotional arousal, and sleep. HSAM participants were significantly higher than controls on two dispositions: absorption and fantasy proneness. These two dispositions were associated with a measure of HSAM ability within the superior memory participants. The emotional arousal hypothesis yielded only weak support. The sleep hypothesis was not supported in terms of quantity, but sleep quality may be a small factor worthy of further research. Other individual differences are also documented. Speculative pathways describing how absorption and fantasizing could lead to enhanced autobiographical memory are discussed.
https://escholarship.org/uc/item/47w9488x
Professor Dovgopolova, you are currently in Odesa. Just this morning there were new reports that the city has been under attack during the night by Russian missiles. So of course our first question is: How are you and are you safe?

Here in Ukraine, no place is absolutely safe at the moment, but Odesa is comparably calm. Our army has stopped the Russian troops in the neighboring city Mykolaiv. However, we have a threat from the sea and every day there are rockets. Some are destroyed by our air defense, some find their aim. But for the moment we are happy to have no human victims in our city.

What was your first reaction when the invasion began on 24th of February?

It was a shock. We woke up at five o'clock in the morning from the blows. It really seemed impossible! Of course, we all knew about the preparations and the Russian troops at the border. But it was impossible to believe, even though we have been living in a situation of war since 2014. In our minds, we knew about this possibility, but our souls could not believe it. However, after a couple of days, maybe two weeks, the situation changed. We understood how powerful our defense is and that we ought to get ready for a long struggle. Crowds of people joined the volunteering activities. We have a wonderful tradition of volunteering in Odesa. Those who could opened their businesses, the trams are running, cafés are open and people go to work. They try to live normal lives and to support our economy. You can even buy flowers in the streets! Many people were very surprised when I posted a picture of flowers on my window sill on Facebook. They asked: How is this possible? But this is part of our answer to the war, because we cannot all become mad. We ought to have a healthy mind, because in this situation the greatest threat is our panic. The Russian warships, which we can see from the shore, do not need to come so close – their rockets could reach us from Crimea – so they come to push us to panic. Of course it is frightening when you see seven warships on the horizon, directed at the city. But we understand that we ought to stay strong and not fall into panic.

How do the citizens of Odesa support one another?

There is a great feeling of unity and solidarity. We can see it everywhere: For example, our local restaurants cook meals for the territorial defense, the actors of our theatres prepare new performances on the topic of war, and musicians give improvised concerts from balconies and in the streets. That is really great, because of course we know about all the dreadful things which happened in Mariupol, Bucha, and other cities of Ukraine, and we are in constant tension. So it is very important to support one another.

In the past, there have been tensions in Odesa among different groups of society, especially since the beginning of the war in 2014. With the project "Past / Future / Art" you and your team, supported by forumZFD, have worked on bridging these divisions and opening spaces for dialogue on traumatic events such as the violent clashes between protesters in 2014. How united are the people in Odesa today?

The situation has changed dramatically. Our project had addressed the problems which we saw in Ukraine, trying to find a collective memory, which contains the memories and identities of different regions, and to show them as a resource. Now, all these problems are on the margins, and the Ukrainian society is very united in the opposition to the aggression.
People from different regions meet one another, as millions had to flee from their homes. So the questions of the past are not present now. But we meet the new challenges: The name of our project is "Past / Future / Art". Now, the aspect of "future" is very important for us. It can be an instrument of resilience and give us power to oppose this attack. To have power means to have some image of the future. Ukraine not only needs military and humanitarian support, but we also need to work inside our society to find these instruments of resilience.

And are you worried that the conflicts within the Ukrainian society will break out again after the war?

For sure this level of solidarity is possible because of the situation of war. Afterwards, some problems will return. But with these new experiences, we have additional arguments for the public discussions. Some people in Ukraine had fears with regard to the Russian speaking cities, because they were sure that these cities would be 'waiting for Putin'. These arguments were used by political leaders, who worked with the metaphor of "two Ukraines": East and West. In their point of view, Eastern Ukraine was "pro-Russian", and Western Ukraine was "Ukrainian". Now we can show that this argument was wrong. Look at Charkiv, look at Cherson, look at these Russian speaking cities, and you will see that this is Ukraine. Even in cities that are now occupied, people are protesting with Ukrainian flags, in front of Russian soldiers. This experience is so powerful that there is no possibility that it could somehow just dissolve in a couple of weeks after the war. I hope that we will see the end of this war, and then we will work with civil society to create a new level of solidarity. In opposition to the authoritarian regime in Russia, it is now easy to see the real perspective of Ukraine: a perspective built on human rights, human dignity, and on our European choice. For me these are the main notions which we can see now and which we will use after the war to build our collective memory.

The war is not only fought with weapons, but also with words. There is a lot of false information on historical facts that is being distributed. What is your response as a historian?

For a very long time, such rhetoric was considered the talk of foolish people. As a professional historian, you do not even have a way to respond to such absurd claims. When Putin says that Ukraine is a false state which was invented by Lenin, I do not know what to answer because my first reaction is: He is crazy. There is no connection with reality. We cannot have a discussion about historical arguments, when there are no arguments at all. So that is why for a very long time, historians did not discuss these theories. We considered our task to be in our academic space. For me personally, this was my problem. When I saw in 2014 how these false arguments were working to recruit people, to make them take up weapons and to kill one another, I was so ashamed that I left the academic field and started to launch public projects to inform people and to create spaces for dialogue. Because if a person without historical education listens to something on the television, he or she might think: "Maybe it is right". I understood that all these problems which I had considered to be academic problems were actually not academic. History is a powerful weapon for manipulation, and from year to year the level of manipulation by Russia increased.
One aim of the project "Past / Future / Art" was to develop a common understanding of history. Why is that important?

Our collective memory is at the roots of our social identity. It mirrors our values, for example when we try to explain why this or that historic figure is important to us. The past is not the present, all these events are long gone. But it is important to discuss them. Our future depends on these evaluations: What do we want to protect in our society and what do we want to reject? Our past forms the basis on which we work on our future.

How can peacebuilding continue in these times of war?

I think that sharing the different experiences that people are making right now will create the basis for understanding. At the moment, each of us is in a certain point or place, and we have no opportunity to share our experiences, apart from the personal level. After the war, we need to create practices and instruments to learn about other people's experiences – for example of people who lived in occupied territories, who lived in Mariupol during the siege, who fought in the defense of Kyiv, who fled to Western Ukraine or other countries. All these experiences are important and need to be presented after the war. We will collect them and we will show how we stayed strong in this situation. I am sure that this can be an instrument for reconciliation. There could be different activities: It could be publications, oral history projects, spaces for dialogue, theater performances… All these practices could be employed.

So there is a lot of work ahead of you.

Yes, certainly. We have already started: For example, with the team of our project "Past / Future / Art" and together with experts, we publish information on topics such as the protection of cultural heritage or on the Criminal Court in The Hague. This is important, because when we understand that these crimes could be punished, this can help to overcome the pain. International law is a very complicated field, so we see the need to 'translate' some messages into a language which people can actually understand. We explain for example about international courts on genocide and war crimes, and we started to publish materials on the criminal prosecution of propagandists. When Putin spoke about the "final solution of the Ukrainian question", we explained the meaning of this rhetoric and pointed out the historical parallels. We also prepared materials on historic cases such as the Nuremberg trials or the genocide in Rwanda. It is important for people to see the ways through which justice can be achieved. We decided to publish short materials, but on a constant basis. And the reactions have shown that the people really need this kind of information – something that keeps their mind busy every day.

Is there anything you would like to tell our readers around the world?

For us in Ukraine it is very important to show to other countries that we are people who want to create a free and independent country. And we want to be visible. To give just one example: Academic centers in Europe have long ignored the field of Ukrainian studies. Often, they offer Russian Studies, and then something abstract on Eastern Europe, something dissolved in the Russian sphere of influence. It is important to renew this picture, because Ukraine is very different from Russia – and this has been the case for many centuries, not only since the collapse of the Soviet Union. We have our own wonderful culture.
I hope that after the war Ukraine will find its place on the map of European culture and politics.

Oksana Dovgopolova spoke with Hannah Sanders (forumZFD) on 8th April 2022.

About: Oksana Dovgopolova is a historian and professor of philosophy at the Odesa National University. The fields of her scientific interest are the philosophy of history and memory studies. From 2014 onwards, she took part in dialogic activities in Odesa and later initiated activities in the public space with projects on informal education and the reconciliation of society in the context of collective memory in Ukraine. She has worked with museums and cultural institutions in Odesa, Kyiv, Gdansk and Melitopol. With the support of forumZFD, Oksana Dovgopolova and her colleague Kateryna Semenyuk have developed "Past / Future / Art", a cultural memory platform which implements educational and research projects, as well as a public program to involve the general public in working through the past.
https://www.forumzfd.de/en/we-want-free-and-independent-country
Daniella Wurst, PhD Candidate in Latin American and Iberian Cultures

What drew you to your subject or discipline?

Coming from Peru, a country that experienced a twenty-year internal armed conflict from 1980 to 2000, I have always been drawn to the question of memory and how a nation and the individual remember the past. Looking back on my experience of growing up during this period of political turmoil, I noticed the overlapping layers of my own memory: the way memories of childhood could be juxtaposed with memories of a turbulent time in national history. As a literary scholar, I became interested in the fruitful dialogue between personal and collective memory, and in the artistic representations and narratives that were born from such encounters. In the field of memory studies, I found an expansive and interdisciplinary platform where I could put visual and literary analysis in conversation with different disciplines: photography criticism, feminist scholarship, queer theory, and philosophical debates regarding history and temporality.

Explain your research in fewer than 500 words.

I trace the relationship between memory (both individual and collective), temporality, and aesthetic interventions in Latin American countries with a history of political and state violence—specifically Chile, Argentina, and Peru. Using a comparative framework, I put into dialogue how cultural memory representations can reveal a host of economic, cultural, and political tensions in the present. For this, I take into consideration what Andreas Huyssen has called the "memory boom": the implosion of a global and contemporary obsession with the return to the past, and the subsequent creation of a memory industry that caters to the demands of this new market. In Tourists of History (2007), Marita Sturken argues that the imposition of a memory market runs the risk of producing a superficial relationship between the citizen and his or her national history. She describes this relationship as a particular mode in which the public is encouraged to experience history through consumerism, popular culture, souvenirs, and forms of tourism (museums, horror tourism, and architectural reenactments of the past). The experiencing of history through consumerism can lead to citizen disengagement and to a foreclosure of political engagement. In my research, I explore how cultural memory objects are inscribed in, respond to, or resist specific demands, whether from institutionalized cultural imperatives or the market demands of consumerism. I am particularly interested in how the visual language and literature of Argentina, Chile, and Peru can challenge both market imperatives that turn memory into a commodity, and also official discourses of memory, particularly those written by the truth commissions tasked with revealing past state wrongdoing in the hope of resolving conflict during the transition to democracy. I contend that though official truth commission reports sincerely reject amnesia and strive for reconciliation, these historical narratives adopt a chronological linear temporality that provides an artificial closure and settles the past into an informative, insular, cautionary tale. By pacifying the troublesome force of memory into a narrative that promises the healing of a newly reconciled nation, these state-sponsored narratives fail to take into account the porous boundaries of time: the way the past persists into the present through economic policies and unresolved political and social tensions.
I explore artistic work—contemporary photographic projects, documentaries, visual and literary narratives—that is out of sync with the imperatives of the memory market or with state-sponsored narratives that promote a superficial recognition of the past in order to achieve an equally superficial sense of national belonging. In order to think about the fruitful space that opens up between subjective and collective history, I turn to the concept of "postmemory" introduced by Marianne Hirsch, defined as the relationship the "generation after" bears to the personal, collective, and cultural trauma of the ones who came before. By examining postmemorial work that reimagines the past through investment, projection, and creation, I think about how the subjective experience in the transmission of memory can destabilize dominant (progressive, linear) arrangements of time as well as our own understanding of the past.

What impact do you hope your research will have?

I strongly believe that, now more than ever, our understanding of the past is crucial to our understanding of the present and of history. I hope that my research can bring forth new ways to enrich and intellectually contribute to the fields of both Latin American studies and memory studies. I am conscious of and grateful for the influence that the field of memory studies has had on recent Latin American scholarship. Yet I am wary of how often the theoretical frameworks of memory studies are blindly applied to historical contexts and cultural objects without taking into account the richness of their specificity and the way this specificity can be used to challenge and enrich these theoretical frameworks. My own research and involvement in the Cultural Memory University Seminar here at Columbia has been crucial in showing me how important emergent voices and diverse perspectives are to broadening the theoretical canon of the discipline of memory studies. Thinking about the memory studies framework through the lens of Argentina, Chile, and Peru has made me strive to open a space of dialogue that showcases the multiple ways memory work created in Latin America can contest, uncover, and destabilize historical accounts of the past, as well as the idea of a stable, uncontested collective memory. I hope my research can move beyond the question of what memory practices reflect, and also address what they can disrupt and reveal as they intersect with universal contemporary memory paradigms.
https://www.gsas.columbia.edu/news/daniella-wurst-phd-candidate-latin-american-and-iberian-cultures
The German Mining Museum Bochum (DBM) already plays a key role in commemorating the Ruhr region's industrial heritage. The end of coal mining in the area in 2018 will further cement this role. Within this context, the DBM aims to investigate the problematic aspects of authenticity in industrial culture with respect to the material dimensions of historical remembering and forgetting. Focusing on the Ruhr region's material industrial heritage, it will first look into the role that objects play in forming cultural identity. The aim is to analyse processes of social selection and negotiation, which are influenced and controlled by a variety of agents from the spheres of politics, culture, and academia, and by the general public. Furthermore, it is important that research museums, in particular, make transparent selection mechanisms - such as conserving, forgetting, or destroying - and the alteration of historical objects through restoration, for instance.

At the Deutsches Museum, the investigation of the collection's holdings within the framework of the Deutsches Museum Digital Initiative involves detailed cataloguing and digitalization. The collection of some 3,000 rolls of musical notation for self-playing pianos is the pilot project. The aim is to create a homepage with a database and an audiovisual presentation of the scanned rolls, and to enable diverse research on the topic. Piano rolls are historical attempts to conserve the ephemeral art of music. Over a century ago they were widely in use as a storage medium for the reproduction of original music. A unique method of notation was developed to conserve the music, thus giving it material form. The aim is to make this music accessible again and to save it for posterity through the process of digitalization. Piano rolls represent the first transformation of the musical performance, the digital transformation the second. Does the digital reproduction convey the original musical interpretation? Or does the transformation create an aesthetic of its own? These questions are central to research into music performance and lead into discussions about how the various layers of authenticity can be reflected in museum displays.

The German Maritime Museum in Bremerhaven (DSM) will be exploring the ongoing technical development of virtual reconstruction and the opportunities it offers to preserve decayed objects, or to reconstruct objects that no longer exist, using the example of the "Bremer Kogge" (Bremen cog). The project is being carried out together with a Fraunhofer Institute from the FALKE Research Alliance (Research Alliance Cultural Heritage). The research project aims to examine whether the museum is purely a site of the original or whether it is also a place of authentic experience, inextricably linked to the physical presence of the spectator.

The Georg Eckert Institute for International Textbook Research (GEI) in Braunschweig aims to analyze, in international comparison, the actors, discourses and practices involved in communicating history via the medium of the school textbook, and the tension between remembering and forgetting. Its researchers will combine three methodological and theoretical approaches. Firstly, they will examine the negotiation processes that take place prior to the production of history textbooks and that define which representations of history are relevant in schools. Secondly, they will carry out a diachronic comparative study of the representations of history that were published in school textbooks.
With reference to the concept of path dependence, they will examine to what extent historical accounts currently in circulation make reference to earlier versions. Thirdly, drawing on practice theory, the GEI projects ultimately seek to investigate how pupils and teachers adopt the representations of the past found in school history books.

In museums, the changing representation of the past is an indicator of changing concepts about history and society. By deconstructing those representations, we can reveal practices of authorization and show, too, how the past has been suppressed. The Germanisches Nationalmuseum (GNM) in Nuremberg aims to show the parameters that determine the “reality of the past” in a museum context, using appropriate methods to engage visitors. Societal issues, but also conservation issues, such as age and state of repair, determine which parts of our material culture are deemed to be worth preserving. Sometimes, intentional manipulation or adaptation to new functions also plays a role. The GNM project aims to illustrate this using the example of the Behaim Globe, the oldest existing representation of the earth in spherical form, which was made immediately before the discovery of the New World in 1492. By looking at a range of aspects, such as the numerous revisions of the cartographic information, restorers’ optimizations of the globe, facsimiles, exhibitions and films about its history, and the globe’s elevation to the highlight of the collection, we will demonstrate what role museums have played and continue to play in such processes. This will be contextualized by displaying the globe together with comparable objects.

The Herzog August Library in Wolfenbüttel (HAB), which was inspired by the spirit of the Renaissance and the Reformation, bears authentic testimony to scholarly and religious learning and to collecting habits in Early Modern Europe. Manuscripts dating back to the pre-Carolingian era are also preserved in the library. The documents that have been brought together over the centuries are themselves part of a system, by virtue of the place allotted to them, and have acquired new authenticity through the process of binding and ordering. This palimpsest of authenticity, which served to underpin social prestige, is an object of study at the HAB, which is not just a library but an independent research centre in its own right. Part of this research includes investigating what was not deemed worthy of collecting, as well as what was.

The project at the Herder Institute for Historical Research on East Central Europe – Institute of the Leibniz Association (HI) in Marburg aims to reveal general forms of aestheticization and scholarly conceptualization of the allegedly “original” form of the nation and the national. It will focus, in part, on conflicts of interpretation and the instrumentalization of traditions. Harnessing new media tools, it seeks to investigate how constructions of “authentic national identity” arose and took shape. Another important aspect of the undertaking will be to examine the contemporary relevance of ideas of authenticity, for example pertaining to concepts of the nation, ethnic minorities or regional identities, and their implications for political culture and the European idea. In addition, it will look at the various ways in which “authentic” aspects of national identity are popularized, trivialized, subjected to irony and showcased in museums.
The project is closely linked to the research program of the Leibniz Graduate School "History, Knowledge, Media in East Central Europe".

The work of the Peace Research Institute Frankfurt (PRIF-HSFK) in Frankfurt am Main and the Zentrum Moderner Orient (ZMO) in Berlin will concentrate on conflict resolution, cultures of commemoration and the process of coming to terms with the past in post-conflict societies. The ZMO will focus on topics such as progress and decline and changing historical perceptions of them; local historiographies and competing claims to interpretative predominance; and the role of cultural and regional specificities in the handing down of traditions. The regions that we focus on are Africa, the Middle East, and Central, South and Southeast Asia. The PRIF, on the other hand, will concentrate on conflicting historical interpretations and the struggle for power and recognition in post-conflict societies. The institute has research expertise on Bosnia-Herzegovina, Macedonia, East Timor, Guinea-Bissau, Angola and Northern Ireland.

The project at the Institute for the German Language (IDS) in Mannheim focuses on the relationship between political and social upheavals, language change and shifting value systems in central commemorative discourses of the Twentieth Century. The goal of the project is to trace, using examples from a diachronic and systematic perspective, how changing ideas about what is “authentic” become visible in texts, how they affect our ideas of history, and how they change over time. In this project, we will consider the concept of collective memory as a category denoting authentication strategies. Collective memory and authenticity are two directly related categories in the sense that, firstly, when an object is recalled it gains the status of an “instance” of collective memory and, secondly, when a memory community labels it as “true” or “genuine” it fulfils the authenticity criterion. Based on this premise, authentication will be depicted as the outcome of social negotiation processes and re-interpretations. At the same time, a special focus will be placed on the relation between these linguistic authentication strategies and the ethics of commemorative discourses, for instance the guilt discourse after the First and Second World Wars.

European societies made use of distinct historical and religious values and achievements of civilization, which created a predominantly implicit, but in part also explicit, canon of cultural traditions. These canonisations correspond with the “fluid” boundaries, the centres and the peripheries of Europe as a “space of communication”. Within the scope of this project, the Leibniz Institute of European History (IEG) in Mainz will focus on the constructions of authority that determined cultural heritage in Europe. Heuristically, the project distinguishes, firstly, between agents and institutions; secondly, the linguistic “instances” that constitute meaning in the process of authentication; thirdly, the modes of attribution; fourthly, the arguments, strategies and modes of action; and fifthly, the media and discursive formations of such constructions of authority, which are linked with explicit and implicit norms. The IEG will address these questions in the following research topics: the historicity of biblical interpretations; Jewish historiography and social norms; global processes of authentication; and ecclesiastical authorisation strategies.
The Leibniz Institute for Regional Geography (IfL) in Leipzig, and in particular its research groups on “Geovisualisation” and the “History of Geography”, will analyse the role of maps and atlases in processes of commemoration and forgetting. They will explore the significance of maps in complex processes of cultural and knowledge transmission and their function in different communicational contexts. With reference to findings from the field of critical cartography, it is assumed that maps are a form of geocoding that functions via sign systems and is used to portray “reality”. Maps thus have a major role to play in shaping and ordering knowledge; they can be used successfully as instruments for generating consensus and unity because they establish (certain) collective patterns of seeing and construct apparently consistent spaces. The project will examine their significance as powerful instruments for asserting specific interpretations of the past and for creating (new) communities and identities.

The relationship between perpetrators and victims played a constitutive role in the history of the Twentieth Century and its difficult cultural heritage. Identifying oneself as a victim or as belonging to a group of victims appears to be one of the most important strategies for gaining recognition in the culture of commemoration, according to Ulrike Jureit. The project of the Institute of Contemporary History (IfZ) in Munich/Berlin will focus on the construction of victim groups and the impact of their claims to authenticity on collective memory, in international comparison and context. The conflicts between history as an academic discipline and history as commemorative culture can, in part, be traced back to the discourse about perpetrators and victims. When individuals identify themselves as victims, or are recognized as such by others, this often creates ruptures in collective memory that, in turn, impact on academic historiography. From a historical and comparative perspective, the project analyses the mechanisms of victim construction in different national cultures. Firstly, it will explore how societies and societal groups constructed victims’ identities. Secondly, it will focus on the experiences of victims caught between the conflicting poles of personal and group experience and their interpretation of historical events. Thirdly, it will explore the activities of victims’ organizations and the public discourse about appropriate commemoration. Fourthly, the project will analyse the influence of state politics, legislation and justice in the process of constructing victims’ identities.

The joint project of the Leibniz Institute for Regional Development and Structural Planning (IRS) in Erkner and the Centre for Contemporary History (ZZF) in Potsdam will analyse urban identity-building processes and identity attribution, using examples from selected cities. It will focus on public and urbanistic discourses about historic city centres as a topos, and on discourses about the controversial reconstruction of lost buildings in the name of “authenticity”. The project will critically examine established research theses such as the “traditional shift” in urban development around 1975. The goal is to describe the different “regimes of historicity” (François Hartog) in specific European and non-European urban areas from the present right back to the Nineteenth Century.
In addition to the public debates about the historical aspects of urban development, the project plans to draw on documentation held in museums and in the archives of historical associations and grass-roots organizations, to study urban master plans, and to analyse reconstructions of historical buildings and city centres.

Digital media are playing an increasingly important role when it comes to communicating historical information, be it in the form of interactive, walk-in reconstructions of historical places or situations, or the three-dimensional reproduction of historical artefacts. But there has been little investigation into how they impact on the construction of historical meaning in the minds of their audiences. For example, do digital reproductions of historical artefacts represent an adequate substitute for real objects? Or do viewers feel deprived of authentic experience by the use of virtual reality, and does this have a negative impact on comprehension and historical consciousness? Does the reconstruction of historical places and scenarios help spectators to transport themselves back in time and enable them to better imagine this historical world, or does it lead to a loss of distance and critical reflection? And how can reconstructions make visible the difference between what is factual and what is plausible, so that visitors notice this difference and take account of it in their processes of understanding? The Knowledge Media Research Center (KMRC) in Tübingen intends to investigate these issues on the basis of current theories of perception and cognitive psychology, as well as a range of empirical methods in laboratory experiments and field studies, and thus to contribute to the analysis of reception patterns of authentication.

The significance of natural history collections as an infrastructure for research into biodiversity and evolution has increased steadily in recent years. The cultural aspects of such collections are, in addition, being comprehensively considered by the natural history museums of the Leibniz Association: the Berlin Museum für Naturkunde – Leibniz Institute for Evolution and Biodiversity Science (MfN) and the Senckenberg Gesellschaft für Naturforschung – world of biodiversity (SGN) in Frankfurt. In 2013, the MfN established a new department (PAN) to serve as a place of dialogue about its collections and research within the fields of cultural studies and the humanities. Its aim is to deepen our understanding of “nature” by developing new questions, topics and methods. Furthermore, the Senckenberg natural history museums are taking an intercultural approach to natural history. These and other MfN and SGN initiatives are promoting and setting up research and education projects in cooperation with partners from the social sciences, cultural studies and the fine arts. One overarching goal is to investigate and communicate the production and presentation of knowledge at natural history museums in the past and the present, and to conduct an interdisciplinary discourse about their role in the Twenty-First Century. Both organizations are in the process of examining the social, political, cultural and historical contexts in which the concepts of nature, the natural history disciplines, the collections and their public presentation have changed over time.
Contemporary methods and procedures in the field of archaeological restoration enable researchers to be much more precise about the manufacture, the function and the history of archaeological artefacts than was possible in the past. Scientific methods are being used more and more in restoration work to answer questions about how artefacts were made, to identify their constitutive materials and to determine their origin. This development has transformed the work of restorers. No longer are they concerned with getting as many objects as possible ready for display. Instead, the focus has switched to an intensive examination of the individual object as a carrier of information. Admittedly, however, this investigation is often not sufficiently systematic, frequently resulting in misinterpretations. In addition, there is controversy within the field of restoration and related academic disciplines about how far it is permissible to change the existing condition of an archaeological object by, for example, removing deposits on its surface, by augmenting missing parts or even by altering the original substance to gain information. To address these questions relating to historical authenticity, the project at the Römisch-Germanisches Zentralmuseum (RGZM) in Mainz aims to formulate and implement, as an example of best practice, new parameters relating to restoration work.

The Centre for Contemporary History (ZZF) in Potsdam will investigate the emergence of the witness to history as the bearer of authenticity at various levels of historical representation in the Twentieth and Twenty-First Centuries. This figure has only gained public significance over the last fifty years, rising to prominence first with the Eichmann Trial. The idea of the “contemporary witness” linked the “authentic” immediacy of historical experience with the corroboration of contemporary norms and values. The research project compares the rise of this figure with the growing sacralizing aura of the authentic in contemporary material culture and at memorial sites, which attempt to make tangible the “authentic place” and the “authentic object”. While researchers have drawn on “contemporary witnesses” since the establishment of Oral History in Germany in the 1980s, the use of methods from Material Culture studies to analyse historical objects has not yet become established in the field of contemporary history in Germany. We want to take the opportunity in this project to harness these methods for contemporary historical research.
http://www.leibniz-historische-authentizitaet.de/en/research/approaches/
Yue Min Jun is one of the most representative artists in contemporary Chinese art. His works are mainly known for a laughing bald-headed ‘idol’. These idols are gorgeously colourful and grotesque, some even bloody. Though it is a kind of shameless laugh, it contains the pleasure and humour of breaking taboos. Yue’s artistic creation is an exaggerated narrative, an indulgent imagination and a continuous experiment. It highlights the essentials of existence via absurdity. It can be seen that Yue’s objective is to give the major visual image in the scene the most gigantic effect in terms of form and content, with clear and fruitful symbolism and metaphors. Consequently, the visual images enter a realistic layer that can be signified from an unrealistic layer, so that Yue is able to express his deep concern about actual social life. Thus, Yue is creating an ‘idol’ with features of Chinese history and Zeitgeist rather than imitating his own consistent facial expression. Because of Yue’s special way of creating, it becomes natural to pay special attention to the meaning and value of his works. In fact, for an artist who does not wish to hide or conceal, the normal view and conversation can perhaps be more vivid and harmonious. Examining Yue’s works one by one, it always appears that certain terms are particularly noteworthy. Perhaps these are the key to interpreting and analysing Yue’s art.

Absurdity
Yue always directly expresses absurd content in an absurd way, depicting a tattered world via a tattered art form. He displays a mingled cultural ecosystem with so-called ugly image expressions, and this kind of absurd way of creating originates in the paradox and conflict between rational principle and objective reality. Yue’s creative consciousness is inspired by the irreconcilability of living experience, memory and circumstance, revealing the fabrication and deceptiveness of the rapid change in reality in China, or even the absurd living landscape; meanwhile, the metaphor of the mindset that ‘existence is absurdity’ is made. While we are soaked in each seemingly absurd scene, and while the viewer converses with each work of irony, exaggeration, satire and ridicule, we can feel a certain lively eccentricity in human nature, a paradox in existence, and the query raised by the artistic subject. It seems that he is stressing the formalist interest in visual images; that is, the conversion of raffish self-idolatry to pure form, and is displaying this kind of formalist feeling in a dizzying way. Even though these images consist of exaggerated, virtual and perverse treatment, they display the absurdity of life in absurd circumstances. Absurdity and freedom co-exist. What is released from absurdity is another appeal to freedom and liberation deep in the human soul. Underneath the absurdity lies Yue’s mockery of, and resistance to, unreasonable existence in reality.

Irony
Irony is the attempt to express an attitude of criticising and ridiculing various states of absurdity, and to restrictively mingle signs, images and landscapes which are generally confrontational, disharmonious or even conflicting. This is exactly the objective of Yue’s irony - that is, to transform and exaggerate the heroic imagery and the political and historical patterns established by the authorities through imitation and exaggerative treatment, in order to express satire regarding the ideology behind these images and to disclose the absurdity of both the maker and the receiver of eccentricity.
This is the ‘ironic’ feature that stands out in Yue’s pictorial language, the organic construction of his ‘Neo-Idolatry’.

Parody
Adopting post-modernist techniques, Yue appropriates and parodies classic paintings from China and foreign countries. This is a ‘distortion’ of classical concepts, symbols or famous pictorial texts in the Chinese and foreign histories of art and photography. The ‘cosmetic surgery’ he performs on classic works usually extends to copying, duplication and transplantation, mingling the self and the other, poetic sentiment in history and dilemma in reality - this is a kind of ‘doctoring’ beyond words. The key is that he detaches himself from the image and character at the heart of the scene with the obvious intention of removing the mystique from it. Since people’s aesthetic taste is basically stereotyped, Yue’s parody familiarises certain modes in order to control viewers’ expectations, enabling viewers to perceive the amazement generated by a new perspective and the freedom of breaking away from classical regulations. Furthermore, this type of parody attempts to eliminate the interpretative model built on the oppositional relationship between superficiality and depth, reality and unreality, signifier and signified; it does not provide the deep value of modernist or pre-modern classical works, or else generates ‘depth’ only on the surface. The disappearance of depth implies that the traditional mindset of investigating depth is deconstructed, and a new visual tension is produced. The objective of this attempt is to provide a diverse concept with which to recognise not only history and the classics but real society, too. It also treats the classics within the classics and uses the energy remaining in the classics to undermine their control, so as to change the direction of the classic works of the past. By disclosing the limitations of an accepted visual model in order to challenge the authority of a single narrative, Yue’s works not only show that the standards set by these so-called well-known classics are temporary and unreliable, but also enable viewers to experience the pleasure of breaking taboos through profanity, amusement, absurdity and humour, making their encounter with laughter even more thought-provoking.

Maze
Yue mixes the experience of the transformation of Chinese cities accumulated in this drift with his imagination, showing his ability to cross various boundaries freely. Virtual reality passes through the real-virtual binary opposition by way of the maze, so that a mingled relationship appears between time and space and between humans and objects. The lifelike, playful [art] form is colourfully illusive and addictive. This indicates that his creation surpasses the pursuit of ‘realism’ and stresses one of the functions of ‘uncertainty’ in a new context, an unmanageable infinite change. For Yue, this kind of ‘uncertainty’ is not simply a form; it is a concrete but incremental life experience. The dizzying complexity shows the deconstruction of ‘certainty’. The works are divided into two parts: ‘past’ and ‘present’. The tracks of the ‘past’ can usually be traced, since one’s experiences in the ‘past’ are usually determined. Their development is usually associated with the changing process of China. However, the ‘present’ is full of playfulness because one’s fate in the ‘present’ is uncertain. It almost randomly shifts people’s identities across various times and spaces. Their roles are constantly changing and the vestiges of their change cannot be found.
Each of these bizarre wonders ‘drifted here’ stresses the power of capital, information and personnel, which constructs a kind of constant flow of desire. It seems to be a maze, and is a metaphor for a formless and infinite power that is alive.

Pop
Yue’s major art language is pop. He adopts resources like Chinese socialist propaganda paintings as well as Chinese and foreign consumer ads. In his earlier works, many backgrounds comprise symbols such as Tiananmen Square, red flags, the sun, slogans, red lanterns, red balloons and military caps, symbolising the historical tracks of a beautified life and the optimistic, encouraging promotion by authority. Meanwhile, he intentionally mixes and displaces these symbols with the visual characteristics of the desire for consumption in the post-90s Chinese market economy. The imagery of simple, gorgeous, superficial and direct advertising and its dizzying colours has become a highlight of Yue’s works. Following some three decades of Chinese economic reform, China is now developing lifestyles in the ultimate absence of ideology. With cultural tradition voided and political passion and sensitivity cooling, real Chinese lifestyles are relatively Westernised, but they have no spiritual sustenance or support. The pursuit of materialistic and entertaining elements occupies all of daily life. From the late 90s to recent years, aesthetic, leisurely images such as scenery, gardens, flowers, birds and animals - even the sky and the universe - appear in some of Yue’s works. They usually serve as the synecdoche of beautiful things and states of mind. Yue’s works contain not only the experience and memory of individual development but also the aesthetic interests and transformations of the age, constructing the tracks of a life lived amid the changes in Chinese society. Examined from this perspective, Yue’s works precisely suit and correspond to modern society and the essential characteristics of the rush for modernised development. In other words, his works are measuring or emphasising the actual change in Chinese society during the process of transition. Expressing in an exaggerated and humorous way the theatrical gestures and ambiguous states of mind of the false pretence influenced by commercial culture, the works attempt to treat the fashionable and virtual life arising in contemporary consumer society by typifying and popularising it, so as to depict the lively and absurd circumstances of material idolatry in this age.

Multiplicity
Yue’s works are always displayed with multiple permutations of images: bright and fresh shapes of heads, exaggerated limbs, large laughing mouths, and refined, delicate teeth. These images form the consistent characteristics of the symbols in Yue’s art. The conceptual background derives from one of the characteristics of modernised industrial civilisation. Modernised industry is typified by the assembly line; modern high-rise architecture, goods and mass media are standardised and multiplied products, which stress standardisation, repetitiveness and reproducibility. Hence, impersonal facial makeup and masks are essential features of modern social life, as well as the boundaries between classical and modern emotions. Our real-life order is also neatly schemed. Every existence of life is a kind of mechanical duplication. It seems as if the social roles people perform are condensed into a certain format.
These kinds of standardised and multiplied limitations are converted, or even distorted and interfered with, in Yue’s works in order to make us think outside the box of orderly conventional rules.

Treatment
‘Treatment’ implies the change of the treated object. Yue’s treatments work in two ways. One is to copy these images by ‘blenching’, and to cover the original image with brush strokes of ‘rude, aimless rotation’ [Yue’s words; see ‘Treatment - Another Track of Mine’] before it dries. The other is to attach two works to each other with wet grease paint and make a 360-degree rotation. Visual distancing and a blurry effect characterise the newly treated work. The viewer must catch a glimpse of the ‘truth’ of the original by detouring around Yue’s twisted drawing tracks. These are the measures and strategies he adopts to strengthen the ‘treatment’ through experimental destruction. The so-called ‘objectively original picture’ is a pictorial or photographic creation from a particular historical period, attached to the cultural features and ideology of that stage. Images are only symbols within them. Thus, Yue is cleansing his memory and experience of the past rather than treating images. This kind of memory and experience is not private but almost universally shared. Yue’s measuring and investigation of memory and experience starts from cleansing, leading to the overthrow and deconstruction of the original creative consciousness and narrative methods through the ‘treat-treated’ relationship; perhaps this is a new attempt at ‘new drawing’ or ‘anti-drawing’. There is no way, not even a magnificent and monumental historical book, that enables the reminiscer to represent life in the past. There is no kind of news, not even that reported by various Chinese and foreign media, which can comprehensively and objectively record historical truth. But a picture or a photo is always more capable of presenting the inner world and condition of humans in a certain historical stage and reality, because visual images are always more lively when they are placed on the boundary of written characters and language. After a period of examination, the concealed reality and the changes in the age, or even in ideology, would, strikingly, manifest themselves voluntarily. Perhaps for Yue, our ability to create a suitable mode to accommodate complicated human experience is extremely weak; thus people have to constantly change their perspectives, investigate new metaphors and create new forms in order to resist modes or confusion, so that the other memories and narratives of real life that have long been suppressed by tradition and politics can somehow be liberated and expressed.
http://yueminjun.net/contens_en.php?id=26711
One of the main issues characterizing the current debate in the social sciences (with a proliferation of Memory Studies, especially since the 1980s, in the Anglo-Saxon and European area) concerns the role of memory, not only in relation to its theoretical definition but also with reference to its possible use as an interpretative tool in the empirical analysis of social and cultural processes. The studies on the social origin of memory have developed in different fields, from the sociology of Halbwachs, Assmann, Cohen, Lavabre, Zerubavel, Jedlowski, Namer and Jelin, to authors with other philosophical and historical perspectives such as Nora, Ricoeur, Ost, Le Goff, Jenkins, Arendt, Benjamin, Kracauer... All these strands have contributed to a systematization of memory by placing it in a multidisciplinary field. To remember we need others. This is because our memories, including the most intimate and personal ones, only acquire meaning when they are shared with an emotional and social community that will contribute to their elaboration. The memories of individuals are not, therefore, able to construct, in retrospect, social frames of reference, but are the tools used by collective memory to recompose an image of the past that is incessantly modified and re-described, orienting the future. A noticeable contradiction emerges here, namely that memory is exercised starting from the present and not from the past. In other words: we only remember what we have reconstructed. Social thought itself is essentially the expression of metamemorial narratives: elaborations, recompositions, shapings, negotiations of memories, in a dialectic between memory and oblivion, among the members of a more or less vast social group. There is no memory without a collective re-interpretation and renegotiation - and here the category of counter-memory becomes relevant.

Starting from the issues related to memory - and above all from a sociological-cultural outlook - the following will be evaluated: contributions of an exclusively theoretical approach; contributions which, starting from empirical research, produce theoretical reflections on the topics of the call; contributions in which memory is an investigative tool for reading social and cultural change - i.e. the use of life stories, narrative interviews and other biographical tools in which memory assumes a prominent importance from a methodological point of view.

The subject matters and topics of the call could be, in a non-exclusive way:
- Old and new narrative forms of memory.
- Post-industrial memory.
- Media and memory.
- Memory, justice and power.
- From post-colonial memories to migrant memories.
- Memory and neuroscience.

Propositions presentation
The abstract (max 500 words) can be written in Italian or in English and sent to the email addresses: [email protected] [email protected] [email protected]
The proposal needs to contain: name, surname, institution of affiliation and academic position of the author; provisional title of the article; indication in the email subject line: “Call: Shaping memories in contemporary narratives”
Times:
http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=107851&copyownerid=163916
4. Why do we remember events?
4.1. Terroristic Attacks Lack Meaning
4.2. A Sudden Loss of Security
4.3. Media Attention
4.4. Embodiment of Memory
5. Conclusion
References

1. Introduction
Worldwide, numerous terrorist attacks have shattered societies. In recent times, especially those generating a sense of the West versus the Muslim world have gained large public attention, such as the attacks on the World Trade Center in New York City in 2001 and the bombings in Madrid in 2004 and London in 2005. Attacks such as these have not only happened on ‘Western ground’; Indonesia, too, has been the victim of several terrorist attacks, mainly targeting sites predominantly visited by Westerners, such as the hotel bombings in Jakarta in 2009 and the Bali bombings in 2002 and 2005. The initiator of these terrorist attacks was the Islamic group Jemaah Islamiah with its spiritual leader Abu Bakar Bashir (West, 2008). This organization strives for the introduction of Shariah law in Muslim nations and perceives the Islamic faith to be oppressed by the increasing influence of Western values in the Islamic world (West, 2008). Jemaah Islamiah thus justifies its attacks as defending the religion of Islam and its values from the perceived threat of Western influence. Large attacks such as these in Indonesia lead to extensive media attention; in particular, the random and high number of foreign victims leads to worldwide attention (Blakesley, 2007; Crenshaw, 2000; Turk, 2004). Therefore, the following definition of terrorism by Gibbs (in Turk, 2004, p. 284) will be used in this paper:
Terrorism is threatening, perhaps illegal, clandestine (avoiding conventional warfare) violence against human or nonhuman objects that is intended to change or maintain some belief, law, institution, or other social "norm" by inculcating fear in persons other than the immediate targets.
Gibbs thus considers an attempt at social control as a possible basis for explanatory theory (Turk, 2004). Johnson (1994, in Crenshaw, 2000, p. 415) states that the loss of order and control leads to an exaggeration of the likelihood of such an attack. Destabilizing society by shattering its moral values is used to put forward a political message. “’Memory is the meaning we attach to experience, not simply recall of events and emotions of that experience' (Stern, 2004 in Barbera, 2009, p. 83); and is thus necessary to make sense of the present; it provides a time and space reference and is therefore also crucial in order to build the future” (Barbera, 2009, p. 76). Halbwachs distinguishes between autobiographical memory, which is the memory of our firsthand experience; historical memory, which is gained through historical records; history, to which we no longer have an organic relation; and collective memory, which contains the active past that constitutes identities (Olick & Robbins, 1998). Community members share their experiences and their memory and thus create a collective memory, which exists in a constant exchange and negotiation between the individual and the collective, each influencing the other (Lambek & Antze, 1996). Memory and its active remembrance offer “… a constructive engagement with a fractured past and a moral judgment ...”, but can also turn into a powerful political tool when controlled (Argenti & Schramm, 2010, p. 19).
Therefore it is necessary to recover memory, to tell and write the truth about an event, and to publicly recognize the value of heroes and the suffering of victims (Misztal, 2004); “[u]nless the memory can be deposited and can be expressed in a dignified way, the unrest continues” (Barbera, 2009, p. 83). Jelin (in Barbera, 2009, p. 83) describes this unrest as “…scars of memory - the scars that are not visible to the naked eye but are always present and do not heal: 'There is no rest [for survivors] because memory has not been 'deposited' anywhere; it remains only in the minds and the hearts of people”. In order to avoid such an unrest of memory, collective memory “… is embodied in regularly repeated practices, commemorations, ceremonies, festivals, rites and narratives" (Misztal, 2004, p. 76). The Bali bombings, for example, are still remembered in various forms: commemoration ceremonies are held each year, monuments have been built, and the media covers stories about the bombings every year. The film Long Road to Heaven - Tragedi Bom Bali Tahun 2002 (Sinaro, 2007) was made, which shows what happened on the night of the bombings. This paper will use the case study of the Bali bombings in order to explore why terrorist attacks are remembered and how they are remembered. While all these spheres are highly intertwined, this paper will first explore in more depth why people remember, and will then focus on the lack of meaning and the loss of security caused by terrorist attacks, the importance of media attention and, finally, the embodiment of traumatic memory.

2. Methodology
The theoretical background information will be gathered through a literature review, drawing especially on journal articles and books. Since terrorist attacks do not usually happen on a regular basis, the case study of the Bali bombings was chosen in order to describe and explain why such an event is remembered and how it is remembered (Schnell, et al., 2013). Yet it should be kept in mind that the illustration and explanation may not be generalisable but may be unique to this particular case (Schnell, et al., 2013). The case of the Bali bombings will be illustrated by exploring various materials available online, such as journal articles, newspaper articles, interviews and books, as well as the movie Long Road to Heaven. These sources offer information on why and how the Bali bombings are commemorated and provide an insight into how collective and individual memory are interconnected.

3. The Case: Bali Bombings of 2002 and 2005
On 12th October 2002, a bomb carried in by a suicide bomber exploded inside Paddy’s Bar at 11:05 p.m., and less than one minute later another, even stronger, explosion occurred in front of the Sari Club (West, 2008). Paddy’s Bar and the Sari Club were situated on Legian Street in Kuta, which is especially popular with young Western tourists, as reflected in the casualties of 164 Western tourists (West, 2008). In total, 209 people were injured that night and 202 people died, among them 88 Australians, 26 British, 25 other Europeans, 7 Americans and 38 Indonesians (West, 2008).
The number of Indonesian casualties was kept down because no Indonesians were allowed inside the Sari Club; thus most of the Indonesian victims were people working at Paddy’s Bar and others who happened to be around those places at that time, such as taxi drivers (West, 2008). These terrorist attacks were conducted by active members of the Islamic group Jemaah Islamiah, who most likely chose this destination “… because of its status as a ‘mecca’ of Western tourism and a perceived place for moral licentiousness” (West, 2008, p. 339); the attack was also timed for a busy period, leading to many casualties: “… the bombing occurred at one of the busiest times of the year in Kuta, when the normal stream of budget tourists from Australia, Europe and Japan, including surfers and other alternative types of travellers in South East Asia …, are joined by Australian sporting teams, largely from Australian football, rugby league and rugby union codes, making their annual end-of-season trip to ‘party hard’” (West, 2008, p. 340). Another attack happened on 1st October 2005 at around 8 p.m. at the beach in Jimbaran and in the bar and shopping area in Kuta, causing more than 20 fatalities and over 100 wounded (Quijano, et al., 2005). Again the timing was well planned, on a weekend that fell in a school holiday in Australia, when many had travelled to Bali for a short vacation (Quijano, et al., 2005). In Jimbaran, two bombs exploded on the beach around 40 meters apart from each other, in an area where many people gather to watch the sunset or eat at the beach restaurants (Quijano, et al., 2005). The bomb in Kuta exploded in a restaurant not far from the site of the 2002 bombings (Ni Komang Erviani, 2012).

4. Why do we remember events?
Collective and individual memory are not independent entities but are dependent on each other and are constantly reshaped and recreated depending on the social context and the beliefs currently predominant in a community (Argenti & Schramm, 2010; Lambek & Antze, 1996). Memory is an essential part of shaping and reshaping identity, since identity depends on how we present ourselves through our past stories, on what we wish to forget or to remember, and on how we remember our past, on the individual as well as on the collective level (Lambek & Antze, 1996). This identity is part of a community’s culture, which can pass on memory in an embodied form such as “regularly repeated practices, commemorations, ceremonies, festivals, rites and narratives” (Misztal, 2004, p. 76); and it is furthermore related to how memory is used in daily practice by the individual and the collective (Lambek & Antze, 1996). Paul Connerton distinguishes between two dimensions of embodiment: on the one hand, he highlights the importance of ritual and ceremonial performances as commemorative acts which allow a community to reassure itself; on the other hand, he refers to 'habitual memory', through which a 'mnemonics of the body' (1989:74) finds its expression (Argenti & Schramm, 2010, pp. 7-8). Bodily practice manages to shrink the distance between the past and the present (Argenti & Schramm, 2010). Commemoration ceremonies and sites of remembrance gained popularity especially in the 1980s and 1990s, which can on the one hand be explained by major historical events coming to an end at this time, such as the Cold War; on the other hand, these memories constitute political culture and collective identities, which are essential elements of democracy (Misztal, 2004).
In today’s societies, with national narratives often in decline due to an increasing number of democracies, people often form smaller ‘memory groups’ that share past memories (Misztal, 2004). Memories of suffering, in particular, seem to play a crucial role in group identity building (Misztal, 2004). This group identity can be based on being affected by a major traumatic event such as the Bali bombings, and can, among other things, be represented by (non-governmental) organizations. In particular, “unexpected and emotionally laden events attract more attention and are better remembered than other more neutral events” (Pennebaker, et al., 1997), which was the case with the Bali bombings, when the bombs shattered the image of Bali, until then known as a peaceful place. While there is usually an official version of memory, counter-memory plays a crucial role in offering an alternative story to the official version, often even contesting official narratives (Olick & Robbins, 1998). "By placing trauma at the heart of counter-memory, what is remembered gains in moral weight as, in order to preserve the moral order, it becomes a duty to remember the past horrors. The duty to remember consists not only 'in having a deep concern for the past but in transmitting the meaning of the past events to the next generation'" (Misztal, 2004, p. 78). Therefore, past traumatic events should be remembered in order to avoid a future trauma of the same kind.

4.1. Terroristic Attacks Lack Meaning
Father, I miss you
Miss the days spent with you
Father, you departed before my lips could call your name
Father, I love you.
Ni Wayan Cantika Wulan Sar (who lost her father in the first Bali bombings at the age of 1)

What are the reasons for a terrorist attack to become an event of commemoration? In order for people to use memory as a reference for time and space, it is necessary to attach meaning to these memories. However, no meaning can be attributed to an event of extreme violence, as it cannot be reconciled with common ethics and values, and thus it cannot turn into discursive memory (Argenti & Schramm, 2010). Holocaust survivors describe their time in concentration camps as a counter-time, a time that hinders “normal progress through ‘ordinary’ time” (Argenti & Schramm, 2010, p. 10). One Holocaust survivor describes that she does not live with Auschwitz, but that it lives with her; this demonstrates how the memory is out of her control, yet at the same time a part of her (Argenti & Schramm, 2010). Derrida uses the analogy of a ‘crypt’ to refer to deep memory as “a place hidden within or beneath another place, a place complete unto itself, but closed off from that outside itself of which it is nevertheless an inherent part”; in other words, by trying to silence traumatic memory one does not manage to simply forget the event but instead stores it deep inside oneself (Argenti & Schramm, 2010, pp. 11-2). ‘Normal’ memory is generally subject to introjection; traumatic memory, however, may stay incorporated and fail to achieve a process of introjection (Argenti & Schramm, 2010). In the case of the death of a close family member, the dead can “become the living-dead inside oneself” (Argenti & Schramm, 2010, p. 12). The book Remembering Josh: Bali, a Father’s Story (Deegan, 2004) demonstrates how a violent experience stays with a person, as in this case the father, who cannot deal with his son’s loss: "I've not slept for 70 hours or more, walking, watching, waiting, praying for the end of this nightmare from which, at some stage, I must awake.
But the reality is beginning to set in and I know only too well that at least in this life I shall never speak openly with my son. Never again shall I laugh with him, drink with him, discuss his future or watch him take to the field.' So begins a father's descent into hell. On 12 October 2002, Brian Deegan's son Joshua was killed in the terrorist explosion that ripped apart the Sari Club in Bali's Kuta Beach. Through grief and anger, this father has gone on a journey no parent should have to take. He has confronted the ghosts of his son's death, challenged the government's version of the truth and fought for the answers nobody wanted to give ..." (Deegan, 2004) This example shows how the memory of a close family member can lead to incorporated memory, especially when Deegan describes his inner unrest shortly after the event. Incorporated memory can ‘haunt’ a person (Argenti & Schramm, 2010), as “[h]e [the father] has confronted the ghosts of his son's death.” Furthermore, there is a lack of meaning to the event and to why his son had to die that night; Deegan later refuses to accept the official narrative of what happened that night and instead looks for, and tries to create, a counter-memory of the bombings. Various forms of commemorating violent events can offer a platform for processing the recurring experience and turning it into discursive memory. The memory of the Bali bombings still seems incomprehensible and somehow unfinished in the minds of the people affected by it. “Takako Suzuki, who lost her son, Keo Kosuke Suzuki, and her daughter-in-law, Yuka Suzuki, said she still feels sad and angry” (Nurhayati, 2010). Ni Nyoman Rencin likewise said, with tears in her eyes: “I have to keep my strength to live my life. Sometimes I feel that I’m strong enough to live without my husband, but my heart cannot lie” (Ni Komang Erviani, 2010). Both examples show how the two women are still affected by their losses, how the memory of their loved ones still haunts them and influences their current lives. While these two women use a space of commemoration in Bali, others have decided to write down their stories and make them accessible to the public.
https://www.grin.com/document/280167
In the current media ecology, audiences are constantly tempted by many types of content scattered across connected platforms. Since the consumption of cultural goods is a practice that now takes place in a constant flow across different platforms, news and documentary narratives must take advantage of the malleability of digital language to engage citizens. Narratives change according to the dominant intellectual technology of the time. In this way, oral narratives differ from printed media and from the transmedia storytelling that digital communication promotes. DocuMedia: Social Media Journalism is a series of interactive documentaries developed in Argentina at Rosario National University to bring users new narratives of local interest around journalistic research topics. DocuMedia is the result of crossing documentary, investigative journalism, and data journalism techniques with a focus on users’ participation and the expansion of narrative plots. DocuMedia projects are an example of location-based storytelling, that is, a narrative that stems from hyperlocal space and place and operates as a device of constant social reconstruction. In these experiences, memory is understood as the meanings that citizens share and, above all, develop as a social practice, through which identity is expressed and shaped. The fifth DocuMedia project, Women for Sale: Human Trafficking with Sexual Exploitation in Argentina, was launched in 2015 and took on the challenge of making the leap from multimedia journalism to transmedia journalism. The transmedia framework for Women for Sale included a webdoc, or interactive multimedia documentary; a serial graphic novel of five episodes (print and digital versions); posters on the street with augmented reality interaction; short videos projected on indoor and outdoor LED screens; television spots; a collaborative map; a television documentary; mobisodes; the e-book What Happens Next? Contributions and Challenges for the Reconstruction of Rights of Trafficking and Sexual Exploitation Victims; and a social media strategy designed to share information about trafficking in Argentina and to call the community to action.
Article by Silvana Comba, Edgardo Toledo, Anahí Lovato, and Fernando Irigaray

Article by Alvaro Liuzzi and Tomás Bergero Trpin
The Malvinas War, also known in Spanish as the South Atlantic Conflict (conflicto del Atlántico Sur), was a war between Argentina and the United Kingdom that took place in the Malvinas Islands, South Georgia, and the South Sandwich Islands between April 2 and June 14, 1982. During 2012, thirty years after the conflict, the Malvinas/30 web documentary was produced in Argentina, conceived as a transmedia production in real time. It was designed to serve as a space of collective digital memory that would involve users and recreate on social networks the hostile atmosphere of the South Atlantic Islands at the time of the conflict. The documentary, produced by an interdisciplinary team, was developed as a continuous interactive production over five months which, by extending its narrative through different digital platforms, sought to allow users to relive the events of the Malvinas War as they had occurred three decades before, in 1982.
To meet this goal, Malvinas/30 was organized along three central axes: narrative synchronization between past and present (telling the story as if it were happening today); unfolding the story on different media (social networks, traditional media, and other media); and generating interactive responses from users (a collective story as a space for historical memory).
https://oxfordre.com/latinamericanhistory/search?btog=chap&f_0=keyword&q_0=interactive%20documentary
Originally from Newfoundland, Glenn Gear lives and works in Montreal. A graduate of Memorial University of Newfoundland, he is currently completing a Master's degree in sculpture at the University. "It is my hope that the collections of objects and images will reflect my personal feelings about the (queer) narratives that are often woven into the land, and also articulate broader social and cultural issues of territory and collective memory." - G.G. Through his artistic production, Glenn Gear seeks to develop various strategies in order to inscribe his subjectivity within a reflection on the notions of landscape, territory and land. The concept of memory as a representation and cultural object, in relation to the idea of territory, creates for the artist a work space conducive to the elaboration of new narratives. Coming from a specific story and place, these narratives, intimate and playful in character, circulate inside objects that appeal to desire, memory and imagination. This circulation also proposes a reflection on certain collective questions relating to sexual identity and culture. Stepping into the exhibition leads us to evolve alongside objects and images that co-exist as strange winks to territory and identity, charged with both humour and nostalgia. Some of the objects are sections of landscape constructed in the manner of scale models; others take on the appearance of travel souvenirs, finally forming a set of relationships that leads us to rethink our relationship not only to the land, but also to history and to maritime imagery as having homoerotic qualities. The representation of the land of origin, marked by a romantic approach to the territory and subverted by plays of scale and by childish, decorative or tasteless aesthetics, allows the artist to materialize his interest in the blending of reality and fantasy, of lived bodily experience and imagination.
http://galerieb312.ca/en/programmation/glenn-gear-quiet-nook-un-coin-tranquille
Historical fantasy - a genre that blends historical reality with elements impossible in their historical periods, such as magic or preposterously advanced technology - affords us new ways of understanding the processes behind the constant remediation of cultural memory by accepting a narrative logic that overtly rejects the paradigm of historical verisimilitude. In doing so, it allows for an imaginative engagement with the past that is open to radical transformation. Such profound alterations of historical events can also serve to interrogate the grand narratives often associated with them by revealing different, perhaps disturbing potentialities - what could have preferably happened and what has thankfully not.
https://arsdocendi.ro/t_carte.php?id_carte=179
As a result, this version of our reality has been evolving through an extreme imbalance between consciousness-regressive and consciousness-progressive consciousness, with regressive consciousness being the dominant creational mode. Therein lies the ‘why’ behind the one (or more) ‘subsequent’ consciousness-regressive cycles of evolution through which our reality has unfolded. Both progressive-dominant and regressive-dominant evolution cycles are “venues” through which consciousness, via its myriad expressions, seeks to balance those [two] fundamental aspects of itself. When the collective consciousness and creational energies of a [physical] reality have become either progressive or regressive to the “point of no return”, so to speak, during the course of one or more evolution cycles, that reality, as it was/is, “ceases to exist” when the overarching cycle of "the cycles within the cycle" comes to an end. Either a new version or versions of that reality [begin] unfolding, or its "remnant" energies are transmuted [back] into unexpressed consciousness for unfoldment into new/different expressions of consciousness. Whether the consciousness/energies of an individual being within that reality are transmuted or proceed to a higher-level, equivalent or lower-level density/dimension/reality as another version of Core Self depends upon a combination of many factors. Two of those factors are the individual’s levels of potential (consciousness "units") and vital energy, and to what extent the individual has "progressed" (whether through consciousness-progressive or consciousness-regressive consciousness) during the course of that evolution cycle. Ultimately, Core-Self/Whole of Consciousness has the final “say” in the matter. Most, if not all, of the higher-density beings at the pinnacle of the pyramid of control within this reality at this time are very much “in the flow” of their consciousness-regressive roles. As such, they have created, become masters of and constantly work to refine and upgrade myriad tools of manipulation and control designed expressly for the purpose of steering this reality’s “occupants” toward consciousness-regressive creation via their thoughts, emotions and behaviors. The following 4 mechanisms are key prongs in the ruling class’s far-reaching system of manipulation. These disorders handily seduce then ensnare us in egoic traps of consciousness-regressive thought, emotional and behavioral patterns, ensuring our vulnerability to virtually all types of manipulation. As is the case with all tools of manipulation, self-importance is an ego-centric, thus fear-driven disorder. Those seeking to remove all traces of this disorder from the mind-body complex must come to understand that self-importance has two faces. Its easily recognizable face comprises feelings and attitudes of arrogance, superiority and conceit, in hand with an excessively high regard for one’s significance, circumstances and/or station in life. Its less familiar face comprises feelings and attitudes of meekness, inferiority and servility, in hand with an excessively low regard for oneself, circumstances and/or station in life. Those who believe others are more important or valuable than themselves are also operating (and thus creating) through the consciousness-regressive self-importance disorder. 
From the moment we come into this reality, we are tended to by parents/family, caregivers, teachers and other authority figures in ways (due to their own conditioning) that overly develop ego and lead us to become egoic self-identified. Because we are not told and do not remember what we really are, we start taking on distortions (false beliefs) about ourselves that we are somehow either better than others, not as good as others, or some combination of the two. These distortions promote and ultimately instill within us at a deep subconscious level the false belief that ego is, and is meant to be, the guiding force in our lives. When we believe we are either superior or inferior to another/others from one moment to the next, we are operating through the self-importance disorder. Our lives become an ongoing stream of situations, circumstances and interactions with others that trigger within us either “better than” or “less than” thoughts and feelings about ourselves and/or others. These consciousness regressive-imbalanced thoughts and feelings lead to corresponding regressive actions and behaviors. Until we fully remove the self-importance distortion, we continue to create, at least to some extent, through regressive consciousness. A multitude of manipulative techniques are used by the consciousness-regressive ruling class to engender confusion and misunderstandings around the differences between self-importance and real self-love and self-respect. Real self-love is the complete and utter acceptance of yourself just as you are in this moment. Self-acceptance does not mean you don’t see or acknowledge that you might need to make certain thought, feeling or behavioral adjustments as you proceed on your journey of awakening. It means that despite your awareness of the need to make certain adjustments or corrections, you’re also aware of, understand and respect your value as an expression of consciousness, simply because you are an expression of consciousness. You embody the knowledge that you are in no way less (or more) valuable than any other expression—regardless of the progress you have made or have yet to make during the course of your journey. Once the self-importance disorder is firmly-established (usually between the ages of 5 and 9) and the individual routinely functions through fear-based, ego-directed patterns of thought, emotion and behavior, he/she becomes by default highly-susceptible to all subsequent ego-centric psychologically- and emotionally-manipulative techniques used by the ruling class to further seduce and more-deeply entrench him/her in consciousness-regressive patterns of creation. The pleasure principle technique is a cornerstone of the mind control system. Massive amounts of highly-seductive propaganda have convinced a staggering number of the population that the bottom-line reason for existence is to fulfill as many [egoic] desires as possible during this lifetime. As long as they're getting what they need or want, these individuals are content with little or no [real] compassion for the suffering of others. Consciousness, by way of its expressions, persistently coaxes in gentle ways those versions of Core-Self intended for [predominantly] consciousness-progressive creation within this reality at this time. 
But when the wake-up calls are resisted, ignored or not recognized as being such, those who have been nudged for an extended period of time can suddenly find themselves neck deep in a life-altering event or series of events that cause tremendous, seemingly insurmountable personal suffering. Truth is truth—regardless of how it "looks" or makes us feel. This disorder that leads individuals to determine the validity of information based upon its “attractiveness" or "ugliness," or how it makes them feel is extremely pervasive because it has been instilled by way of techniques that play into and prey upon [egoic] emotional preferences. This stimulus-response entrainment of society—achieved primarily via mainstream media and "education" systems of Western culture—is a highly-effective means of helping the ruling class fulfill their agenda to regress the consciousness of all within this reality at this time, especially those whose purpose is to bring balance by way of consciousness-progressive creation. Recipients of stimulus-response entrainment are programmed to have predetermined emotional reactions to a wide range of carefully-selected trigger words, phrases, imagery, situations and events. Once an individual is operating from within his/her particular level of psychological "comfort" based in what they feel is true in the world (their perceptions of reality), they are highly averse to hearing anything that jolts them out of those "comfort zones"—even when inner knowing is “telling” them it is the truth. They have been so thoroughly conditioned to accept and function through the "perception is reality" distortion that this state of being is the only "space" in which they feel comfortable. Certain neural patterns that evoke specific [corresponding] emotional patterns have become hardwired in their brains. When confronted with information that elicits feelings which conflict with the hardwired emotional dynamics of their conditioning, these individuals experience a further mind-warping psychological and emotional disorder. The resistance-to-truth form of emotional manipulation gives rise to a psychological condition exhibiting characteristics of apathy, laziness and willfully ignoring reality. Another even more-harmful psychological disorder serves as the glue that essentially holds in place those three self-destructive psychological dynamics, leading in turn to more and more suffering. This disorder is known as cognitive dissonance. Cognitive dissonance causes extreme mental distress and discomfort as a result of lying to oneself, even in the face of abundant evidence that directly contradicts one's currently held beliefs, ideologies or concepts. Mental distress is experienced because the individual is attempting to function through two or more contradictory beliefs, ideologies or concepts simultaneously. Cognitive dissonance is seeing with your own eyes in the world around you the resultant consciousness-regressive consequences of certain actions and behaviors, then deliberately choosing not to acknowledge or accept what you see. Many with this disorder readily admit that even if they were to learn that the truth behind a major world event was in complete opposition to what had been presented to the public, they would simply choose to ignore that truth because it wouldn't be reflective of the kind of world in which they want to live. Cognitive dissonance is the intentional, deliberate ignoring of reality. 
It's an essential component in effective mind control because it provides a foundational fixative to which all forms of mind control can easily attach themselves and continually self-disseminate. In other words, cognitive dissonance enables the "stickiness" and ensuing virus-like spread of all methods of mind control within not only the individual, but throughout society. Cognitive dissonance is a serious psychological disorder. Those who deliberately ignore that which is are highly susceptible to believing anything and therefore, to other forms of mind control. Those who are afflicted become self-appointed arbiters of truth, granting themselves the false "right" to determine what reality is, based solely on their [egoic] likes or dislikes and what they're most comfortable accepting as truth. Cognitive dissonance and solipsism—the ideology which asserts the false notions (distortions) that there is no way to actually know the truth and that perception is reality—are the "dynamic duo" of the ruling class’s mind control methodologies. When both are in place within the mind-body complex, the ruling class’s consciousness-regressive agendas are advanced for them by these individuals. Cognitive dissonance is a form of escapism that does not work. It is a completely fear-driven and fear-sustained disorder that shuts down healthy mind-body complex functioning in those who are afflicted. When it is not identified and cleared from the mind-body complex, this disorder severely limits access to inner knowing and can completely block the inflow of the higher-order knowledge and will of Core Self. Afflicted individuals become deeply mired in consciousness-regressive patterns of thought, feeling and behavior, even when this is in complete opposition to Core-Self desire for this version of itself. The result? These individuals are miserable at a very deep, subconscious level causing them to suffer immensely. Even those who know they are on a conscious journey of awakening and have no conscious desire to do so, bring their misery and suffering into the world through their [subconscious] thought, feeling and behavioral energy emanations/creations. Ultimately, they simply fall into the flow of the currently dominant consciousness-regressive state of consciousness and creation in this reality, even when that state isn’t aligned with the individual’s purpose for being here at this time.
https://www.restoration-activationproject.com/awakening-process-articles/4-disorders-that-keep-us-trapped-in-consciousness-regressive-patterns-of-creation
Abstract: Video games have reached a point of huge commercial success as well as wide familiarity with audiences both young and old. Much attention and research have also been directed towards serious games and their potential learning affordances. It is little surprise that the field of virtual heritage has taken a keen interest in using serious games to present cultural heritage information to users, with applications ranging from museums and cultural heritage institutions, to academia and research, to schools and education. Many researchers have already documented their efforts to develop and distribute virtual heritage serious games. Although attempts have been made to create classifications of the different types of virtual heritage games (somewhat akin to the idea of game genres), no formal taxonomy has yet been produced to define the different types of cultural heritage and historical information that can be presented through these games at a content level, and how that information can be manifested within the game. This study proposes such a taxonomy. First, the informational content is categorized as heritage or historical, then further divided into tangible, intangible, natural, and analytical. Next, the characteristics of the manifestation within the game are covered. The means of manifestation, level of demonstration, tone, and focus are all defined and explained. Finally, the potential learning outcomes of the content are discussed. A demonstration of the taxonomy is then given by describing the informational content and corresponding manifestations within several examples of virtual heritage serious games as well as commercial games. It is anticipated that this taxonomy will help designers of virtual heritage serious games to think about and clearly define the information they are presenting through their games, and how they are presenting it. Another result of the taxonomy is that it will enable us to frame cultural heritage and historical information presented in commercial games with a critical lens, especially where there may not be explicit learning objectives. Finally, the results will also enable us to identify shared informational content and learning objectives between any virtual heritage serious and/or commercial games. Keywords: informational content, serious games, taxonomy, virtual heritage

A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform
Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee
Abstract: This research presents a multi-modal simulation in the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. To represent present-day life, the virtual environment is generated through a presented scheme for rapid and efficient construction of a 360° panoramic view. Then, an acoustical heritage model and a crowd model are presented and improvised into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. However, the keystone of this research is a virtual walkthrough that shows the virtual present life in 2D and the virtual past life in 3D, both in an environment of virtual heritage sites in George Town, through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on the mobile platform.
The 2D crowd is used to portray the present life in 360° panoramic view of a virtual heritage environment based on the extension of Newtonian Laws. Secondly, the 2D crowd is animated and rendered into 3D with improved variety and incorporated into the virtual past life using Unity3D Game Engine. The behaviours of the 3D models are then simulated based on the enhancement of the classical model of Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques and algorithms of this research. The virtual walkthrough is demonstrated to a group of respondents and is evaluated through the user-centred evaluation by navigating around the demonstration system. The results of the evaluation based on the questionnaires have shown that the presented virtual walkthrough has been successfully deployed through a multi-modal simulation and such a virtual walkthrough would be particularly useful in a virtual tour and virtual museum applications. Keywords: Boid Algorithm, Crowd Simulation, Mobile Platform, Newtonian Laws, Virtual HeritageProcedia PDF Downloads 185 1491 Virtual and Augmented Reality Based Heritage Gamification: Basilica of Smyrna in Turkey Authors: Tugba Saricaoglu Abstract:This study argues about the potential representation and interpretation of Basilica of Smyrna through gamification. Representation can be defined as a key which plays a role as a converter in order to provide interpretation of something according to the person who perceives. Representation of cultural heritage is a hypothetical and factual approach in terms of its sustainable conservation. Today, both site interpreters and public of cultural heritage have varying perspectives due to their different demographic, social, and even cultural backgrounds. Additionally, gamification application offers diversion of methods suchlike video games to improve user perspective of non-game platforms, contexts, and issues. Hence, cultural heritage and video game decided to be analyzed. Moreover, there are basically different ways of representation of cultural heritage such as digital, physical, and virtual methods in terms of conservation. Virtual reality (VR) and augmented reality (AR) technologies are two of the contemporary digital methods of heritage conservation. In this study, 3D documented ruins of the Basilica will be presented in the virtual and augmented reality based technology as a theoretical gamification sample. Also, this paper will focus on two sub-topics: First, evaluation of the video-game platforms applied to cultural heritage sites, and second, potentials of cultural heritage to be represented in video game platforms. The former will cover the analysis of some case(s) with regard to the concepts and representational aspects of cultural heritage. The latter will include the investigation of cultural heritage sites which carry such a potential and their sustainable conversation. Consequently, after mutual collection of information from cultural heritage and video game platforms, a perspective will be provided in terms of interpretation of representation of cultural heritage by sampling that on Basilica of Smyrna by using VR and AR based technologies. 
Keywords: Basilica of Smyrna, cultural heritage, digital heritage, gamificationProcedia PDF Downloads 370 1490 Potentials for Learning History through Role-Playing in Virtual Reality: An Exploratory Study on Role-Playing on a Virtual Heritage Site Authors: Danzhao Cheng, Eugene Ch'ng Abstract:Virtual Reality technologies can reconstruct cultural heritage objects and sites to a level of realism. Concentrating mostly on documenting authentic data and accurate representations of tangible contents, current virtual heritage is limited to accumulating visually presented objects. Such constructions, however, are fragmentary and may not convey the inherent significance of heritage in a meaningful way. In order to contextualise fragmentary historical contents where history can be told, a strategy is to create a guided narrative via role-playing. Such an approach can strengthen the logical connections of cultural elements and facilitate creative synthesis within the virtual world. This project successfully reconstructed the Ningbo Sanjiangkou VR site in Yuan Dynasty combining VR technology and role-play game approach. The results with 80 pairs of participants suggest that VR role-playing can be beneficial in a number of ways. Firstly, it creates thematic interactivity which encourages users to explore the virtual heritage in a more entertaining way with task-oriented goals. Secondly, the experience becomes highly engaging since users can interpret a historical context through the perspective of specific roles that exist in past societies. Thirdly, personalisation allows open-ended sequences of the expedition, reinforcing user’s acquisition of procedural knowledge relative to the cultural domain. To sum up, role-playing in VR poses great potential for experiential learning as it allows users to interpret a historical context in a more entertaining way. Keywords: experiential learning, maritime silk road, role-playing, virtual heritage, virtual realityProcedia PDF Downloads 58 1489 Modelling Medieval Vaults: Digital Simulation of the North Transept Vault of St Mary, Nantwich, England Authors: N. Webb, A. Buchanan Abstract:Digital and virtual heritage is often associated with the recreation of lost artefacts and architecture; however, we can also investigate works that were not completed, using digital tools and techniques. Here we explore physical evidence of a fourteenth-century Gothic vault located in the north transept of St Mary’s church in Nantwich, Cheshire, using existing springer stones that are built into the walls as a starting point. Digital surveying tools are used to document the architecture, followed by an analysis process to hypothesise and simulate possible design solutions, had the vault been completed. A number of options, both two-dimensionally and three-dimensionally, are discussed based on comparison with examples of other contemporary vaults, thus adding another specimen to the corpus of vault designs. Dissemination methods such as digital models and 3D prints are also explored as possible resources for demonstrating what the finished vault might have looked like for heritage interpretation and other purposes. Keywords: digital simulation, heritage interpretation, medieval vaults, virtual heritage, 3d scanningProcedia PDF Downloads 210 1488 Complex Technology of Virtual Reconstruction: The Case of Kazan Imperial University of XIX-Early XX Centuries Authors: L. K. Karimova, K. I. Shariukova, A. A. Kirpichnikova, E. A. 
Razuvalova Abstract: This article deals with the technology of virtual reconstruction of Kazan Imperial University of the XIX - early XX centuries. The paper describes technologies for 3D visualization of high-resolution models of objects in the university space, the creation of a multi-agent system and of an organized database of historical sources connected with these objects, and variants for using technologies of immersion into the virtual environment. Keywords: 3D-reconstruction, multi-agent system, database, university space, virtual reconstruction, virtual heritage

Absent Theaters: A Virtual Reconstruction from Memories
Authors: P. Castillo Muñoz, A. Lara Ramírez
Abstract: Absent Theaters is a project that virtually reconstructs three theaters that existed in the twentieth century and were demolished in the city of Medellin, Colombia: Circo España, Bolívar, and Junín. Virtual reconstruction is used as a point of departure for talking with those who, in their childhood and youth, lived in the cultural spaces that formed a whole generation. Around 100 people who witnessed these theaters were interviewed. The oral history work was carried out by presenting the virtual reconstruction of the interiors of the theaters to the interviewees through Virtual Reality glasses. The voices of people between 60 and 103 years old were used to transmit knowledge to the new generations about the importance of theaters as essential places for the city, as spaces generating social relations and knowledge of other cultures. Oral stories about events and the historical and social context of the city were mixed with archive images and animations of the architectural transformations of these places, with the purpose of compiling a collective discourse around cultural activities, heritage, and memory of Medellin. Keywords: culture, heritage, oral history, theaters, virtual reality

Managing Virtual Teams in a Pandemic
Authors: M. Jafari Toosy, A. Zamani
Abstract: Considering the consequences of the pandemic at the international level, with activities and projects performed virtually and a resulting need for resource management and virtual teams in this period, this article identifies the components of virtual management after searching the available resources. Virtual management in the pandemic era is explored in 10 international articles. The results of research with this method, according to the tasks and topics related to management knowledge and the definition of virtual teams, can be divided into topics such as planning, decision making, control, organization, leadership, attention to growth and capability, resources and facilities, communication, creativity, innovation and security. In order to explain the nature of virtual management, a definition of virtual management was provided. Keywords: management, virtual, virtual team management, pandemic, team

Linguistic Attitudes and Language Learning Needs of Heritage Language Learners of Spanish in the United States
Authors: Sheryl Bernardo-Hinesley
Abstract: Heritage language learners are students who have been raised in a home where a minority language is spoken, who speak or merely understand the minority heritage language, but are to some degree bilingual in the majority and the heritage language.
In view of the rising university enrollment by Hispanics in the United States who have chosen to study Spanish, university language programs are currently faced with challenges of accommodating the language needs of heritage language learners of Spanish. The present study investigates the heritage language perception and language attitudes by heritage language learners of Spanish, as well as their classroom language learning experiences and needs. In order to carry out the study, a qualitative survey was used to gather data from university students. Analysis of students' responses indicates that heritage learners are motivated to learn the heritage language. In relation to the aspects of focus of a language course for heritage learners, results show that the aspects of interest are accent marks and spelling, grammatical accuracy, vocabulary, writing, reading, and culture. Keywords: heritage language learners, language acquisition, linguistic attitudes, Spanish in the USProcedia PDF Downloads 85 1484 Heritage Tree Expert Assessment and Classification: Malaysian Perspective Authors: B.-Y.-S. Lau, Y.-C.-T. Jonathan, M.-S. Alias Abstract:Heritage trees are natural large, individual trees with exceptionally value due to association with age or event or distinguished people. In Malaysia, there is an abundance of tropical heritage trees throughout the country. It is essential to set up a repository of heritage trees to prevent valuable trees from being cut down. In this cross domain study, a web-based online expert system namely the Heritage Tree Expert Assessment and Classification (HTEAC) is developed and deployed for public to nominate potential heritage trees. Based on the nomination, tree care experts or arborists would evaluate and verify the nominated trees as heritage trees. The expert system automatically rates the approved heritage trees according to pre-defined grades via Delphi technique. Features and usability test of the expert system are presented. Preliminary result is promising for the system to be used as a full scale public system. Keywords: arboriculture, Delphi, expert system, heritage tree, urban forestryProcedia PDF Downloads 202 1483 Study and Conservation of Cultural and Natural Heritages with the Use of Laser Scanner and Processing System for 3D Modeling Spatial Data Authors: Julia Desiree Velastegui Caceres, Luis Alejandro Velastegui Caceres, Oswaldo Padilla, Eduardo Kirby, Francisco Guerrero, Theofilos Toulkeridis Abstract:It is fundamental to conserve sites of natural and cultural heritage with any available technique or existing methodology of preservation in order to sustain them for the following generations. We propose a further skill to protect the actual view of such sites, in which with high technology instrumentation we are able to digitally preserve natural and cultural heritages applied in Ecuador. In this project the use of laser technology is presented for three-dimensional models, with high accuracy in a relatively short period of time. In Ecuador so far, there are not any records on the use and processing of data obtained by this new technological trend. The importance of the project is the description of the methodology of the laser scanner system using the Faro Laser Scanner Focus 3D 120, the method for 3D modeling of geospatial data and the development of virtual environments in the areas of Cultural and Natural Heritage. 
In order to inform users this trend in technology in which three-dimensional models are generated, the use of such tools has been developed to be able to be displayed in all kinds of digitally formats. The results of the obtained 3D models allows to demonstrate that this technology is extremely useful in these areas, but also indicating that each data campaign needs an individual slightly different proceeding starting with the data capture and processing to obtain finally the chosen virtual environments. Keywords: laser scanner system, 3D model, cultural heritage, natural heritageProcedia PDF Downloads 183 1482 Community Development and Preservation of Heritage in Igbo Area of Nigeria Authors: Elochukwu A. Nwankwo, Matthias U. Agboeze Abstract:Many heritage sites abound in the shores of Nigeria with enormous tourism potentials. Heritage sites do not only depict the cultural and historical transmutation of people but also functions in the image design and promotion of a locality. This reveals the unique role of heritage sites to structural development of an area. Heritage sites have of recent been a victim of degradation and social abuse arising from seasonal ignorance; hence minimizing its potentials to the socio-economic development of an area. This paper is emphasizing on the adoption of community development approaches in heritage preservation in Igbo area. Its modalities, applications, challenges and prospect were discussed. Such understanding will serve as a catalyst in aiding general restoration and preservation of heritage sites in Nigeria and other African states. Keywords: heritage resources, community development, preservation, sustainable development, approachesProcedia PDF Downloads 224 1481 Fort Conger: A Virtual Museum and Virtual Interactive World for Exploring Science in the 19th Century Authors: Richard Levy, Peter Dawson Abstract:Ft. Conger, located in the Canadian Arctic was one of the most remote 19th-century scientific stations. Established in 1881 on Ellesmere Island, a wood framed structure established a permanent base from which to conduct scientific research. Under the charge of Lt. Greely, Ft. Conger was one of 14 expeditions conducted during the First International Polar Year (FIPY). Our research project “From Science to Survival: Using Virtual Exhibits to Communicate the Significance of Polar Heritage Sites in the Canadian Arctic” focused on the creation of a virtual museum website dedicated to one of the most important polar heritage site in the Canadian Arctic. This website was developed under a grant from Virtual Museum of Canada and enables visitors to explore the fort’s site from 1875 to the present, http://fortconger.org. Heritage sites are often viewed as static places. A goal of this project was to present the change that occurred over time as each new group of explorers adapted the site to their needs. The site was first visited by British explorer George Nares in 1875 – 76. Only later did the United States government select this site for the Lady Franklin Bay Expedition (1881-84) with research to be conducted under the FIPY (1882 – 83). Still later Robert Peary and Matthew Henson attempted to reach the North Pole from Ft. Conger in 1899, 1905 and 1908. A central focus of this research is on the virtual reconstruction of the Ft. Conger. In the summer of 2010, a Zoller+Fröhlich Imager 5006i and Minolta Vivid 910 laser scanner were used to scan terrain and artifacts. 
Once the scanning was completed, the point clouds were registered and edited to form the basis of a virtual reconstruction. A goal of this project has been to allow visitors to step back in time and explore the interior of these buildings with all of its artifacts. Links to text, historic documents, animations, panorama images, computer games and virtual labs provide explanations of how science was conducted during the 19th century. A major feature of this virtual world is the timeline. Visitors to the website can begin to explore the site when George Nares, in his ship the HMS Discovery, appeared in the harbor in 1875. With the emergence of Lt Greely’s expedition in 1881, we can track the progress made in establishing a scientific outpost. Still later in 1901, with Peary’s presence, the site is transformed again, with the huts having been built from materials salvaged from Greely’s main building. Still later in 2010, we can visit the site during its present state of deterioration and learn about the laser scanning technology which was used to document the site. The Science and Survival at Fort Conger project represents one of the first attempts to use virtual worlds to communicate the historical and scientific significance of polar heritage sites where opportunities for first-hand visitor experiences are not possible because of remote location. Keywords: 3D imaging, multimedia, virtual reality, arcticProcedia PDF Downloads 316 1480 The World Heritage List: A Big Data Spatial Econometrics Approach to Sites Promoting the Brand Authors: David Wuepper, Marc Patry Abstract:UNESCO’s World Heritage program requests the inscribed locations to promote the World Heritage brand by clearly presenting information about it on-site. Based on feedback from over 319,000 visitors at 791 locations, we create an index that shows how much the World Heritage sites actually brand themselves as such. We find great heterogeneity throughout the list and explain this econometrically mostly with the economic benefit for the sites but also with cultural brand preferences, which are highest in Asia, followed by Europe and North America. We also find a positive relationship between World Heritage branding and conservation status and a U-shaped relationship between visitor numbers and WH branding. Based on our findings, we recommend to make clear World Heritage branding mandatory for all sites. Keywords: UNESCO World Heritage, collective brand, cultural tourism, heritage conservation, brand equity, spatial econometricsProcedia PDF Downloads 433 1479 The Antecedents of Continued Usage on Social-Oriented Virtual Communities Based on Automaticity Mechanism Authors: Hsiu-Hua Cheng Abstract:In recent years, the number of social-oriented virtual communities users has increased significantly. Corporate investment in advertising on social-oriented virtual communities increases quickly. With the gigantic commercial value of the digital market, competitions between virtual communities are keen. In this context, how to retain existing customers to continue using social-oriented virtual communities is an urgent issue for virtual community managers. This study employs the perspective of automaticity mechanism and combines the social embeddedness theory with the literature of involvement and habit in order to explore antecedents of users’ continuous usage on social-oriented virtual communities. The results can be a reference for scholars and managers of social-oriented virtual communities. 
Keywords: continued usage, habit, social embeddedness, involvement, virtual communityProcedia PDF Downloads 316 1478 Cultural Heritage Management and Tourism in Kosovo Authors: Valon Shkodra Abstract:In our paper, we will give an overview of the cultural heritage and tourism in Kosovo. Kosovo has a history, culture, tradition and architecture that are different from those of other countries in the region, and each country has its own characteristics and peculiarities. In this paper, we will mainly present the situation of cultural heritage and its interpretation. The research is based on fieldwork and the aim of the research is to live the situation of cultural heritage and tourism. The reason why we chose this topic is that cultural heritage and tourism are now the most important industry developing many countries in the world. Besides the benefits that tourism brings, it also has an impact on the preservation, protection and promotion of culture in general. Kosovo, with its cultural diversity and very good geographical location, is also very well suited to develop these two areas as a bridge to each other. The cultural heritage holds traces from the earliest eras and shows a diversity of different civilizations that have just begun to be explored and presented. Keywords: cultural heritage, economy, tourism, development, institutions, protectionProcedia PDF Downloads 68 1477 Blending Values for Historic Neighborhood Upliftment: Case of Heritage Hotel in Ahmedabad Authors: Vasudha Saraogi Abstract:Heritage hotels are architectural marvels and embody a number of values of heritage discourses within them. The adaptive re-use of old structures to make them commercially viable as heritage hotels, not only boosts tourism and the local economy but also brings in development for the neighborhood in which it is located. This paper seeks to study the value created by heritage hotels in general and French Haveli (Ahmedabad) in particular using the single case study methodology. The paper draws upon the concept of the Italian model of Albergo Diffuso and its implementation via French Haveli, for value creation and development in Dhal Ni Pol (a historic neighborhood) while recognizing the importance of stakeholders to the process of the historic neighborhood upliftment. Keywords: heritage discourses, historic neighborhoods, heritage hotel, Old City AhmedabadProcedia PDF Downloads 65 1476 Arts and Cultural Heritage Digitalization in Nigeria: Problems and Prospects Authors: Okechukwu Uzoma Nkwocha, Edward Uche Omeire Abstract:Information and communication technologies (ICT) undeniably, have expanded the sphere of arts and creativity. It proves to be an important tool for production, preservation, sharing and utilization of arts and cultural heritage. While art and heritage institutions around the globe are increasingly utilizing ICT for the promotion and sharing of their collections, the story seems different in most part of Africa. In this paper, we will examine the prospects and problems of utilizing ICT in promotion, preservation and sharing of arts and cultural heritage. Keywords: arts, cultural heritage, digitalization, ICTProcedia PDF Downloads 63 1475 Management of Cultural Heritage: Bologna Gates Authors: Alfonso Ippolito, Cristiana Bartolomei Abstract:A growing demand is felt today for realistic 3D models enabling the cognition and popularization of historical-artistic heritage. 
Evaluation and preservation of Cultural Heritage is inextricably connected with the innovative processes of gaining, managing, and using knowledge. The development and perfecting of techniques for acquiring and elaborating photorealistic 3D models, made them pivotal elements for popularizing information of objects on the scale of architectonic structures. Keywords: cultural heritage, databases, non-contact survey, 2D-3D modelsProcedia PDF Downloads 228 1474 Traditional Management Systems and the Conservation of Cultural and Natural Heritage: Multiple Case Studies in Zimbabwe Authors: Nyasha Agnes Gurira, Petronella Katekwe Abstract:Traditional management systems (TMS) are a vital source of knowledge for conserving cultural and natural heritage. TMS’s are renowned for their ability to preserve both tangible and intangible manifestations of heritage. They are a construct of the intricate relationship that exists between heritage and host communities, where communities are recognized as owners of heritage and so, set up management mechanisms to ensure its adequate conservation. Multiple heritage condition surveys were conducted to assess the effectiveness of using TMS in the conservation of both natural and cultural heritage. Surveys were done at Nharira Hills, Mahwemasimike, Dzimbahwe, Manjowe Rock art sites and Norumedzo forest which are heritage places in Zimbabwe. It assessed the state of conservation of the five case studies and assessed the role that host communities play in the management of these heritage places. It was revealed that TMS’s are effective in the conservation of natural heritage, however in relation to heritage forms with cultural manifestations, there are major disparities. These range from differences in appreciation and perception of value within communities leading to vandalism, over emphasis in the conservation of the intangible element as opposed to the tangible. This leaves the tangible element at risk. Despite these issues, TMS are a reliable knowledge base which enables more holistic conservation approaches for cultural and natural heritage. Keywords: communities, cultural intangible, tangible heritage, traditional management systems, naturalProcedia PDF Downloads 227 1473 The Importance of Student Feedback in Development of Virtual Engineering Laboratories Authors: A. A. Altalbe, N. W Bergmann Abstract:There has been significant recent interest in on-line learning, as well as considerable work on developing technologies for virtual laboratories for engineering students. After reviewing the state-of-the-art of virtual laboratories, this paper steps back from the technology issues to look in more detail at the pedagogical issues surrounding virtual laboratories, and examines the role of gathering student feedback in the development of such laboratories. The main contribution of the paper is a set of student surveys before and after a prototype deployment of a simulation laboratory tool, and the resulting analysis which leads to some tentative guidelines for the design of virtual engineering laboratories. Keywords: engineering education, elearning, electrical engineering, virtual laboratoriesProcedia PDF Downloads 269 1472 Keypoints Extraction for Markerless Tracking in Augmented Reality Applications: A Case Study in Dar As-Saraya Museum Authors: Jafar W. Al-Badarneh, Abdalkareem R. Al-Hawary, Abdulmalik M. Morghem, Mostafa Z. Ali, Rami S. Al-Gharaibeh Abstract:Archeological heritage is at the heart of each country’s national glory. 
Moreover, it could develop into a source of national income. Heritage management requires socially-responsible marketing that achieves high visitor satisfaction while maintaining high site conservation. We have developed an Augmented Reality (AR) experience for heritage and cultural reservation at Dar-As-Saraya museum in Jordan. Our application of this notion relied on markerless-based tracking approach. This approach uses keypoints extraction technique where features of the environment are identified and defined into the system as keypoints. A set of these keypoints forms a tracker for an augmented object to be displayed and overlaid with a real scene at Dar As-Saraya museum. We tested and compared several techniques for markerless tracking and then applied the best technique to complete a mosaic artifact with AR content. The successful results from our application open the door for applications in open archeological sites where markerless tracking is mostly needed. Keywords: augmented reality, cultural heritage, keypoints extraction, virtual recreationProcedia PDF Downloads 259 1471 Augmented Reality: New Relations with the Architectural Heritage Education Authors: Carla Maria Furuno Rimkus Abstract:The technologies related to virtual reality and augmented reality in combination with mobile technologies, are being more consolidated and used each day. The increasing technological availability along with the decrease of their acquisition and maintenance costs, have favored the expansion of its use in the field of historic heritage. In this context it is focused, in this article, on the potential of mobile applications in the dissemination of the architectural heritage, using the technology of Augmented Reality. From this perspective approach, it is discussed about the process of producing an application for mobile devices on the Android platform, which combines the technologies of geometric modeling with augmented reality (AR) and access to interactive multimedia contents with cultural, social and historic information of the historic building that we take as the object of study: a block with a set of buildings built in the XVIII century, known as "Quarteirão dos Trapiches", which was modeled in 3D, coated with the original texture of its facades and displayed on AR. From this perspective approach, this paper discusses about methodological aspects of the development of this application regarding to the process and the project development tools, and presents our considerations on methodological aspects of developing an application for the Android system, focused on the dissemination of the architectural heritage, in order to encourage the tourist potential of the city in a sustainable way and to contribute to develop the digital documentation of the heritage of the city, meeting a demand of tourists visiting the city and the professionals who work in the preservation and restoration of it, consisting of architects, historians, archaeologists, museum specialists, among others. Keywords: augmented reality, architectural heritage, geometric modeling, mobile applicationsProcedia PDF Downloads 410 1470 Virtual Reality Design Platform to Easily Create Virtual Reality Experiences Authors: J. Casteleiro- Pitrez Abstract:The interest in Virtual Reality (VR) keeps increasing among the community of designers. To develop this type of immersive experience, the understanding of new processes and methodologies is as fundamental as its complex implementation which usually implies hiring a specialized team. 
In this paper, we introduce a case study, a platform that allows designers to easily create complex VR experiences, present its features, and its development process. We conclude that this platform provides a complete solution for the design and development of VR experiences, no-code needed. Keywords: creatives, designers, virtual reality, virtual reality design platform, virtual reality system, no-codingProcedia PDF Downloads 63 1469 A Preliminary Development of Virtual Sight-Seeing Website for Thai Temples on Rattanakosin Island Authors: Pijitra Jomsri Abstract:Currently, the sources of cultures and tourist attractions are presented in online documentary form only. In order to make them more virtual, the researcher then collected and presented them in the form of Virtual Temple. The prototype, which is a replica of the actual location, was developed to the website and allows people who are interested in Rattanakosin Island can see in form of Panorama Pan View. By this way, anyone can access the data and appreciate the beauty of Rattanakosin Island in the virtual model like the real place. The result from the experiment showed that the levels of the knowledge on Thai temples in Rattanakosin Island increased; moreover, the users were highly satisfied with the systems. It can be concluded that virtual temples can support to publicize Thai arts, cultures and travels, as well as it can be utilized effectively. Keywords: virtual sight-seeing, Rattanakosin Island, Thai temples, virtual templeProcedia PDF Downloads 252 1468 Understanding of Heritage Values within University Education Systems in the Kingdom of Saudi Arabia Authors: Mahmoud Tarek Mohamed Hammad Abstract:Despite the importance of the role and efforts made by the universities of the Kingdom of Saudi Arabia in reviving and preserving heritage architecture as an important cultural heritage in the Kingdom, The idea revolves around restoration and conservation processes and neglects the architectural heritage values, whose content can be used in sustainable contemporary architectural works. Educational values based on heritage architecture and how to integrate with the contemporary requirements were investigated in this research. For this purpose, by understanding the heritage architectural values as well as educational, academic process, the researcher presented an educational model of questionnaire forms for architecture students and the staff at the Architecture Department at Al-Baha University as a case study that serves the aims of the research. The results of the research show that heritage values especially those interview results are considered as a positive indicator of the importance of these values. The students and the staff need both to gain an understanding of heritage values as well as an understanding of theories of incorporating those values into the design process of contemporary local architecture. The research concludes that a correct understanding of the heritage values, its performance, and its reintegration with modern architecture technology should be focused on architectural education. Keywords: heritage architecture, academic work, heritage values, sustainable contemporary local architecturalProcedia PDF Downloads 64 1467 Comparative Analysis of Real and Virtual Garment Fit Authors: Kristina Ancutiene Abstract:The goal of this research is to perform comparative analysis between the virtual fit of the woman's dress and the fit on a real person. 
The dress fitting was done using mechanical and structural parameters of the 100 % linen fabric and using Modaris_3D_Fit software (CAD Lectra). The dress was also sawn after which garment fit differences of real and virtual dress was researched. Four respondents whose figures were similar were used to evaluate the ease and strain deformations of the real and virtual dress. The scores that were given by the respondents wearing the real dress were compared to the ease and strain results that were given by the software. The main result was that respondents feel similar to the virtual stretch deformations but their ease feeling is not always matching the virtual ones. The results may be influenced by psychological factors and different understanding about purpose of garment. Keywords: virtual garment, 3D CAD, garment fit, mechanical propertiesProcedia PDF Downloads 212 1466 A Critical Evaluation of the Factors that Influence Visitor Engagement with U.K. Slavery Heritage Museums: A Passive Symbolic Netnographic Study Authors: Shemroy Roberts Abstract:Despite minor theoretical contributions in slavery heritage tourism research that have commented on the demand-side perspective, visitor behavior and engagement with slavery heritage attractions remain unexplored. Thus, there is a need for empirical studies and theoretical knowledge to understand visitor engagement with slavery heritage attractions, particularly U.K. slavery heritage museums. The purpose of this paper is to critically evaluate the factors that influence visitor engagement with U.K. slavery heritage museums. This qualitative research utilizes a passive symbolic ethnographic methodology. Seven U.K. slavery heritage museums will be used to collect data through unobtrusive internet-mediated observations of TripAdvisor reviews and online semi-structured interviews with managers and curators. Preliminary findings indicate that social media, prior knowledge, multiple motivations, cultural capital, and the design and layout of exhibits influence visitor engagement with slavery heritage museums. This research contributes to an understanding of visitor engagement with U.K. slavery heritage museums. The findings of this paper will provide insights into the factors that influence visitor engagement with U.K. slavery heritage museums to managers, curators, and decision-makers responsible for designing and managing those attractions. Therefore, the results of this paper will enable museum professionals to better manage visitor engagement with slavery heritage museums. Keywords: museums, netnography, slavery, visitor engagementProcedia PDF Downloads 107 1465 Developing a Model for Information Giving Behavior in Virtual Communities Authors: Pui-Lai To, Chechen Liao, Tzu-Ling Lin Abstract:Virtual communities have created a range of new social spaces in which to meet and interact with one another. Both as a stand-alone model or as a supplement to sustain competitive advantage for normal business models, building virtual communities has been hailed as one of the major strategic innovations of the new economy. However for a virtual community to evolve, the biggest challenge is how to make members actively give information or provide advice. Even in busy virtual communities, usually, only a small fraction of members post information actively. 
In order to investigate the determinants of information giving willingness of those contributors who usually actively provide their opinions, we proposed a model to understand the reasons for contribution in communities. The study will definitely serve as a basis for the future growth of information giving in virtual communities. Keywords: information giving, social identity, trust, virtual communityProcedia PDF Downloads 225 1464 Highlighting Adverse Effects of Privatization of Heritage on Taj Mahal and Providing Solutions to Improve the Condition without Privatizing Authors: Avani Saraswat Abstract:The paper studies the present condition of Taj Mahal (the UNESCO world heritage site) and the reasons behind deterioration. Analysis is done to explore the reasons behind this building to be included in the list of adopt heritage scheme, by the Government of India. The aim is to find out the future effects on Taj Mahal after being adopted by a private body. Finally, it suggests solutions which can lead to improvement of the present condition of the building. In order to establish a research, a further analysis is done through a case study of Red Fort, New Delhi (another UNESCO world heritage site). This monument was given to Dalmia Group of India Pvt. Ltd. for the tenure of 5 years. Paper discusses the consequences of privatization on Red Fort and then analyze it for Taj Mahal. It terms monument as riches of a heritage chest, not as a commercial tourist place. The study is concluded with the ideas and suggestions proposed for saving Taj Mahal and advantages on improving the health of the building.
https://publications.waset.org/abstracts/search?q=virtual%20heritage
A contrastive dimensionality reduction approach (CDR) is proposed for interactive visual cluster analysis that employs link-based interactions to steer embeddings and outperforms existing techniques in terms of preserving correct neighborhood structures and improving visual cluster separation.

- Incorporation of Human Knowledge into Data Embeddings to Improve Pattern Significance and Interpretability (IEEE Transactions on Visualization and Computer Graphics, 2023). An approach that incorporates human knowledge into data embeddings to improve pattern significance and interpretability by externalizing tacit human knowledge as explicit sample labels and adding a classification loss in the embedding network to encode samples' classes is proposed.
- PRAGMA: Interactively Constructing Functional Brain Parcellations (2020 IEEE Visualization Conference (VIS), 2020). An interactive visualization method, PRAGMA, that allows domain experts to derive scan-specific parcellations from established atlases and shows the potential to enable exploration of individualized and state-specific brain parcellations and to offer interesting insights into functional brain networks is presented.
- A Survey of Human-Centered Evaluations in Human-Centered Machine Learning (Computer Graphics Forum, 2021). This survey provides a comprehensive overview of evaluations in the field of human-centered machine learning, focusing on human-related factors that influence trust, interpretability, and explainability.
- DPEBic: Detecting Essential Proteins in Gene Expressions Using Encoding and Biclustering Algorithm (2021). This research proposes an optimized method, DPEBic, an algorithm for detecting essential proteins by biclustering and encoding each protein, incorporating biclustering with gene encoding to detect co-expressed essential proteins.
- Evaluating the Benefits of Explicit and Semi-Automated Clusters for Immersive Sensemaking (2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2022). A semi-automated cluster creation technique that determines the user's intent to create a cluster based on object proximity is designed, and the results provide support for the approach of adding intelligent semantic interactions to aid the users of immersive analytics systems.

References (showing 1-10 of 57):
- Interactive visual exploration and refinement of cluster assignments (bioRxiv, 2017). A method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments is introduced, applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms.
- ClusterSculptor: A Visual Analytics Tool for High-Dimensional Data (2007 IEEE Symposium on Visual Analytics Science and Technology, 2007). This paper describes a comprehensive and intuitive framework to aid scientists in the derivation of classification hierarchies in cluster analysis, using k-means as the overall clustering engine, but allowing them to tune its parameters interactively based on a non-distorted compact visual presentation of the inherent characteristics of the data in high-dimensional space.
- Clustrophile 2: Guided Visual Clustering Analysis (IEEE Transactions on Visualization and Computer Graphics, 2019). Clustrophile 2, a new interactive tool for guided clustering analysis that guides users in clustering-based exploratory analysis, adapts user feedback to improve user guidance, facilitates the interpretation of clusters, and helps quickly reason about differences between clusterings is introduced.
- SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance (IEEE Transactions on Visualization and Computer Graphics, 2018). A multi-stage Visual Analytics approach for iterative cluster refinement together with an implementation that uses Self-Organizing Maps (SOM) to analyze time series data and enhanced understanding of clustering results as well as the interactive process itself is presented.
- Clustervision: Visual Supervision of Unsupervised Clustering (IEEE Transactions on Visualization and Computer Graphics, 2018). Clustervision is a visual analytics tool that helps ensure data scientists find the right clustering among the large amount of techniques and parameters available and empowers users to choose an effective representation of their complex data.
- Clustrophile: A Tool for Visual Clustering Analysis (arXiv, 2017). Clustrophile is introduced, an interactive tool for iteratively computing discrete and continuous data clusters, rapidly exploring different choices of clustering parameters, and reasoning about clustering instances in relation to data dimensions.
- Visually comparing multiple partitions of data with applications to clustering (Electronic Imaging, 2009). This work extends Parallel Sets to a new visualization tool which provides for the mutual comparison and evaluation of multiple partitions of the same dataset and describes a novel layout algorithm for informatively rearranging the order of records and dimensions.
- VISTA: Validating and Refining Clusters Via Visualization (Information Visualization, 2004). This paper addresses the problem of clustering and validating arbitrarily shaped clusters with a visual framework (VISTA) to capitalize on the power of visualization and interactive feedback to encourage domain experts to participate in the clustering revision and clustering validation process.
- iVisClustering: An Interactive Visual Document Clustering via Topic Modeling (Computer Graphics Forum, 2012). An interactive visual analytics system for document clustering, called iVisClustering, is proposed based on a widely used topic modeling method, latent Dirichlet allocation (LDA), which provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates.
- Interactively Exploring Hierarchical Clustering Results (Computer, 2002). The Hierarchical Clustering Explorer integrates four interactive features to provide information visualization techniques that allow users to control the processes and interact with the results.
https://www.semanticscholar.org/paper/Geono-Cluster%3A-Interactive-Visual-Cluster-Analysis-Das-Saket/0f5145439878f7b0c055964321868f9da48a8f21
Clustering in machine learning is an unsupervised learning family of algorithms that divides objects into clusters based on similar characteristics.

What is Clustering in Machine Learning?
Clustering is used to group similar data points together based on their characteristics. Clustering machine-learning algorithms group similar elements in such a way that the elements of a cluster are closer to each other than to the elements of any other cluster.

Examples of Clustering Algorithms
Here are the 3 most popular clustering algorithms that we will cover in this article:
- KMeans
- Hierarchical Clustering
- DBSCAN
Scikit-learn also implements several other clustering methods.

Examples of clustering problems
- Recommender systems
- Semantic clustering
- Customer segmentation
- Targeted marketing

How do clustering algorithms work?
Each clustering algorithm works differently from the others, but the logic of KMeans and Hierarchical clustering is similar. Clustering machine learning algorithms work by:
- Selecting cluster centers
- Computing distances from data points to cluster centers, or between cluster centers
- Redefining cluster centers based on the resulting distances
- Repeating the process until the optimal clusters are reached
This is an overly simplified view of clustering, but we will dive deeper into how each algorithm works in the next sections.

How does KMeans Clustering Work?
The KMeans clustering algorithm works by starting with a fixed number of clusters and moving the cluster centers until the optimal clustering is met:
- Defining a number of clusters at the start
- Selecting random cluster centers
- Computing the distance from each point to each cluster center
- Finding new cluster centers using the mean of the points assigned to each cluster
- Repeating until convergence
Some examples of KMeans clustering implementations are:
- KMeans from Scikit-learn's sklearn.cluster
- kmeans from SciPy's scipy.cluster.vq

How does Hierarchical Clustering Work?
The hierarchical clustering algorithm works by starting with 1 cluster per data point and merging clusters together until the optimal clustering is met:
- Having 1 cluster for each data point
- Defining new cluster centers using the mean of the X and Y coordinates
- Combining the cluster centers closest to each other
- Finding new cluster centers based on the mean
- Repeating until the optimal number of clusters is met
A dendrogram can be used to visualize hierarchical clustering: starting with 1 cluster per data point at the bottom and merging the closest clusters at each iteration, ending up with a single cluster for the entire dataset.
Some examples of hierarchical clustering implementations are:
- hierarchy from SciPy's scipy.cluster

How does DBSCAN Clustering Work?
DBSCAN stands for Density-Based Spatial Clustering of Applications with Noise. The DBSCAN clustering algorithm works by assuming that clusters are regions of high-density data points separated by regions of low density.
Some examples of DBSCAN clustering implementations are:
- DBSCAN from Scikit-learn's sklearn.cluster
- HDBSCAN

How do Gaussian Mixture Clustering Models Work?
Gaussian Mixture Models, or GMMs, are probabilistic models that use Gaussian distributions, also known as normal distributions, to cluster data points together. By fitting a certain number of Gaussian distributions, the model assumes that each distribution is a separate cluster.
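To make the GMM idea concrete, here is a minimal sketch (not taken from the original post; the synthetic dataset and parameter choices are illustrative assumptions) that fits Scikit-learn's GaussianMixture and reads off both the hard cluster labels and the soft, per-component membership probabilities that distinguish GMMs from KMeans.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic data: three Gaussian-shaped groups (illustrative only).
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.2, random_state=42)

# Assume each of the 3 Gaussian components is a separate cluster.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=42)
gmm.fit(X)

hard_labels = gmm.predict(X)             # most likely component per point
soft_memberships = gmm.predict_proba(X)  # probability of each component per point
print(hard_labels[:5])
print(np.round(soft_memberships[:5], 3))
```

Because the memberships are probabilities, borderline points can be flagged for review rather than forced into a single cluster.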
Some examples of Gaussian mixture clustering implementations are:
- GaussianMixture from Scikit-learn's sklearn.mixture

Interesting Work from the Community
- How to Master the Popular DBSCAN Clustering Algorithm for Machine Learning, by Abhishek Sharma
- Build Better and Accurate Clusters with Gaussian Mixture Models
- Python Script: Automatically Cluster Keywords In Bulk For Actionable Insights V2, by Lee Foot
- Polyfuzz auto-mapping + auto-grouping tests, by Charly Wargnier

Conclusion
This concludes the introduction to clustering in machine learning. We have covered how clustering works and provided an overview of the most common clustering machine learning models. The next step is to learn how to use Scikit-learn to train each clustering machine learning model on real data.
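Following on from that conclusion, the sketch below (not code from the article; the toy dataset, parameter values, and the silhouette comparison are assumptions added here) trains the three algorithms covered above with Scikit-learn on the same synthetic data.

```python
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.metrics import silhouette_score

# Two interleaving half-moons: a shape that density-based methods handle
# better than centroid-based ones (illustrative toy data).
X, _ = make_moons(n_samples=400, noise=0.07, random_state=0)

models = {
    "KMeans": KMeans(n_clusters=2, n_init=10, random_state=0),
    "Hierarchical": AgglomerativeClustering(n_clusters=2, linkage="ward"),
    "DBSCAN": DBSCAN(eps=0.2, min_samples=5),
}

for name, model in models.items():
    labels = model.fit_predict(X)
    n_clusters = len(set(labels) - {-1})  # DBSCAN marks noise points as -1
    score = silhouette_score(X, labels) if n_clusters > 1 else float("nan")
    print(f"{name}: {n_clusters} clusters, silhouette = {score:.2f}")
```

On the two half-moons, DBSCAN typically recovers the crescent shapes while the centroid-based methods split them, which is the kind of difference the post describes.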
https://www.jcchouinard.com/clustering-in-machine-learning/
Bayesian methods for hierarchical clustering and community discovery. Doctoral thesis, UCL (University College London). Abstract: Discovering clusters in data is a common goal of statistical data analysis. Two kinds of clustering, hierarchical clustering and community discovery, are considered here, as well as their composition: discovering hierarchies of communities. Hierarchical clustering discovers hierarchies of clusters of data, represented as a tree whose leaves are the data. Community discovery finds clusters of people, most commonly from the adjacency matrix of a graph of the relationships between the people. We shall leverage Bayesian statistics to construct several models and corresponding efficient learning algorithms for discovering hierarchies, communities and hierarchies of communities. This thesis has three main contributions, each being a model and a learning algorithm for tackling one of these clustering problems. First we develop an efficient model-based hierarchical clustering algorithm using greedy model selection. Unlike many other hierarchical clustering algorithms, our model is not necessarily a binary tree, but can be any tree where each internal node may have any number of children. This can lead to simpler hierarchies that we find are just as predictive of the data, but are more interpretable as the hierarchies are less visually cluttered and the underlying model has fewer parameters than a binary tree-based model. We then adapt this hierarchical clustering model and algorithm to discovering communities in social networks based upon their adjacency matrix, where the leaves of the discovered tree correspond to people. This adaptation is not straightforward as a naive adaptation leads to an inefficient learning algorithm. We develop a dynamic programming scheme and a number of approximations that yield several fast algorithms. We then show empirically that these approximations are faster than the Infinite Relational Model, producing similar or better predictions in less time for the task of predicting unobserved edges in a graph. Finally we tackle the problem of discovering communities directly from interactions among individuals, rather than from the adjacency matrix of a graph. We develop a model that uses a statistical notion of reciprocity to discover communities from time-series interaction data. We then develop a Markov Chain Monte Carlo method for inference and show empirically that this model is much better at predicting future interactions among individuals than several alternate models.
https://discovery.ucl.ac.uk/id/eprint/1466632/
dendextend is an R package for creating and comparing visually appealing tree diagrams. dendextend provides utility functions for manipulating dendrogram objects (their color, shape, and content) as well as several advanced methods for comparing trees to one another (both statistically and visually). As such, dendextend offers a flexible framework for enhancing R's rich ecosystem of packages for performing hierarchical clustering of items.

How frequently do clusters occur in hierarchical clustering analysis? A graph theoretical approach to studying ties in proximity (Journal of Cheminformatics, open access, published over 4 years ago)
Hierarchical cluster analysis (HCA) is a widely used classificatory technique in many areas of scientific knowledge. Applications usually yield a dendrogram from an HCA run over a given data set, using a grouping algorithm and a similarity measure. However, even when such parameters are fixed, ties in proximity (i.e. two clusters equidistant from a third one) may produce several different dendrograms, having different possible clustering patterns (different classifications). This situation is usually disregarded and conclusions are based on a single result, leading to questions concerning the permanence of clusters in all the resulting dendrograms; this happens, for example, when using HCA for grouping molecular descriptors to select the less similar ones in QSAR studies.

Optimized leaf ordering with class labels for hierarchical clustering (Journal of Bioinformatics and Computational Biology, published over 5 years ago)
Hierarchical clustering is extensively used in the bioinformatics community to analyze biomedical data. These data are often tagged with class labels, e.g. disease subtypes or gene ontology (GO) terms. Heatmaps in connection with dendrograms are the common standard to visualize results of hierarchical clustering. The heatmap can be enriched by an additional color bar at the side, indicating for each instance in the data set the class to which it belongs. In the ideal case, when the clustering matches the classes perfectly, one would expect that instances from the same class cluster together and the color bar consists of well-separated color blocks without frequent alternation of colors (classes). But even when instances from the same class cluster perfectly together, the dendrogram might not reflect this important aspect, because its representation is not unique. In this paper, we propose a leaf ordering algorithm for the dendrogram that, while preserving the hierarchical clustering result, tries to group instances from the same class together. It is based on the concept of dynamic programming, which can efficiently compute the optimal or nearly optimal order consistent with the structure of the tree.
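The leaf-ordering idea described in the second abstract can be illustrated outside R as well. The following is a minimal Python sketch using SciPy rather than dendextend (the toy data are invented, and this is a generic illustration, not the papers' algorithms): linkage builds the agglomerative tree, optimal_ordering=True reorders leaves so that adjacent leaves are as similar as possible, and dendrogram exposes the resulting leaf order.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(0)
# Toy data: three loose groups in 2-D (purely illustrative).
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(10, 2)),
    rng.normal(loc=(3, 3), scale=0.3, size=(10, 2)),
    rng.normal(loc=(0, 4), scale=0.3, size=(10, 2)),
])

# Agglomerative clustering; optimal_ordering reorders the leaves so that
# adjacent leaves in the dendrogram are as similar as possible.
Z = linkage(X, method="average", optimal_ordering=True)

# Cut the tree into 3 flat clusters and inspect the assignment.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)

# dendrogram() returns the plot structure (leaf order, coordinates);
# with matplotlib installed it also draws the tree.
tree = dendrogram(Z, no_plot=True)
print(tree["leaves"])  # leaf order after optimal ordering
```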
http://scicombinator.com/concepts/dendrogram/articles
Abstract: Clustering is a process of grouping objects and data into groups of clusters to ensure that data objects from the same cluster are identical to each other. Clustering algorithms are one of the areas in data mining, and they can be classified into partition, hierarchical, density-based, and grid-based. Therefore, in this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The obtained state of the art of these algorithms will help in eliminating the current problems, as well as deriving more robust and scalable algorithms for clustering. Keywords: clustering, unsupervised learning, algorithms, hierarchicalProcedia PDF Downloads 472 491 Hybrid Hierarchical Clustering Approach for Community Detection in Social Network Authors: Radhia Toujani, Jalel Akaichi Abstract: Social Networks generally present a hierarchy of communities. To determine these communities and the relationship between them, detection algorithms should be applied. Most of the existing algorithms, proposed for hierarchical communities identification, are based on either agglomerative clustering or divisive clustering. In this paper, we present a hybrid hierarchical clustering approach for community detection based on both bottom-up and top-down clustering. Obviously, our approach provides more relevant community structure than a hierarchical method which considers only divisive or agglomerative clustering to identify communities. Moreover, we performed some comparative experiments to enhance the quality of the clustering results and to show the effectiveness of our algorithm. Keywords: agglomerative hierarchical clustering, community structure, divisive hierarchical clustering, hybrid hierarchical clustering, opinion mining, social network, social network analysisProcedia PDF Downloads 220 490 Performance Analysis of Hierarchical Agglomerative Clustering in a Wireless Sensor Network Using Quantitative Data Authors: Tapan Jain, Davender Singh Saini Abstract: Clustering is a useful mechanism in wireless sensor networks which helps to cope with scalability and data transmission problems. The basic aim of our research work is to provide efficient clustering using hierarchical agglomerative clustering (HAC). If the distance between the sensing nodes is calculated using their location, then it is quantitative HAC. This paper compares the various agglomerative clustering techniques applied in a wireless sensor network using quantitative data. The simulations are done in MATLAB and the comparisons are made between the different protocols using dendrograms. Keywords: routing, hierarchical clustering, agglomerative, quantitative, wireless sensor networkProcedia PDF Downloads 414 489 Flowing Online Vehicle GPS Data Clustering Using a New Parallel K-Means Algorithm Authors: Orhun Vural, Oguz Bayat, Rustu Akay, Osman N. Ucan Abstract: This study presents a new parallel approach to clustering of GPS data. Evaluation has been made by comparing the execution time of various clustering algorithms on GPS data. This paper proposes a parallel approach based on a neighborhood K-means algorithm to make it faster. The proposed parallelization approach assumes that each GPS data point represents a vehicle and that vehicles close to each other communicate after they are clustered. This parallelization approach has been examined on different sized, continuously changing GPS data and compared with the serial K-means algorithm and other serial clustering algorithms.
The results demonstrated that the proposed parallel K-means algorithm works much faster than other clustering algorithms. Keywords: parallel k-means algorithm, parallel clustering, clustering algorithms, clustering on flowing dataProcedia PDF Downloads 91 488 Semi-Supervised Hierarchical Clustering Given a Reference Tree of Labeled Documents Authors: Ying Zhao, Xingyan Bin Abstract: Semi-supervised clustering algorithms have been shown effective at improving the clustering process even with limited supervision. However, semi-supervised hierarchical clustering remains challenging due to the complexities of expressing constraints for agglomerative clustering algorithms. This paper proposes novel semi-supervised agglomerative clustering algorithms to build a hierarchy based on a known reference tree. We prove that by enforcing distance constraints defined by a reference tree during the process of hierarchical clustering, the resultant tree is guaranteed to be consistent with the reference tree. We also propose a framework that allows the hierarchical tree generation to be aware of the levels of the agglomerative tree under creation, so that metric weights can be learned and adopted at each level in a recursive fashion. The experimental evaluation shows that the additional cost of our constraint-based semi-supervised hierarchical clustering algorithm (HAC) is negligible, and our combined semi-supervised HAC algorithm outperforms the state-of-the-art algorithms on real-world datasets. The experiments also show that our proposed methods can improve clustering performance even with a small number of unevenly distributed labeled data. Keywords: semi-supervised clustering, hierarchical agglomerative clustering, reference trees, distance constraintsProcedia PDF Downloads 409 487 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis Authors: C. B. Le, V. N. Pham Abstract: In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different data. Therefore, multi-source data linking is essential to improve clustering performance. However, in practice multi-source data is often heterogeneous, uncertain, and large. This issue is considered a major challenge of multi-source data. Ensemble is a versatile machine learning model in which learning techniques can work in parallel, with big data. Clustering ensemble has been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most of the traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on the standard sample data set.
The experimental results demonstrate the superior performance of the FOMOCE method compared to the existing clustering ensemble methods and multi-source clustering methods. Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clusteringProcedia PDF Downloads 44 486 ACOPIN: An ACO Algorithm with TSP Approach for Clustering Proteins in Protein Interaction Networks Authors: Jamaludin Sallim, Rozlina Mohamed, Roslina Abdul Hamid Abstract:In this paper, we proposed an Ant Colony Optimization (ACO) algorithm together with Traveling Salesman Problem (TSP) approach to investigate the clustering problem in Protein Interaction Networks (PIN). We named this combination as ACOPIN. The purpose of this work is two-fold. First, to test the efficacy of ACO in clustering PIN and second, to propose the simple generalization of the ACO algorithm that might allow its application in clustering proteins in PIN. We split this paper to three main sections. First, we describe the PIN and clustering proteins in PIN. Second, we discuss the steps involved in each phase of ACO algorithm. Finally, we present some results of the investigation with the clustering patterns. Keywords: ant colony optimization algorithm, searching algorithm, protein functional module, protein interaction networkProcedia PDF Downloads 468 485 Spectral Clustering for Manufacturing Cell Formation Authors: Yessica Nataliani, Miin-Shen Yang Abstract:Cell formation (CF) is an important step in group technology. It is used in designing cellular manufacturing systems using similarities between parts in relation to machines so that it can identify part families and machine groups. There are many CF methods in the literature, but there is less spectral clustering used in CF. In this paper, we propose a spectral clustering algorithm for machine-part CF. Some experimental examples are used to illustrate its efficiency. Overall, the spectral clustering algorithm can be used in CF with a wide variety of machine/part matrices. Keywords: group technology, cell formation, spectral clustering, grouping efficiencyProcedia PDF Downloads 263 484 Investigation of Clustering Algorithms Used in Wireless Sensor Networks Authors: Naim Karasekreter, Ugur Fidan, Fatih Basciftci Abstract:Wireless sensor networks are networks in which more than one sensor node is organized among themselves. The working principle is based on the transfer of the sensed data over the other nodes in the network to the central station. Wireless sensor networks concentrate on routing algorithms, energy efficiency and clustering algorithms. In the clustering method, the nodes in the network are divided into clusters using different parameters and the most suitable cluster head is selected from among them. The data to be sent to the center is sent per cluster, and the cluster head is transmitted to the center. With this method, the network traffic is reduced and the energy efficiency of the nodes is increased. In this study, clustering algorithms were examined in terms of clustering performances and cluster head selection characteristics to try to identify weak and strong sides. This work is supported by the Project 17.Kariyer.123 of Afyon Kocatepe University BAP Commission. 
Keywords: wireless sensor networks (WSN), clustering algorithm, cluster head, clusteringProcedia PDF Downloads 396 483 A Comparative Study of Multi-SOM Algorithms for Determining the Optimal Number of Clusters Authors: Imèn Khanchouch, Malika Charrad, Mohamed Limam Abstract:The interpretation of the quality of clusters and the determination of the optimal number of clusters is still a crucial problem in clustering. We focus in this paper on multi-SOM clustering method which overcomes the problem of extracting the number of clusters from the SOM map through the use of a clustering validity index. We then tested multi-SOM using real and artificial data sets with different evaluation criteria not used previously such as Davies Bouldin index, Dunn index and silhouette index. The developed multi-SOM algorithm is compared to k-means and Birch methods. Results show that it is more efficient than classical clustering methods. Keywords: clustering, SOM, multi-SOM, DB index, Dunn index, silhouette indexProcedia PDF Downloads 457 482 A Fuzzy Kernel K-Medoids Algorithm for Clustering Uncertain Data Objects Authors: Behnam Tavakkol Abstract:Uncertain data mining algorithms use different ways to consider uncertainty in data such as by representing a data object as a sample of points or a probability distribution. Fuzzy methods have long been used for clustering traditional (certain) data objects. They are used to produce non-crisp cluster labels. For uncertain data, however, besides some uncertain fuzzy k-medoids algorithms, not many other fuzzy clustering methods have been developed. In this work, we develop a fuzzy kernel k-medoids algorithm for clustering uncertain data objects. The developed fuzzy kernel k-medoids algorithm is superior to existing fuzzy k-medoids algorithms in clustering data sets with non-linearly separable clusters. Keywords: clustering algorithm, fuzzy methods, kernel k-medoids, uncertain dataProcedia PDF Downloads 65 481 An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering Authors: Jeugert Kujtila, Kristi Hoxhalli, Ramazan Dalipi, Erjon Cota, Ardit Murati, Erind Bedalli Abstract:Clustering is a versatile instrument in the analysis of collections of data providing insights of the underlying structures of the dataset and enhancing the modeling capabilities. The fuzzy approach to the clustering problem increases the flexibility involving the concept of partial memberships (some value in the continuous interval [0, 1]) of the instances in the clusters. Several fuzzy clustering algorithms have been devised like FCM, Gustafson-Kessel, Gath-Geva, kernel-based FCM, PCM etc. Each of these algorithms has its own advantages and drawbacks, so none of these algorithms would be able to perform superiorly in all datasets. In this paper we will experimentally compare FCM, GK, GG algorithm and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. Firstly we will theoretically dis-cuss the advantages and drawbacks for each of these algorithms and we will describe the hybrid clustering model exploiting the advantages and diminishing the drawbacks of each algorithm. Secondly we will experimentally compare the accuracy of the hybrid model by applying it on several benchmark and synthetic datasets. 
Keywords: fuzzy clustering, fuzzy c-means algorithm (FCM), Gustafson-Kessel algorithm, hybrid clustering modelProcedia PDF Downloads 344 480 Using Closed Frequent Itemsets for Hierarchical Document Clustering Authors: Cheng-Jhe Lee, Chiun-Chieh Hsu Abstract:Due to the rapid development of the Internet and the increased availability of digital documents, the excessive information on the Internet has led to information overflow problem. In order to solve these problems for effective information retrieval, document clustering in text mining becomes a popular research topic. Clustering is the unsupervised classification of data items into groups without the need of training data. Many conventional document clustering methods perform inefficiently for large document collections because they were originally designed for relational database. Therefore they are impractical in real-world document clustering and require special handling for high dimensionality and high volume. We propose the FIHC (Frequent Itemset-based Hierarchical Clustering) method, which is a hierarchical clustering method developed for document clustering, where the intuition of FIHC is that there exist some common words for each cluster. FIHC uses such words to cluster documents and builds hierarchical topic tree. In this paper, we combine FIHC algorithm with ontology to solve the semantic problem and mine the meaning behind the words in documents. Furthermore, we use the closed frequent itemsets instead of only use frequent itemsets, which increases efficiency and scalability. The experimental results show that our method is more accurate than those of well-known document clustering algorithms. Keywords: FIHC, documents clustering, ontology, closed frequent itemsetProcedia PDF Downloads 221 479 Improved K-Means Clustering Algorithm Using RHadoop with Combiner Authors: Ji Eun Shin, Dong Hoon Lim Abstract:Data clustering is a common technique used in data analysis and is used in many applications, such as artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known clustering algorithm aiming to cluster a set of data points to a predefined number of clusters. In this paper, we implement K-means algorithm based on MapReduce framework with RHadoop to make the clustering method applicable to large scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function of our map output to decrease the amount of data needed to be processed by reducers. The experimental results demonstrated that K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also showed that our K-means algorithm using RHadoop with combiner was faster than regular algorithm without combiner as the size of data set increases. Keywords: big data, combiner, K-means clustering, RHadoopProcedia PDF Downloads 284 478 Application of Data Mining for Aquifer Environmental Assessment Authors: Saman Javadi, Mehdi Hashemy, Mohahammad Mahmoodi Abstract:Vulnerability maps are employed as an important solution in order to handle entrance of pollution into the aquifers. The common way to provide vulnerability map is DRASTIC. Meanwhile, application of the method is not easy to apply for any aquifer due to choosing appropriate constant values of weights and ranks. In this study, a new approach using k-means clustering is applied to make vulnerability maps. 
Four features, depth to groundwater, hydraulic conductivity, recharge value and vadose zone, were considered at the same time as features for clustering. Five regions recognized in the case study represent zones with different levels of vulnerability. The findings show that clustering provides a realistic vulnerability map; the Pearson correlation coefficient between nitrate concentrations and clustering vulnerability is 61%. Keywords: clustering, data mining, groundwater, vulnerability assessmentProcedia PDF Downloads 479 477 3D Mesh Coarsening via Uniform Clustering Authors: Shuhua Lai, Kairui Chen Abstract: In this paper, we present a fast and efficient mesh coarsening algorithm for 3D triangular meshes. This approach can be applied to very complex 3D meshes of arbitrary topology and with millions of vertices. The algorithm is based on the clustering of the input mesh elements, which divides the faces of an input mesh into a given number of clusters for clustering purposes by approximating the Centroidal Voronoi Tessellation of the input mesh. Once a clustering is achieved, it provides us an efficient way to construct uniform tessellations, and therefore leads to good coarsening of polygonal meshes. With the proliferation of 3D scanners, this coarsening algorithm is particularly useful for reverse engineering applications of 3D models, which in many cases are dense, non-uniform, irregular and of arbitrary topology. Examples demonstrating the effectiveness of the new algorithm are also included in the paper. Keywords: coarsening, mesh clustering, shape approximation, mesh simplificationProcedia PDF Downloads 230 476 Multimodal Optimization of Density-Based Clustering Using Collective Animal Behavior Algorithm Authors: Kristian Bautista, Ruben A. Idoy Abstract: A bio-inspired metaheuristic algorithm inspired by the theory of collective animal behavior (CAB) was integrated into density-based clustering modeled as a multimodal optimization problem. The algorithm was tested on synthetic, Iris, Glass, Pima and Thyroid data sets in order to measure its effectiveness relative to the CDE-based clustering algorithm. Upon preliminary testing, it was found out that one of the parameter settings used was ineffective in performing clustering when applied to the algorithm, prompting the researcher to do an investigation. It was revealed that fine tuning distance δ3, which determines the extent to which a given data point will be clustered, helped improve the quality of the cluster output. Even though the modification of distance δ3 significantly improved the solution quality and cluster output of the algorithm, results suggest that there is no difference between the population mean of the solutions obtained using the original and modified parameter settings for all data sets. This implies that using either the original or modified parameter setting will not have any effect towards obtaining the best global and local animal positions. Results also suggest that the CDE-based clustering algorithm is better than the CAB-density clustering algorithm for all data sets. Nevertheless, the CAB-density clustering algorithm is still a good clustering algorithm because it has correctly identified the number of classes of some data sets more frequently in a thirty trial run with a much smaller standard deviation, a potential in clustering high dimensional data sets. Thus, the researcher recommends further investigation in the post-processing stage of the algorithm.
Keywords: clustering, metaheuristics, collective animal behavior algorithm, density-based clustering, multimodal optimizationProcedia PDF Downloads 88 475 Chemical Reaction Algorithm for Expectation Maximization Clustering Authors: Li Ni, Pen ManMan, Li KenLi Abstract:Clustering is an intensive research for some years because of its multifaceted applications, such as biology, information retrieval, medicine, business and so on. The expectation maximization (EM) is a kind of algorithm framework in clustering methods, one of the ten algorithms of machine learning. Traditionally, optimization of objective function has been the standard approach in EM. Hence, research has investigated the utility of evolutionary computing and related techniques in the regard. Chemical Reaction Optimization (CRO) is a recently established method. So the property embedded in CRO is used to solve optimization problems. This paper presents an algorithm framework (EM-CRO) with modified CRO operators based on EM cluster problems. The hybrid algorithm is mainly to solve the problem of initial value sensitivity of the objective function optimization clustering algorithm. Our experiments mainly take the EM classic algorithm:k-means and fuzzy k-means as an example, through the CRO algorithm to optimize its initial value, get K-means-CRO and FKM-CRO algorithm. The experimental results of them show that there is improved efficiency for solving objective function optimization clustering problems. Keywords: chemical reaction optimization, expection maimization, initia, objective function clusteringProcedia PDF Downloads 482 474 Decision Trees Constructing Based on K-Means Clustering Algorithm Authors: Loai Abdallah, Malik Yousef Abstract:A domain space for the data should reflect the actual similarity between objects. Since objects belonging to the same cluster usually share some common traits even though their geometric distance might be relatively large. In general, the Euclidean distance of data points that represented by large number of features is not capturing the actual relation between those points. In this study, we propose a new method to construct a different space that is based on clustering to form a new distance metric. The new distance space is based on ensemble clustering (EC). The EC distance space is defined by tracking the membership of the points over multiple runs of clustering algorithm metric. Over this distance, we train the decision trees classifier (DT-EC). The results obtained by applying DT-EC on 10 datasets confirm our hypotheses that embedding the EC space as a distance metric would improve the performance. Keywords: ensemble clustering, decision trees, classification, K nearest neighborsProcedia PDF Downloads 72 473 A Non-parametric Clustering Approach for Multivariate Geostatistical Data Authors: Francky Fouedjio Abstract:Multivariate geostatistical data have become omnipresent in the geosciences and pose substantial analysis challenges. One of them is the grouping of data locations into spatially contiguous clusters so that data locations within the same cluster are more similar while clusters are different from each other, in some sense. Spatially contiguous clusters can significantly improve the interpretation that turns the resulting clusters into meaningful geographical subregions. In this paper, we develop an agglomerative hierarchical clustering approach that takes into account the spatial dependency between observations. 
It relies on a dissimilarity matrix built from a non-parametric kernel estimator of the spatial dependence structure of data. It integrates existing methods to find the optimal cluster number and to evaluate the contribution of variables to the clustering. The capability of the proposed approach to provide spatially compact, connected and meaningful clusters is assessed using bivariate synthetic dataset and multivariate geochemical dataset. The proposed clustering method gives satisfactory results compared to other similar geostatistical clustering methods. Keywords: clustering, geostatistics, multivariate data, non-parametricProcedia PDF Downloads 233 472 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs Authors: Taysir Soliman Abstract:One of the current popular clustering techniques is Spectral Clustering (SC) because of its advantages over conventional approaches such as hierarchical clustering, k-means, etc. and other techniques as well. However, one of the disadvantages of SC is the time consuming process because it requires computing the eigenvectors. In the past to overcome this disadvantage, a number of attempts have been proposed such as the Power Iteration Clustering (PIC) technique, which is one of versions from SC; some of PIC advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvectors instead of computing eigenvectors, and 3) linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem because it used only one pseudo-eigenvectors which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome problems of PIC technique on inter-class collision with the same efficiency of PIC. In this paper, we developed Parallel DPIC (PDPIC) to improve the time and memory complexity which is run on apache spark framework using sparse matrix. To test the performance of PDPIC, we compared it to SC, ESCG, ESCALG algorithms on four small graph benchmark datasets and nine large graph benchmark datasets, where PDPIC proved higher accuracy and better time consuming than other compared algorithms. Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache spark, large graphProcedia PDF Downloads 44 471 Agglomerative Hierarchical Clustering Using the Tθ Family of Similarity Measures Authors: Salima Kouici, Abdelkader Khelladi Abstract:In this work, we begin with the presentation of the Tθ family of usual similarity measures concerning multidimensional binary data. Subsequently, some properties of these measures are proposed. Finally, the impact of the use of different inter-elements measures on the results of the Agglomerative Hierarchical Clustering Methods is studied. Keywords: binary data, similarity measure, Tθ measures, agglomerative hierarchical clusteringProcedia PDF Downloads 334 470 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining Abstract:DNA microarray technology is used to analyze thousand gene expression data simultaneously and a very important task for drug development and test, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. 
However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithm has been used as an alternative approach to identifying structures from gene expression data. In this paper, we introduce a transform technique based on singular value decomposition to identify normalized matrix of gene expression data followed by Mixed-Clustering algorithm and the Lift algorithm, inspired in the node-deletion and node-addition phases proposed by Cheng and Church based on Agglomerative Hierarchical Clustering (AHC). Experimental study on standard datasets demonstrated the effectiveness of the algorithm in gene expression data. Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)Procedia PDF Downloads 157 469 An Improved K-Means Algorithm for Gene Expression Data Clustering Authors: Billel Kenidra, Mohamed Benmohammed Abstract:Data mining technique used in the field of clustering is a subject of active research and assists in biological pattern recognition and extraction of new knowledge from raw data. Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar between themselves and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers and a poor choice of centers may lead to a local optimum that is quite inferior to the global optimum, we propose a strategy to initiate K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results prove how the efficiency has been significantly improved. Keywords: microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initializationProcedia PDF Downloads 98 468 Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami Abstract:Clustering is a well known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can either contain categorical or numeric data. Each type of data has its own specific clustering algorithm. In this context, two algorithms are proposed: the k-means for clustering numeric datasets and the k-modes for categorical datasets. The main encountered problem in data mining applications is clustering categorical dataset so relevant in the datasets. One main issue to achieve the clustering process on categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead the k-modes. In this paper, it is proposed to experiment an approach based on the previous issue by transforming the categorical values into numeric ones using the relative frequency of each modality in the attributes. 
The proposed approach is compared with a previously method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are experimented. The obtained results show that our proposed method outperforms the binary method in all cases. Keywords: clustering, unsupervised learning, pattern recognition, categorical datasets, knowledge discovery, k-meansProcedia PDF Downloads 150 467 Generalization of Clustering Coefficient on Lattice Networks Applied to Criminal Networks Authors: Christian H. Sanabria-Montaña, Rodrigo Huerta-Quintanilla Abstract:A lattice network is a special type of network in which all nodes have the same number of links, and its boundary conditions are periodic. The most basic lattice network is the ring, a one-dimensional network with periodic border conditions. In contrast, the Cartesian product of d rings forms a d-dimensional lattice network. An analytical expression currently exists for the clustering coefficient in this type of network, but the theoretical value is valid only up to certain connectivity value; in other words, the analytical expression is incomplete. Here we obtain analytically the clustering coefficient expression in d-dimensional lattice networks for any link density. Our analytical results show that the clustering coefficient for a lattice network with density of links that tend to 1, leads to the value of the clustering coefficient of a fully connected network. We developed a model on criminology in which the generalized clustering coefficient expression is applied. The model states that delinquents learn the know-how of crime business by sharing knowledge, directly or indirectly, with their friends of the gang. This generalization shed light on the network properties, which is important to develop new models in different fields where network structure plays an important role in the system dynamic, such as criminology, evolutionary game theory, econophysics, among others. Keywords: clustering coefficient, criminology, generalized, regular network d-dimensionalProcedia PDF Downloads 224 466 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem Authors: Ouafa Amira, Jiangshe Zhang Abstract:Clustering is an unsupervised machine learning technique; its aim is to extract the data structures, in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as: image processing, computer vision , and pattern recognition, etc. Fuzzy c-means clustering (fcm) is one of the most well known fuzzy clustering methods. It is based on solving an optimization problem, in which a minimization of a given cost function has been studied. This minimization aims to decrease the dissimilarity inside clusters, where the dissimilarity here is measured by the distances between data objects and cluster centers. The degree of belonging of a data point in a cluster is measured by a membership function which is included in the interval [0, 1]. In fcm clustering, the membership degree is constrained with the condition that the sum of a data object’s memberships in all clusters must be equal to one. This constraint can cause several problems, specially when our data objects are included in a noisy space. Regularization approach took a part in fuzzy c-means clustering technique. This process introduces an additional information in order to solve an ill-posed optimization problem. 
In this study, we focus on regularization by relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree to each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results in synthetic data sets, gaussian based data sets, and real world data sets show that our proposed model achieves a good accuracy. Keywords: clustering, fuzzy c-means, regularization, relative entropyProcedia PDF Downloads 77 465 Max-Entropy Feed-Forward Clustering Neural Network Authors: Xiaohan Bookman, Xiaoyan Zhu Abstract:The outputs of non-linear feed-forward neural network are positive, which could be treated as probability when they are normalized to one. If we take Entropy-Based Principle into consideration, the outputs for each sample could be represented as the distribution of this sample for different clusters. Entropy-Based Principle is the principle with which we could estimate the unknown distribution under some limited conditions. As this paper defines two processes in Feed-Forward Neural Network, our limited condition is the abstracted features of samples which are worked out in the abstraction process. And the final outputs are the probability distribution for different clusters in the clustering process. As Entropy-Based Principle is considered into the feed-forward neural network, a clustering method is born. We have conducted some experiments on six open UCI data sets, comparing with a few baselines and applied purity as the measurement. The results illustrate that our method outperforms all the other baselines that are most popular clustering methods. Keywords: feed-forward neural network, clustering, max-entropy principle, probabilistic modelsProcedia PDF Downloads 288 464 Clustering of Extremes in Financial Returns: A Comparison between Developed and Emerging Markets Authors: Sara Ali Alokley, Mansour Saleh Albarrak Abstract:This paper investigates the dependency or clustering of extremes in the financial returns data by estimating the extremal index value θ∈[0,1]. The smaller the value of θ the more clustering we have. Here we apply the method of Ferro and Segers (2003) to estimate the extremal index for a range of threshold values. We compare the dependency structure of extremes in the developed and emerging markets. We use the financial returns of the stock market index in the developed markets of US, UK, France, Germany and Japan and the emerging markets of Brazil, Russia, India, China and Saudi Arabia. We expect that more clustering occurs in the emerging markets. This study will help to understand the dependency structure of the financial returns data. Keywords: clustring, extremes, returns, dependency, extermal indexProcedia PDF Downloads 273 463 An Energy Efficient Clustering Approach for Underwater Wireless Sensor Networks Authors: Mohammad Reza Taherkhani Abstract:Wireless sensor networks that are used to monitor a special environment, are formed from a large number of sensor nodes. The role of these sensors is to sense special parameters from ambient and to make a connection. In these networks, the most important challenge is the management of energy usage. Clustering is one of the methods that are broadly used to face this challenge. In this paper, a distributed clustering protocol based on learning automata is proposed for underwater wireless sensor networks. 
The proposed algorithm that is called LA-Clustering forms clusters in the same energy level, based on the energy level of nodes and the connection radius regardless of size and the structure of sensor network. The proposed approach is simulated and is compared with some other protocols with considering some metrics such as network lifetime, number of alive nodes, and number of transmitted data. The simulation results demonstrate the efficiency of the proposed approach.
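Several of the abstracts above use k-means and hierarchical agglomerative clustering (HAC) as core engines or baselines and evaluate them with indices such as the silhouette and Davies-Bouldin scores. As a generic, hedged illustration of that workflow (not code from any of the cited papers; the dataset and the range of k are assumptions), the sketch below runs both algorithms on the Iris data for several values of k.

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score, davies_bouldin_score

X, _ = load_iris(return_X_y=True)

# Compare k-means and HAC for several cluster counts using two
# internal validity indices (higher silhouette and lower DB are better).
for k in (2, 3, 4, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    hac = AgglomerativeClustering(n_clusters=k).fit_predict(X)
    print(f"k={k}  "
          f"KMeans: silhouette={silhouette_score(X, km):.2f}, "
          f"DB={davies_bouldin_score(X, km):.2f}  |  "
          f"HAC: silhouette={silhouette_score(X, hac):.2f}, "
          f"DB={davies_bouldin_score(X, hac):.2f}")
```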
https://publications.waset.org/abstracts/search?q=k-means%20clustering
The best statistical data analysis techniques for a data scientist to know. Statistical data analysis is a procedure of performing various statistical operations. It is a kind of quantitative research, which seeks to quantify the data and generally applies some form of statistical analysis. Quantitative data involves descriptive data, such as survey data and observational data. Analyzing statistical data usually involves statistical tools, which a layman cannot use without statistical knowledge. Here are the best techniques for analyzing statistical data.

Linear regression
Linear regression is a technique used to predict a target variable by finding the best linear relationship between the dependent and independent variables, where the best fit minimizes the sum of the distances between the fitted line and the actual observations at each data point. There are mainly two types of linear regression:
Simple linear regression: It uses a single independent variable to predict a dependent variable by providing the most appropriate linear correlation.
Multiple linear regression: It takes more than one independent variable to predict the dependent variable by providing the most appropriate linear relationship.

Classification
As a data mining technique, classification assigns specific categories to a collection of data to enable more meticulous predictions and analysis. The main classification techniques are:
Logistic regression: A regression analysis technique to be performed when the dependent variable is dichotomous (binary). It is a predictive analysis used to explain the data and the connection between a binary dependent variable and other nominal independent variables.
Discriminant analysis: In this analysis, two or more groups (populations) are known a priori and new observations are assigned to one of the known groups based on measured characteristics. It models the distribution of the "X" predictors separately in each of the response classes and uses Bayes' theorem to estimate the probability of each response class given the value of "X".

Resampling methods
Resampling is the approach of drawing repeated samples from the actual data; it is a nonparametric method of statistical inference. Based on the original data, it produces a new sampling distribution and uses experimental methods rather than analytical methods to generate that distribution. The main resampling techniques include:
Bootstrapping: Used for validating a predictive model and its performance, for ensemble methods, and for estimating the bias and variance of a model. It works by sampling with replacement from the actual data, treating the "unselected" data points as test samples.
Cross-validation: This technique is used to validate the performance of a model by dividing the training data into K parts. In each round, K-1 parts are used as the training set and the remaining part acts as the test set. The process is repeated K times, and the average of the K scores is taken as the performance estimate.
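As a hedged illustration of the classification and resampling ideas above (not from the original article; the dataset, iteration limit, and seed are assumptions), the sketch below validates a logistic regression with 5-fold cross-validation and then with a single bootstrap resample evaluated on the out-of-bag points.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Logistic regression (binary target) validated with 5-fold cross-validation:
# the data are split into K=5 parts; each part serves once as the test set
# while the remaining K-1 parts form the training set.
clf = LogisticRegression(max_iter=5000)
scores = cross_val_score(clf, X, y, cv=5)
print("Accuracy per fold:", np.round(scores, 3), "mean:", round(scores.mean(), 3))

# Bootstrapping: sample with replacement and evaluate on the
# "unselected" (out-of-bag) points.
rng = np.random.default_rng(0)
idx = rng.integers(0, len(X), size=len(X))      # bootstrap sample indices
oob = np.setdiff1d(np.arange(len(X)), idx)      # points never selected
clf.fit(X[idx], y[idx])
print("Out-of-bag accuracy:", round(clf.score(X[oob], y[oob]), 3))
```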
Tree-based methods
Tree-based methods are among the most commonly used techniques for regression and classification problems. They work by splitting the predictor space into several manageable regions, and are known as decision tree methods because the splitting rules used to fragment the predictor space can be displayed in a tree.
Bagging: It decreases the variance of the prediction by producing additional training sets from the actual data set through sampling with replacement, each of the same size as the original data. In reality, the predictive strength of the model cannot be improved by simply enlarging the training set, but the variance can be reduced by tightly fitting the prediction to the expected outcome.
Boosting: This approach computes the result through various models and then takes a weighted average of the results. By combining the strengths and weaknesses of the individual models with a suitable weighting formula, appropriate predictive efficiency can be achieved over an extended range of input data.

Unsupervised learning
Unsupervised learning techniques come into play when the groups or categories in the data are not known. Clustering and association rules are common examples of unsupervised learning, in which data are assembled into groups (categories) of strictly related elements.
Principal component analysis (PCA): PCA supports the generation of a low-dimensional representation of the dataset by finding linear combinations of features that are mutually uncorrelated and have maximum variance. In addition, it helps to uncover latent interactions between variables in an unsupervised setting.
K-means clustering: Based on the distance of each point to the cluster centroid, it separates the data into k dissimilar clusters.
Hierarchical clustering: By developing a tree structure of clusters, hierarchical clustering builds a hierarchy of clusters at several levels.
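A minimal sketch of these last two families (again an illustration added here, not the article's code; the dataset and hyperparameters are assumptions): a bagging-style ensemble and a boosting ensemble on a labeled dataset, followed by PCA and k-means on the unsupervised side.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Bagging-style ensemble (random forest) vs. a boosting ensemble.
bagged = RandomForestClassifier(n_estimators=200, random_state=0)
boosted = GradientBoostingClassifier(random_state=0)
print("Random forest CV accuracy:", cross_val_score(bagged, X, y, cv=5).mean())
print("Gradient boosting CV accuracy:", cross_val_score(boosted, X, y, cv=5).mean())

# Unsupervised side: project onto two principal components,
# then group the projected points into k=3 clusters.
X2 = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)
print("Cluster sizes:", [int((labels == k).sum()) for k in range(3)])
```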
https://guay-leroux.com/top-5-statistical-data-analysis-techniques-that-a-data-scientist-should-know/
Data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information (see prior blogs including The Data Information Hierarchy series). The term is overused and conjures impressions that do not reflect the true state of the industry. Knowledge Discovery from Databases (KDD) is more descriptive and not as misused – but the base meaning is the same. Nevertheless, this definition of data mining is a very general definition and does not convey the different aspects of data mining / knowledge discovery. The basic types of Data Mining are:
- Descriptive data mining, and
- Predictive data mining
Descriptive Data Mining generally seeks groups, subgroups and clusters. Algorithms are developed that draw associative relationships from which actionable results may be derived (i.e. a diamond-head snake should be considered poisonous). Generally, a descriptive data mining result will appear as a series of if – then – elseif – then … conditions. Alternatively, a system of scoring may be used, much like some magazine-based self-assessment exams. Regardless of the approach, the end result is a clustering of the samples with some measure of quality. Predictive Data Mining then performs an analysis on previous data to derive a prediction of the next outcome. For example: new business incorporations tend to look for credit card merchant solutions. This may seem obvious, but someone had to discover this tendency – and then exploit it. Data mining is ready for application in the business community because it is supported by three technologies that are now sufficiently mature: 1) massive data collection, 2) powerful multiprocessor computers, and 3) data mining algorithms (http://www.thearling.com/text/dmwhite/dmwhite.htm). Kurt Thearling identifies five types of data mining (definitions taken from Wikipedia):
A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal. If in practice decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by a probability model as a best choice model or online selection model algorithm. Another use of decision trees is as a descriptive means for calculating conditional probabilities.
Nearest neighbour or shortest distance is a method of calculating distances between clusters in hierarchical clustering. In single linkage, the distance between two clusters is computed as the distance between the two closest elements in the two clusters.
The term neural network was traditionally used to refer to a network or circuit of biological neurons. The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes.
Rule induction is an area of machine learning in which formal rules are extracted from a set of observations. The rules extracted may represent a full scientific model of the data, or merely represent local patterns in the data.
Cluster analysis or clustering is the task of assigning a set of objects into groups (called clusters) so that the objects in the same cluster are more similar (in some sense or another) to each other than to those in other clusters.
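As a rough, hedged illustration of the descriptive/predictive split (an addition, not part of the original post; the dataset and parameters are assumptions), the sketch below first clusters a dataset without labels (descriptive) and then fits a decision tree on labeled history to predict new outcomes and print its if-then rules (predictive).

```python
from sklearn.datasets import load_wine
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)

# Descriptive mining: find groups in the data without using labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Discovered cluster sizes:", [int((clusters == k).sum()) for k in range(3)])

# Predictive mining: learn from past (labeled) data to predict new outcomes,
# here with a decision tree whose if-then rules can be printed.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("Held-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree))  # the learned if-then-else conditions
```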
https://profreynolds.com/2012/03/10/thoughts-on-data-mining/
As a companion to my recent post “Correlation versus Causation: The Science, Art, and Magic of Experimental Design”, I wanted to offer a more technical exposition concerning data science approaches to focused causal model development. A fundamental question faced by business analytics professionals and data scientists is whether they have a working correlative and causal explanatory model related to the phenomenon they are observing, be it related to reducing manufacturing error rates, determining the cause of customer abandonment, reducing fraud, targeting marketing, realizing logistics efficiencies, etc. This is known as an experimental model in science or a conceptual model in broader research venues (e.g., the social sciences).
The increasing interest in analytics has led to a proliferation of powerful tools. The new tools increase the ease of conducting sophisticated data analysis. However, the danger is that inexperienced analysts take a shotgun approach, throwing data at a tool and leaping at any hint of statistical causal significance that emerges. For example, a sophomore data analyst might rush to notify management that s/he detected a strong correlation between the marketing budget and revenues, suggesting the marketing budget should be increased as much as possible. With a deeper examination of marketing efficacy in relation to mediating factors (e.g., macroeconomic trends, demographic features, competitive forces, trending consumer preferences, seasonality, weather), one will realize that marketing expenditures are rarely a constant, direct causal agent in revenue growth (and when they are a strong factor, only temporarily so). Otherwise, marketing would have an infinite budget and run most companies (though this might not stop them from trying to assert this right).
A fundamental question analytics and data professionals should ask is whether, at any particular point, there are sufficient grounds, based upon statistical significance, to put a proposed causal model into operational use (i.e., to recommend a decision path based on descriptive or predictive analysis, or to operationalize a prescriptive algorithm). In other words, if there is a working causal hypothesis or practical model in place, has it been sufficiently tested to establish statistical significance and validity? Much of this comes down to whether a structured analytical process was followed to establish experimental validity (statistical significance) for the experimental / conceptual model. Where did the experimental / conceptual model come from? Were the proper experts consulted? Were alternative explanations / hypotheses properly considered? Was there a deep enough examination of mediating and moderating variables, and grounds for establishing direct causation as opposed to correlation? Are there hidden, more fundamental factors at play that have been missed?
As an example of the importance of drilling down to fundamental causes, while we could stop by saying, for instance, “it is good for a surgeon to clean his/her hands after operating because we notice fewer subsequent infections”, we now know (because of sound scientific inquiry) that the causal factors of infection are bacterial and viral agents. A deeper understanding of microbiology (in particular viruses and bacteria) allows us to also prescribe the sterilization of operating instruments and the operating theater. As well, we know the simple efficacy of washing one’s hands in reducing the transmission of the flu.
When we stop at noticing a correlation (i.e., clean hands = fewer infections), we not only forego broader understanding, but we potentially continue perpetrating serious errors (e.g., not sterilizing surgical instruments). To turn our attention more directly to data science and analytics, the analysis of data should follow a methodical process of iteratively strengthening a conceptual model through staged statistical and algorithmic analysis. A major division concerns whether there are suitable grounds for segmenting a dataset prior to applying statistical analysis (e.g., customers, manufacturing errors, credit card transactions, etc.), or whether there is a lack of understanding concerning the operative correlative factors in terms of grouping. The following two fundamental approaches should provide some guidance in the development of an operational explanatory model: if there is little understanding of the nature of the correlative factors which suggest groupings or clusters of phenomena (e.g., customer categories), unsupervised learning should be applied to segment or cluster fundamental statistical categories. If there is an understanding of fundamental categories, supervised learning should be used to profile segmented groups for prediction and prescriptive treatments.
1. Unsupervised Learning: segmentation / clustering
Unsupervised learning to cluster or segment a dataset should be the first step if there is no working understanding of the phenomenon at play (the base correlative interaction of variables) and no classification labels. Such an approach should be used in cases where there is a large enough dataset and there is a core phenomenon of interest (e.g., declining sales, increasing fraud), but no clear primary understanding of how the component variables correlate amongst themselves (e.g., how meaningful groups of customers are identified, or how observed variables on the assembly line contribute to the error rate or not). Unsupervised techniques are those techniques that aggregate patterns based on statistical similarity. Such an approach is applicable where there is no labeled training set (i.e., where those groups of customers who are at risk for fraud are not yet segmented into meaningful groups). Clustering algorithms are specifically used to identify unique segments of a population and to depict the common attributes of members of a cluster in relation to the target phenomenon. The goal is to generate or extract classification labels automatically (hence the term unsupervised). This approach is useful when an analyst has no idea how to segment the population (e.g., customers) in relation to the phenomenon (e.g., purchasing or fraud). Running a clustering technique is a good first step to see how the elements in a dataset relate to one another in unique groups. Such techniques are used regularly in marketing analysis to extrapolate meaningful categories of customers, which can then be targeted independently with tailored sales and marketing messages. The types of statistical / analytical techniques available include:
- Hierarchical cluster analysis
- O-Cluster (proprietary Oracle algorithm)
- Kohonen Networks / Self-Organizing Maps
2. Supervised Learning: profiling segmented groups
Once a training set has been labeled (e.g., customers who have or have not purchased in the last year have been identified in particular customer groups, or distinct groups of customers interested in a product have been segmented), supervised learning techniques can be applied.
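Before moving to the supervised step, here is a minimal sketch of the unsupervised segmentation-and-profiling idea described above, on synthetic "customer" data (the column names, distributions, and cluster count are illustrative assumptions, not from the original post):

```python
# Minimal sketch of unsupervised segmentation on synthetic customer data,
# followed by a simple profile of each discovered segment.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
customers = pd.DataFrame({
    "recency_days": rng.exponential(30, 500),
    "frequency": rng.poisson(5, 500),
    "monetary": rng.gamma(2.0, 150.0, 500),
})

# Standardize, then extract segments without any predefined labels.
X = StandardScaler().fit_transform(customers)
customers["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Profile each discovered segment by the average attributes of its members.
print(customers.groupby("segment").mean().round(1))
```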
At this point, there is an existing notion of how to segment the population (e.g., customers) and the analyst would like to implement some type of automatic procedure or operational approach (e.g., automatic fraud risk assessment, marketing messaging). Supervised techniques learn a pre-defined answer based on the segmented groups (e.g., fraud / non-fraud customers, buying / non-buying sales prospects) and provide a method for new instances to be assessed based on the 'trained' algorithm, structure, or facility.
A. Classical Supervised Learning
Classical methods are useful in determining how a classification is made, explaining how the model is composed, or determining what influencers can be centrally attributed to a category (e.g., factors which predispose fraudulent behavior). Such techniques are useful for gaining a better understanding of the specific causal and correlative factors, and via this understanding to guide decision making related to future phenomena. Thus, this type of evaluation has both explanatory and predictive power, allowing for prescriptive operationalization as well as progressive targeting and micro-segmentation.
- Structural equation modeling
- K-Means
- Recency, Frequency, Monetary (RFM) (customer value)
- LDA (Linear Discriminant Analysis)
- Decision Trees (DT)
  - Chi-squared Automatic Interaction Detection (CHAID)
  - Boosted trees / gradient boosting
  - C&RT / CART (Classification & Regression Trees)
  - QUEST / Supervised Learning In Quest (SLIQ)
B. Advanced Supervised Learning
Advanced methods allow for automation when the analyst is not interested in explanatory logic, just operationalizing prediction: a prescriptive solution. Such approaches allow for automated procedures such as real-time online customer approval or real-time flow control on an assembly line.
- Support Vector Machine (Linear and Kernel) / Support Vector Networks
- Ensembles / Ensemble Learning
Bringing It All Together: Continual Refinement
Taken together, this range of unsupervised and supervised techniques ideally iterates in a cyclical fashion to refine progressive understanding and to optimize actions. A segmentation strategy should evolve over time and incorporate feedback from earlier cycles, continually assessing new instances to modify the model. The segmentation results can then be used to refine profiling and action. While an initial cycle may be forced to rely on unsupervised learning, plan (if possible) to track the behavior/response of the outcomes (e.g., the behavior / reaction of customers or the result of an error reduction technique). You can then use that data to generate a subsequent prediction concerning the likelihood of a positive response (in an iterative fashion). On a future sample, you can generate a predicted likelihood of success for each segment. For segments with low likelihood, you can pilot a different approach (changing the message, channel, incentive, etc.) and measure whether the observed response exceeds the expected level. Continuing to follow this process over multiple cycles, an "optimized" strategy (e.g., marketing approach, fraud reduction approach) should emerge where each segment is targeted with the type of treatment (e.g., marketing communication, credit approval) most likely to yield positive results with the ever-refined segments.
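A hedged sketch of the supervised "profile, predict, then pilot" cycle just described, using a gradient-boosted classifier on synthetic labeled data (the dataset and the 0.3 action threshold are illustrative assumptions, not prescriptions):

```python
# Sketch of the supervised step: train on labeled outcomes from a previous
# cycle and score new instances; segments with low predicted likelihood are
# flagged for a different treatment in the next cycle.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=6, random_state=1)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.25, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

# Predicted likelihood of a positive response for each new instance.
p_response = model.predict_proba(X_new)[:, 1]

# Instances below the (assumed) threshold get piloted with a different
# message, channel, or incentive, and the observed response is compared
# against the expected level.
needs_new_treatment = p_response < 0.3
print(f"{needs_new_treatment.sum()} of {len(X_new)} instances flagged for a pilot treatment")
```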
https://sctr7.com/2013/08/17/data-science-as-an-experimental-process-unsupervised-and-supervised-learning/?replytocom=555
In addition, we propose several machine learning models that serve as contributions to solving biological problems. First, we present Zseq, a linear-time method that identifies the most informative genomic sequences and reduces the number of biased sequences, sequence duplications, and ambiguous nucleotides. Zseq finds the complexity of the sequences by counting the number of unique k-mers in each sequence as its corresponding score, and also takes into account other factors, such as ambiguous nucleotides or a high GC-content percentage in k-mers. Based on a z-score threshold, Zseq sweeps through the sequences again and filters those with a z-score less than the user-defined threshold. Zseq is able to provide a better mapping rate; it reduces the number of ambiguous bases significantly in comparison with other methods. Evaluation of the filtered reads has been conducted by aligning the reads and assembling the transcripts using the reference genome as well as de novo assembly. The assembled transcripts show a better discriminative ability to separate cancer and normal samples in comparison with another state-of-the-art method.
Studying the abundance of select mRNA species throughout prostate cancer progression may provide some insight into the molecular mechanisms that advance the disease. In the second contribution of this dissertation, we show that the combination of a proper clustering method, distance function, and cluster validation index is suitable for identifying outlier transcripts, i.e., transcripts whose trend (their abundance throughout the different stages of prostate cancer) differs from that of the majority of transcripts. We compare this model with a standard hierarchical time-series clustering method based on Euclidean distance. Using time-series profile hierarchical clustering methods, we identified stage-specific mRNA species, termed outlier transcripts, that exhibit unique trending patterns compared to most other transcripts during disease progression. This method is able to identify those outliers rather than finding patterns among the trending transcripts, unlike the hierarchical clustering method based on Euclidean distance. A wet-lab experiment on a biomarker (CAM2G gene) confirmed the result of the computational model. Genes related to these outlier transcripts were found to be strongly associated with cancer, and in particular, prostate cancer. Further investigation of these outlier transcripts in prostate cancer may identify them as potential stage-specific biomarkers that can predict the progression of the disease.
Breast cancer, on the other hand, is a widespread type of cancer in females and accounts for a large share of cancer cases and deaths worldwide. Identifying the subtype of breast cancer plays a crucial role in selecting the best treatment. In the third contribution, we propose an optimized hierarchical classification model that is used to predict the breast cancer subtype. Suitable filter feature selection methods and new hybrid feature selection methods are utilized to find discriminative genes. Our proposed model achieves 100% accuracy for predicting the breast cancer subtypes using the same or even fewer genes. Studying breast cancer survivability among different patients who received various treatments may help us understand the relationship between survivability and treatment therapy based on gene expression.
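As a toy illustration of the filtering idea behind Zseq described above (unique k-mer counts as a complexity score followed by a z-score cut), the following sketch is not the published Zseq implementation; the k value, the threshold, and the example reads are arbitrary:

```python
# Toy sketch: score each read by its number of distinct (non-ambiguous)
# k-mers, then drop reads whose z-score falls below a threshold.
import numpy as np

def kmer_complexity(seq, k=4):
    """Number of distinct k-mers in a read, ignoring k-mers with ambiguous bases."""
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    return len({km for km in kmers if "N" not in km})

def zscore_filter(reads, k=4, z_threshold=-0.5):
    scores = np.array([kmer_complexity(r, k) for r in reads], dtype=float)
    z = (scores - scores.mean()) / scores.std()
    return [r for r, zi in zip(reads, z) if zi >= z_threshold]

reads = ["ACGTACGTACGTACGT", "AAAAAAAAAAAAAAAA", "ACGTNNNNACGTACGT", "ACGGTCAGTCCGATTA"]
print(zscore_filter(reads))   # the low-complexity homopolymer read is removed
```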
In the fourth contribution, we have built a classifier system that predicts whether a given breast cancer patient who underwent some form of treatment (hormone therapy, radiotherapy, or surgery) will survive beyond five years after the treatment. Our classifier is a tree-based hierarchical approach that partitions breast cancer patients based on survivability classes; each node in the tree is associated with a treatment therapy and finds a predictive subset of genes that can best predict whether a given patient will survive after that particular treatment. We applied our tree-based method to a gene expression dataset consisting of 347 treated breast cancer patients and identified potential biomarker subsets with prediction accuracies ranging from 80.9% to 100%. We have further investigated the roles of many of these biomarkers through the literature.
Studying gene expression through various time intervals of breast cancer survival may provide insights into the recovery of the patients. Discovery of gene indicators can be a crucial step in predicting survivability and the handling of breast cancer patients. In the fifth contribution, we propose a hierarchical clustering method to separate dissimilar groups of genes in time-series data as outliers. These isolated outliers, genes that trend differently from other genes, can serve as potential biomarkers of breast cancer survivability.
In the last contribution, we introduce a method that uses machine learning techniques to identify transcripts that correlate with prostate cancer development and progression. We have isolated transcripts that have the potential to serve as prognostic indicators and may have significant value in guiding treatment decisions. Our study also supports PTGFR, NREP, scaRNA22, DOCK9, FLVCR2, IK2F3, USP13, and CLASP1 as potential biomarkers to predict prostate cancer progression, especially between stage II and subsequent stages of the disease.
Alkhateeb, Abedalrhman, "Machine Learning Approaches for Cancer Analysis" (2018). Electronic Theses and Dissertations. 7597.
https://scholar.uwindsor.ca/etd/7597/
That was quite a learning curve for me. I quickly realized as a data scientist how important it is to segment customers so my organization can tailor and build targeted strategies. This is where the concept of clustering came in ever so handy! Problems like segmenting customers are often deceptively tricky because we are not working with any target variable in mind. We are officially in the land of unsupervised learning, where we need to figure out patterns and structures without a set outcome in mind. It's both challenging and thrilling as a data scientist.
Now, there are a few different ways to perform clustering (as you'll see below). I will introduce you to one such type in this article – hierarchical clustering. We will learn what hierarchical clustering is, its advantage over the other clustering algorithms, the different types of hierarchical clustering and the steps to perform it. We will finally take up a customer segmentation dataset and then implement hierarchical clustering in Python. I love this technique and I'm sure you will too after this article!
Note: As mentioned, there are multiple ways to perform clustering. I encourage you to check out our awesome guide to the different types of clustering. To learn more about clustering and other machine learning algorithms (both supervised and unsupervised), check out the following comprehensive program.
Table of Contents
- Supervised vs Unsupervised Learning
- Why Hierarchical Clustering?
- What is Hierarchical Clustering?
- Types of Hierarchical Clustering
- Agglomerative Hierarchical Clustering
- Divisive Hierarchical Clustering
- Steps to perform Hierarchical Clustering
- How to Choose the Number of Clusters in Hierarchical Clustering?
- Solving a Wholesale Customer Segmentation Problem using Hierarchical Clustering
Supervised vs Unsupervised Learning
It's important to understand the difference between supervised and unsupervised learning before we dive into hierarchical clustering. Let me explain this difference using a simple example. Suppose we want to estimate the count of bikes that will be rented in a city every day: Or, let's say we want to predict whether a person on board the Titanic survived or not: We have a fixed target to achieve in both these examples:
- In the first example, we have to predict the count of bikes based on features like the season, holiday, workingday, weather, temp, etc.
- We are predicting whether a passenger survived or not in the second example. In the 'Survived' variable, 0 represents that the person did not survive and 1 means the person did make it out alive. The independent variables here include Pclass, Sex, Age, Fare, etc.
So, when we are given a target variable (count and Survival in the above two cases) which we have to predict based on a given set of predictors or independent variables (season, holiday, Sex, Age, etc.), such problems are called supervised learning problems. Let's look at the figure below to understand this visually: Here, y is our dependent or target variable, and X represents the independent variables. The target variable is dependent on X and hence it is also called a dependent variable. We train our model using the independent variables under the supervision of the target variable, hence the name supervised learning. Our aim, when training the model, is to generate a function that maps the independent variables to the desired target. Once the model is trained, we can pass new sets of observations and the model will predict the target for them.
This, in a nutshell, is supervised learning. There might be situations when we do not have any target variable to predict. Such problems, without any explicit target variable, are known as unsupervised learning problems. We only have the independent variables and no target/dependent variable in these problems. We try to divide the entire data into a set of groups in these cases. These groups are known as clusters and the process of making these clusters is known as clustering. This technique is generally used for clustering a population into different groups. A few common examples include segmenting customers, clustering similar documents together, recommending similar songs or movies, etc. There are a LOT more applications of unsupervised learning. If you come across any interesting application, feel free to share it in the comments section below! Now, there are various algorithms that help us to make these clusters. The most commonly used clustering algorithms are K-means and hierarchical clustering.
Why Hierarchical Clustering?
We should first know how K-means works before we dive into hierarchical clustering. Trust me, it will make the concept of hierarchical clustering all the easier. Here's a brief overview of how K-means works:
- Decide the number of clusters (k)
- Select k random points from the data as centroids
- Assign all the points to the nearest cluster centroid
- Calculate the centroids of the newly formed clusters
- Repeat steps 3 and 4
It is an iterative process. It will keep on running until the centroids of the newly formed clusters do not change or the maximum number of iterations is reached. But there are certain challenges with K-means. It always tries to make clusters of the same size. Also, we have to decide the number of clusters at the beginning of the algorithm. Ideally, we would not know how many clusters we should have at the beginning of the algorithm, and hence this is a challenge with K-means. This is a gap hierarchical clustering bridges with aplomb. It takes away the problem of having to pre-define the number of clusters. Sounds like a dream! So, let's see what hierarchical clustering is and how it improves on K-means.
What is Hierarchical Clustering?
Let's say we have the below points and we want to cluster them into groups: We can assign each of these points to a separate cluster: Now, based on the similarity of these clusters, we can combine the most similar clusters together and repeat this process until only a single cluster is left: We are essentially building a hierarchy of clusters. That's why this algorithm is called hierarchical clustering. I will discuss how to decide the number of clusters in a later section. For now, let's look at the different types of hierarchical clustering.
Types of Hierarchical Clustering
There are mainly two types of hierarchical clustering:
- Agglomerative hierarchical clustering
- Divisive hierarchical clustering
Let's understand each type in detail.
Agglomerative Hierarchical Clustering
We assign each point to an individual cluster in this technique. Suppose there are 4 data points. We will assign each of these points to a cluster and hence will have 4 clusters in the beginning: Then, at each iteration, we merge the closest pair of clusters and repeat this step until only a single cluster is left: We are merging (or adding) the clusters at each step, right? Hence, this type of clustering is also known as additive hierarchical clustering.
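Looping back to the K-means overview above, a bare-bones NumPy version of that loop might look like the following (purely illustrative; it assumes no cluster ever goes empty):

```python
# Minimal K-means loop: pick k random centroids, assign points, recompute
# centroids, and repeat until the centroids stop moving.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]      # step 2
    for _ in range(n_iter):
        # step 3: assign every point to its nearest centroid
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None, :], axis=2), axis=1)
        # step 4: recompute the centroid of each newly formed cluster
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):                  # stop when stable
            break
        centroids = new_centroids
    return labels, centroids

X = np.vstack([np.random.randn(50, 2) + [0, 0], np.random.randn(50, 2) + [5, 5]])
labels, centroids = kmeans(X, k=2)
print(centroids.round(2))
```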
Divisive Hierarchical Clustering
Divisive hierarchical clustering works in the opposite way. Instead of starting with n clusters (in the case of n observations), we start with a single cluster and assign all the points to that cluster. So, it doesn't matter if we have 10 or 1000 data points. All these points will belong to the same cluster at the beginning: Now, at each iteration, we split the farthest point in the cluster and repeat this process until each cluster only contains a single point: We are splitting (or dividing) the clusters at each step, hence the name divisive hierarchical clustering. Agglomerative clustering is widely used in the industry and will be the focus of this article. Divisive hierarchical clustering will be a piece of cake once we have a handle on the agglomerative type.
Steps to Perform Hierarchical Clustering
We merge the most similar points or clusters in hierarchical clustering – we know this. Now the question is – how do we decide which points are similar and which are not? It's one of the most important questions in clustering! Here's one way to calculate similarity – take the distance between the centroids of these clusters. The points having the least distance are referred to as similar points and we can merge them. We can refer to this as a distance-based algorithm as well (since we are calculating the distances between the clusters). In hierarchical clustering, we have a concept called a proximity matrix. This stores the distances between each pair of points. Let's take an example to understand this matrix as well as the steps to perform hierarchical clustering.
Setting up the Example
Suppose a teacher wants to divide her students into different groups. She has the marks scored by each student in an assignment and, based on these marks, she wants to segment them into groups. There's no fixed target here as to how many groups to have. Since the teacher does not know what type of students should be assigned to which group, it cannot be solved as a supervised learning problem. So, we will try to apply hierarchical clustering here and segment the students into different groups. Let's take a sample of 5 students:
Creating a Proximity Matrix
First, we will create a proximity matrix which will tell us the distance between each of these points. Since we are calculating the distance of each point from each of the other points, we will get a square matrix of shape n x n (where n is the number of observations). Let's make the 5 x 5 proximity matrix for our example: The diagonal elements of this matrix will always be 0, as the distance of a point from itself is always 0. We will use the Euclidean distance formula to calculate the rest of the distances. So, let's say we want to calculate the distance between points 1 and 2: √((10-7)²) = √9 = 3. Similarly, we can calculate all the distances and fill in the proximity matrix.
Steps to Perform Hierarchical Clustering
Step 1: First, we assign all the points to an individual cluster: Different colors here represent different clusters. You can see that we have 5 different clusters for the 5 points in our data.
Step 2: Next, we will look at the smallest distance in the proximity matrix and merge the points with the smallest distance. We then update the proximity matrix: Here, the smallest distance is 3 and hence we will merge points 1 and 2: Let's look at the updated clusters and accordingly update the proximity matrix: Here, we have taken the maximum of the two marks (7, 10) to replace the mark for this cluster.
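A small sketch of building that proximity matrix with SciPy; the marks other than the 10 and 7 used in the worked distance are assumed values:

```python
# Build the 5 x 5 proximity (distance) matrix for the student-marks example.
import numpy as np
from scipy.spatial.distance import pdist, squareform

marks = np.array([[10.0], [7.0], [28.0], [20.0], [35.0]])   # one mark per student (assumed)
proximity = squareform(pdist(marks, metric="euclidean"))

print(proximity)   # diagonal is 0; entry (0, 1) is |10 - 7| = 3
```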
Instead of the maximum, we can also take the minimum value or the average value. Now, we will again calculate the proximity matrix for these clusters:
Step 3: We will repeat step 2 until only a single cluster is left. So, we will first look at the minimum distance in the proximity matrix and then merge the closest pair of clusters. We will get the merged clusters as shown below after repeating these steps: We started with 5 clusters and finally have a single cluster. This is how agglomerative hierarchical clustering works. But the burning question still remains – how do we decide the number of clusters? Let's understand that in the next section.
How should we Choose the Number of Clusters in Hierarchical Clustering?
Ready to finally answer this question that's been hanging around since we started learning? To get the number of clusters for hierarchical clustering, we make use of an awesome concept called a dendrogram. A dendrogram is a tree-like diagram that records the sequences of merges or splits. Let's get back to our teacher-student example. Whenever we merge two clusters, a dendrogram will record the distance between these clusters and represent it in graph form. Let's see what a dendrogram looks like: We have the samples of the dataset on the x-axis and the distance on the y-axis. Whenever two clusters are merged, we will join them in this dendrogram and the height of the join will be the distance between these points. Let's build the dendrogram for our example: Take a moment to process the above image. We started by merging samples 1 and 2, and the distance between these two samples was 3 (refer to the first proximity matrix in the previous section). Let's plot this in the dendrogram: Here, we can see that we have merged samples 1 and 2. The vertical line represents the distance between these samples. Similarly, we plot all the steps where we merged the clusters and finally, we get a dendrogram like this: We can clearly visualize the steps of hierarchical clustering. The greater the height of the vertical lines in the dendrogram, the greater the distance between those clusters.
Now, we can set a threshold distance and draw a horizontal line (generally, we try to set the threshold in such a way that it cuts the tallest vertical line). Let's set this threshold as 12 and draw a horizontal line: The number of clusters will be the number of vertical lines which are intersected by the line drawn using the threshold. In the above example, since the red line intersects 2 vertical lines, we will have 2 clusters. One cluster will contain the samples (1, 2, 4) and the other the samples (3, 5). Pretty straightforward, right? This is how we can decide the number of clusters using a dendrogram in hierarchical clustering. In the next section, we will implement hierarchical clustering which will help you to understand all the concepts that we have learned in this article.
Solving the Wholesale Customer Segmentation problem using Hierarchical Clustering
Time to get our hands dirty in Python! We will be working on a wholesale customer segmentation problem. You can download the dataset using this link. The data is hosted on the UCI Machine Learning repository. The aim of this problem is to segment the clients of a wholesale distributor based on their annual spending on diverse product categories, like fresh produce, milk, grocery, etc. Let's explore the data first and then apply hierarchical clustering to segment the clients.
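Before moving to the wholesale data, here is a small SciPy sketch of the dendrogram-and-threshold idea for the student-marks example above. The marks are assumed, and because SciPy's standard linkages differ slightly from the max-of-marks update used in the worked example, the exact merge heights (and therefore the flat clusters at a given threshold) may differ:

```python
# Draw a dendrogram for the toy marks data and cut it with a distance threshold.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

marks = np.array([[10.0], [7.0], [28.0], [20.0], [35.0]])
Z = linkage(marks, method="complete")          # merge heights recorded in Z

dendrogram(Z, labels=[1, 2, 3, 4, 5])
plt.axhline(y=12, color="r", linestyle="--")   # threshold line
plt.ylabel("distance")
plt.show()

# Flat clusters obtained by cutting the dendrogram at that threshold.
print(fcluster(Z, t=12, criterion="distance"))
```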
We will first import the required libraries: Load the data and look at the first few rows: There are multiple product categories – Fresh, Milk, Grocery, etc. The values represent the number of units purchased by each client for each product. Our aim is to make clusters from this data that can segment similar clients together. We will, of course, use hierarchical clustering for this problem. But before applying hierarchical clustering, we have to normalize the data so that the scale of each variable is the same. Why is this important? Well, if the scale of the variables is not the same, the model might become biased towards the variables with a higher magnitude like Fresh or Milk (refer to the above table). So, let's first normalize the data and bring all the variables to the same scale: Here, we can see that the scale of all the variables is almost similar. Now, we are good to go. Let's first draw the dendrogram to help us decide the number of clusters for this particular problem: The x-axis contains the samples and the y-axis represents the distance between these samples. The vertical line with maximum distance is the blue line and hence we can decide on a threshold of 6 and cut the dendrogram: We have two clusters as this line cuts the dendrogram at two points. Let's now apply hierarchical clustering for 2 clusters: We can see the values of 0s and 1s in the output since we defined 2 clusters. 0 represents the points that belong to the first cluster and 1 represents points in the second cluster. Let's now visualize the two clusters: Awesome! We can clearly visualize the two clusters here. This is how we can implement hierarchical clustering in Python.
End Notes
Hierarchical clustering is a super useful way of segmenting observations. The advantage of not having to pre-define the number of clusters gives it quite an edge over k-means. If you are still relatively new to data science, I highly recommend taking the Applied Machine Learning course. It is one of the most comprehensive end-to-end machine learning courses you will find anywhere. Hierarchical clustering is just one of a diverse range of topics we cover in the course. What are your thoughts on hierarchical clustering? Do you feel there's a better way to create clusters using less computational resources? Connect with me in the comments section below and let's discuss!
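The post's own code snippets are not reproduced in this excerpt; a sketch of the steps it walks through might look like the following (the CSV filename and the Milk/Grocery columns assume the UCI "Wholesale customers" schema, and the threshold of 6 mirrors the text; exact plots and cluster assignments will depend on the data and library versions):

```python
# Wholesale customer segmentation: load, normalize, inspect the dendrogram,
# then fit agglomerative clustering with 2 clusters and visualize them.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import linkage, dendrogram

data = pd.read_csv("Wholesale customers data.csv")   # assumed local filename
print(data.head())

# Bring all variables to a comparable scale before clustering.
data_scaled = pd.DataFrame(normalize(data), columns=data.columns)

# Dendrogram to choose the number of clusters (cut around a distance of 6).
plt.figure(figsize=(10, 6))
dendrogram(linkage(data_scaled, method="ward"))
plt.axhline(y=6, color="r", linestyle="--")
plt.show()

# Two clusters, as suggested by the dendrogram cut.
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(data_scaled)

# Visualize the clusters on two of the spending columns.
plt.scatter(data_scaled["Milk"], data_scaled["Grocery"], c=labels)
plt.xlabel("Milk")
plt.ylabel("Grocery")
plt.show()
```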
https://www.analyticsvidhya.com/blog/2019/05/beginners-guide-hierarchical-clustering/
Asthma and most chronic airway diseases are heterogeneous entities. Even severe asthma does not represent a single phenotype of asthma. This heterogeneity of phenotypes is the starting point of new approaches for the characterisation, understanding and management of asthma in the near future. The ultimate approach will be to identify new phenotypes sharing coherent underlying biological mechanisms (i.e. the concept of endotypes) to better predict future risks. Ultimately, new specific, targeted or personalised therapeutic avenues and management will be developed and used based on these new groupings of patients. The main challenge is to avoid pre-established hypotheses. An unbiased approach to phenotyping is appealing in view of these ambitious goals.
The increased popularity of cluster analysis in asthma benefits from the development of cohorts of asthmatic patients worldwide [1, 3–5]. For example, clusters derived from the Severe Asthma Research Program (SARP) initiative in the USA have been well received and disseminated within the severe asthma community. This phenomenon can be explained by different factors: the approach was found original, the expert centres involved were excellent, and the findings and statistics were of great value. The items chosen in the hierarchical tree accounted for 85% of the variance of the clusters. Despite the important interest demonstrated by specialists, most physicians are unable to fully understand the clustering process, and its translation into the real world to treat severe asthma is not obvious. The hypothesis-driven clustering approach used in the UK led the authors to propose a cluster-based specific management built on the coherence between eosinophilic inflammation, based on cellular percentage in induced sputum, and inhaled corticosteroid (ICS)-dose adjustment. In some small subgroups of patients, they used their results to safely step down ICS in non-eosinophilic patients, mainly obese females with uncontrolled asthma. In eosinophilic asthma patients, the induced sputum-based ICS-dose management improved asthma control, leading to a decrease in the number of severe asthma exacerbations. Nevertheless, we expect the clustering approaches to anticipate future risks (exacerbation, decline in lung function) and try to translate personalised medicine into reality.
Physicians' daily practice is a real phenotyping exercise. This solid experience raises suspicion towards non-human statistical methods intended to validate our daily expertise. Usually, clustering reduces data to their mean differences, which does not necessarily reflect reality, especially the complexity of biological data. Moreover, this method is limited by the quality of data implementation. Implementing complex data (clinical history and follow-up, patients' outcomes, imaging and complex biology such as -omics approaches) will potentially lead to one cluster for each single individual with his/her unique phenotype. Other methods have been tested before Ward's clustering approach, such as principal component analysis and varimax rotations. They led to meaningful reports but, to date, none has really surpassed the others. Lastly, a holistic approach is advocated under the “unsupervised unbiased” label to avoid the selection bias introduced by a priori definitions of the disease. In this issue of the European Respiratory Journal, Kim et al. successfully used a clustering method to reach a potential “Holy Grail”.
For this purpose, they gathered data derived from two Korean severe asthma cohorts, for a total of more than 2500 patients. They described four phenotypes across the two cohorts, which shared very similar patterns. This manuscript is of interest as it reports, for the first time, a clustering approach in asthma arising from Asia. Asthma was defined based on a World Health Organization definition and the data were collected on different occasions, which strengthened the impact of the study. The four clusters were mostly discriminated by a three-axis components' scattergram comprising forced expiratory volume in 1 s (FEV1), age of onset and smoking. The latter is not often reported in studies, as smoking is usually considered an exclusion criterion in asthma; however, smoking and asthma represent a frequent clinical challenge for clinicians. Accordingly, this clustering approach may be more applicable to our daily practice. The four clusters are reproduced in two large cohorts, they are relevant, and they efficiently discriminate patients based on their past history (duration of the disease without anti-inflammatory treatment) and clinical examination (body mass index, rhinitis). Longitudinal assessment was used to investigate lung function decline. The authors report that pre-bronchodilator FEV1 does not decline in the follow-up year in all the clusters.
A potential limitation of the present study is the choice of items which contribute to the clusters, including the absence of the consumption of healthcare resources, comorbid conditions, inflammation and, obviously, treatments. The change in treatment and management may affect the validity of the longitudinal findings. Furthermore, this is a supervised cluster analysis, which dampens the potential for comparison with other cluster reports, such as SARP. Lastly, the clustering approach is an elegant way to better understand asthma using cohort data collection, yet clinical use in daily practice in a primary or secondary care setting remains a matter of debate. As long as clinicians remain capable of seeing human beings as unique individuals, the scientific substrate for personalised medicine will remain their responsibility. Otherwise, predictive medicine, supported by any statistical method, will irremediably lose its credibility: “back to the trees”.
Footnotes: Statement of Interest: None declared.
https://erj.ersjournals.com/content/41/6/1247
After a great weekend with our study group, we've finished K-Means Clustering and Hierarchical Clustering. In classification, your model tries to predict two or more labels that you already know of (e.g. it learns from past customer data whether a future customer is going to buy your product or not). Clustering is explorative. You don't know the output. What the model does is put data with certain patterns into clusters. The important thing is figuring out the appropriate number of clusters. For K-Means Clustering we used the Elbow method to find the optimal number. For Hierarchical Clustering we built so-called dendrograms. It was quite interesting to apply this method and we were all pretty excited about it. K-Means was simple to understand and easily adaptable. It works well on both small and large datasets. Hierarchical Clustering is not appropriate for large datasets, but the optimal number of clusters can be obtained from the model itself. We're ready to move on.
A quick note: We're a group of people with diverse backgrounds, ranging from software engineering to computational linguistics and business. We're following the Udemy Machine Learning A-Z™: Hands-On Python & R in Data Science course. This course is giving us a good overview, without going too deep. After completing it we will be able to dive deeper into specific fields of interest, such as robotics, reinforcement learning and NLP.
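For reference, the Elbow method mentioned above can be sketched in a few lines of Python on synthetic data (purely illustrative):

```python
# Elbow method: plot within-cluster sum of squares (inertia) against k and
# look for the "elbow" where adding more clusters stops paying off.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=400, centers=4, random_state=7)

inertias = [KMeans(n_clusters=k, n_init=10, random_state=7).fit(X).inertia_
            for k in range(1, 11)]

plt.plot(range(1, 11), inertias, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("within-cluster sum of squares")
plt.show()
```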
https://machinelearningtokyo.com/2017/08/29/332/
Doctoral student and teaching assistant in Management (USL-B)
Thesis project: "Integrating Knowledge Management In Prediction Techniques: Impact Of Online Social Networks On Academic Achievement"
Summary of project
The impact of individual and situational characteristics on the prediction of achievement in school and university has been studied in numerous pieces of research. On the other hand, social networks and online tools are now an integral part of our lives, and it becomes more and more necessary to include these features in the prediction of achievement. Two methodological aspects of the research conducted on such increasingly complex networks have drawn our attention. The first aspect covers knowledge management (e.g., through the use of an ontology), which allows us to build and maintain a body of knowledge to which this information contributes. The second is data analysis (e.g., data mining tools, clustering techniques), which allows us to reduce large amounts of data to concise information. The objective of this research project would be to cluster students present in social networks and online social networks (especially an official online platform dedicated to university classes) given their links with other students. To achieve this goal, we would look for the most appropriate clustering techniques to analyze a given set of data, among segmentation algorithms (e.g., hierarchical agglomerative clustering, K-means) and different distances/similarities between the nodes of a graph (e.g., Euclidean Commute Time Distance, Minimax Path-Based Dissimilarity Measure). In order to increase the accuracy of our predictions, we would use the outcomes of these analyses to validate a knowledge base (i.e., an ontology) of the network, built in order to represent a mental and theoretical model of a student community. In a final stage, these clusters will be used to analyze and predict the achievement of the students composing the network. Their success will also be studied according to their individual characteristics, by means of modeling techniques (e.g., hierarchical modeling, logistic regression, ...). At a later stage, we might be able to refine and apply these techniques to predict other behaviors. A potential outcome might be that, based on these research results, universities could develop policies promoting an optimal use of official online tools.
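Purely as an illustration of the kind of analysis described (not taken from the project itself), the nodes of a small social graph can be clustered from pairwise graph distances; here shortest-path distance stands in for measures such as the Euclidean Commute Time Distance, and NetworkX's karate-club graph stands in for a student network:

```python
# Agglomerative clustering of graph nodes from a precomputed distance matrix.
import networkx as nx
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

G = nx.karate_club_graph()
n = G.number_of_nodes()

# Pairwise shortest-path distances between nodes ("students").
D = np.zeros((n, n))
for i, lengths in nx.all_pairs_shortest_path_length(G):
    for j, d in lengths.items():
        D[i, j] = d

# Hierarchical (average-linkage) clustering on the condensed distance matrix.
Z = linkage(squareform(D, checks=False), method="average")
clusters = fcluster(Z, t=3, criterion="maxclust")
print(clusters)
```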
http://casper-usaintlouis.be/membres/kristel-vignery
This position will be part of the Visa Security Engineering team, where, as a software engineer first, you will be building and delivering cybersecurity products and capabilities utilizing strong software principles and implementing defensive security controls utilizing machine learning. Build, design, engineer, and develop software and services that deliver security functionality and improve security efficiency and capabilities through automation. The candidate needs to possess software engineering skills that allow them to build new capabilities and solutions vs. simply integrating an existing open source platform or operating a vendor solution. Utilize information retrieval, data analytics, and statistical modeling techniques to build new machine learning models and apply them to cybersecurity use cases. This will include the entire lifecycle, including collection of data, feature engineering, model development, training, testing, and deployment into production. Rewrite any existing code using more robust type-safe and memory-safe languages while incorporating faster data piping and parallel processing frameworks. Develop prototypes and algorithms (e.g. searching, sorting, optimization, dynamic programming) while performing data engineering tasks around aggregation and data synthesis from a multitude of structured and unstructured data sources. Assist in shaping the overall direction, life-cycle management, and leadership for Information Security architecture and technology related to Visa. Create requirement and design documents that account for security risks in new or existing systems, with architectures to mitigate them within risk appetite. Present results to a cross section of employees, including senior leaders at Visa. Utilize graduate-level research and analysis skills. The candidate must possess strong software engineering skills as the primary requirement; applied machine learning is a close second. Knowledge of security-related concepts is a plus but not required. This is not a research function but an engineering one. Database systems, data structures, algorithms, operating systems, and their application during software engineering or security-related services.
https://www.smartrecruiters.com/Visa/743999681581663?oga=true
I am a senior at UC San Diego pursuing a Bachelor of Science in mathematics and computer science, focusing on machine learning and quantitative research. I'll be joining BlackRock as an analyst in July 2019. Last summer I worked at BlackRock as a summer analyst on their ETF & Index Investments research team, working on natural language processing. At UC San Diego, I am a TA/tutor for the Computer Science & Engineering department and an undergraduate researcher at the Mathematical Neuroscience lab. Prior to that, I interned at CareFusion BD as a data scientist working on time-series forecasting and machine learning models.
Team: ETF & Index Investments Global Research and Analytics
Developed time-series forecasting and machine learning models to predict drug shortages and price changes. Effectively analyzed and visualized datasets with more than 10 million drug usage and transaction records. Used dimension-reduction techniques (PCA, SVD, LDA), Fourier and log transformations, and resampling techniques (bagging, boosting) to identify correlations between variables and extract underlying patterns in the data. Worked with multiple regression and classification models – linear models, ARIMA, boosted trees (xgboost), random forests and SVMs.
Tutor for Object-Oriented Programming (CSE 11) and Data Structures (CSE 12). Worked with the instructor to design and write programming assignments and their specifications. Held office hours and led review sessions to explain programming concepts and assist students in implementing programming assignments by analyzing and debugging their code. Graded homework and exams, and wrote submission/grading scripts for programming assignments.
Wrote Python scripts to set up a Continuous Integration server to automate package builds. Developed multiple native Linux (Ubuntu, CentOS, Debian) packages using bash and Python for Kolibri – Learning Equality's flagship application. Optimized software setup on all platforms by implementing efficient installation scripts.
Developing neurally derived and neuro-mimetic machine learning algorithms. Constructing complex and dynamic artificial neural networks by incorporating neural features such as propagation decay, geometric information and refractory periods. Writing Python code to generate and train such neural networks, run experiments and analyze results. Collaborating with Microsoft's Special Projects division. Used underwater sound pressure from active and passive sources to train ML models for various applications. Developed deep spatio-temporal (convolutional LSTM) neural networks to predict ship paths.
Support vector machines (SVMs) are an extremely powerful machine learning tool for solving various classification problems. Not only are they less prone to over-fitting due to large margins, but they are also easy to optimize due to their convex nature. In this paper we will review both soft and hard margin formulations of linear SVMs. First, we discuss how to solve soft-margin SVMs via the dual formulation, and justify how the dual problem will in fact give the optimal solution of the primal form. Then, we discuss kernel tricks to solve non-linear classification using convex optimization. Finally, we perform classification on real-world data using both non-linear and linear SVMs using the algorithms devised prior.
Analysis of the negative effects of gentrification in San Diego in the 21st century. Visualized the change in demographics of all neighborhoods in San Diego using heat maps.
Identified the neighborhoods affected the most by gentrification and found patterns among multiple socio-economic factors such as poverty, population, uninsurance rates and property values. Languages/Tools Used: Python (Pandas, NumPy, Matplotlib, Patsy), Jupyter Notebooks.
Data science powered web application to perform sentiment analysis on YouTube comments. Applied machine learning techniques on the model using a training dataset of 1 million tweets. Wrote Python scripts for web scraping and performing sentiment analysis on the comments. Languages/Tools Used: Python, Natural Language Toolkit, Flask.
Android application to ease the process of connecting with people on multiple social media platforms. Integrated a database, added location tracking and developed the app structure. Languages/Tools Used: Java (Android), XML, Google Firebase.
http://www.arkin.xyz/
…the latter being the most popular nowadays given its performance. Most machine translation strategies build an encoder architecture that maps a text sequence in a source language to a vector representation and a decoder architecture that maps the vector representation to the same text sequence but in the target language, reducing the machine translation task to a purely bilingual task. The performance of this bilingual task is conditioned by two main factors: (1) the quality/quantity of available parallel corpora, and (2) the similarity between the source and target language, defined in terms of the number of linguistic patterns that both languages have in common. The more high-quality corpora are available between the two languages, the better the translation quality is expected to be. This is the reason why low-resource languages, which tend to have less corpora available, yield worse translation models. Additionally, the more similar two languages are, the better the translation will be. It is not the same to translate from Spanish to Portuguese, two languages that are closely related as they follow similar linguistic patterns, as it is to translate from English to Chinese, which share little to no similarity in lexical or grammatical patterns.
Several approaches have been proposed in an attempt to address the two aforementioned problems, among which the translation via triangulation framework is the most prominent (cohn2007machine; gollins2001improving). In this framework, a translation is decomposed into multiple sub-translations in order to maximize corpora quality/quantity in each of the sub-translations and thereby improve the performance of the final translation. As an example, instead of translating from Portuguese to Catalan, which might have reduced corpora available, a translation is first done from Portuguese to Spanish and then from Spanish to Catalan, improving the final translation performance given that both language pairs used (Portuguese-Spanish and Spanish-Catalan) have a considerably larger amount of parallel corpora available. This framework, however, has its own drawbacks, as a higher number of sub-translations means a higher computational cost and a more prominent cumulative error (introduced at each sub-translation level).
Multilingual machine translation, derived from multi-task training techniques, is a more recent framework that intends to address the corpora availability problem. In this case the task of machine translation is no longer considered a bilingual task, but a multilingual task where multiple source and target languages can be simultaneously considered (luong2015effective). The objective of multilingual machine translation is to take advantage of knowledge in language pairs with large corpora availability and transfer it to lower-resourced pairs by training them as part of the same model. For example, a single model can be trained to translate from Spanish to English and Catalan to English, with the expectation that the performance of Catalan-English translations will improve given that it has been trained together with a language with richer resources like Spanish. Examples of multilingual machine translation models include those based on strategies that use a single encoder for all languages but multiple decoders (dong2015multi), or strategies that treat all languages as part of a single unified encoder-decoder structure (ha2016toward).
Even if existing multilingual machine translation strategies achieve language transfer to a degree, this transfer only takes place when using specific language sets. Furthermore, these strategies ignore possible negative side-effects of including languages that are considerably different in a single model, i.e., training languages like Catalan and Spanish together might be beneficial for performance; however, including a distant language like Chinese might decrease the overall performance of the same model. As a result, a state-of-the-art model such as the one described by ha2016toward, which includes all languages as part of a unified encoder-decoder structure, would be sub-optimal when including language groups with strong differences. obj2 observed a similar behavior in the area of cross-lingual word embeddings and concluded that putting all languages into a single space could act to the detriment of the general model if it is not done in an organized fashion.
Inspired by the idea of building a single model that can translate from multiple to multiple languages (ha2016toward) and the need for organization of languages when building multilingual strategies (obj2), we propose a Hierarchical Framework for Neural Machine Translation (HNMT). HNMT is a multilingual machine translation encoder-decoder framework that explicitly considers the inherent hierarchical structure of languages. For doing so, HNMT exploits a typological language family tree, which is a hierarchical representation of languages organized by their linguistic similarity, in terms of grammar, vocabulary, and syntax, to name a few. In other words, HNMT follows this natural connection among languages to encode and decode word sequences, in our case sentences. The hierarchical nature of languages allows HNMT to only combine knowledge across languages of similar nature, while avoiding any negative knowledge transfer across distant languages. The main contributions of this work include:
- A novel hierarchical encoder-decoder framework that can be applied to any of the popular state-of-the-art machine translation strategies to improve translation performance for low-resource languages.
- A comprehensive evaluation over 41 languages and 758 tasks to examine the extent to which language transfer is achieved.
- An analysis of the implications emerging from using the proposed framework for machine translation of low-resource languages.
2 Related Work
Machine translation techniques have been built using a variety of strategies, including rule-based systems (forcada2011apertium), statistical machine translation (koehn2007moses), and neural machine translation strategies (sutskever2014sequence). In this work, we dedicate our research efforts to neural machine translation strategies (NMT). More specifically, we focus on the enhancement of encoder-decoder strategies from a multilingual perspective. For this reason, we describe below related literature in the area of NMT and multilingual approaches for NMT.
Encoder-decoder strategies for NMT. Encoder-decoder strategies were first proposed by sutskever2014sequence as a solution for the inability of traditional neural networks to learn sequence-to-sequence mappings. This strategy was soon found lacking when translating long sentences, given its need to compress all the sentence information into a low-dimensional vector (cho2014learning).
Several researchers tried to address this problem by allowing the decoder to have access to a larger amount of information, such as the previously generated word and the encoded sentence at any time step (cho2014learning), or to the whole set of hidden states produced by the encoder via an attention mechanism (bahdanau2014neural). In order to obtain further training speed and translation quality, approaches presented later on tried to remove the recurrent layers of the models, which are known to hinder parallelization. With this purpose in mind, gehring2017convolutional proposed a model based on Convolutional Neural Networks, while vaswani2017attention focused on just using layers purely based on attention.
Multilingual NMT. Multilingual NMT strategies can be categorized by the degree to which they can share part of the architecture across different languages. dong2015multi use a single encoder regardless of the language and rely on separate decoders for translation. luong2015effective introduce a strategy that uses one single encoder and decoder per language among all translation pairs. firat2016multi maintain the different encoders and decoders but share the attention mechanism across all translation pairs. ha2016toward propose to use one universal encoder and decoder that can handle any source and target language. This is achieved by providing the model with information of the language as an embedded parameter. Even if existing multilingual NMT models can obtain varied ranges of transfer learning across languages, none of the strategies we discussed takes advantage of the inherent hierarchical structure of languages, which can be beneficial to generate more reliable language transfer among typologically similar languages while avoiding hindering performance across distant languages.
3 Method
In this section, we describe the proposed framework for hierarchical multilingual machine translation, i.e., HNMT. We first present a general sequence-to-sequence architecture used for neural machine translation, which we illustrate in Figure 1. Then we explain how this general structure can be extended for multilingual machine translation. Lastly, we describe our proposed hierarchical framework illustrated in Figure 2.
3.1 Neural Machine Translation
State-of-the-art neural machine translation takes advantage of sequence-to-sequence models for translating a sequence (usually a sentence) $x$ in the source language to a sequence $y$ in the target language. For doing so, the model is generally separated into an encoder module ($E$) capable of encoding $x$ into a vector representation $z$ of size $d$ and a decoder ($D$) that aims at generating $y$ from $z$. Both the encoder and the decoder modules contain an equal number $N$ of repeated layers that are used in a sequential way, as illustrated in Figure 1. Starting from an input representation $h_0$, each encoding layer $e_i$, $i \in \{1, \dots, N\}$, is responsible for taking the representation $h_{i-1}$ and generating $h_i$, until it produces $h_N$. Once $h_N$ is generated, each decoder layer $d_i$, $i \in \{1, \dots, N\}$, will take $g_{i-1}$ and generate $g_i$, until $g_N$ is generated. Following our naming convention: $h_0 = x$, $h_N = g_0 = z$, and $g_N = y$. If the model consists of 3 layers ($N = 3$), the process of translating $x$ to $y$ requires $2N = 6$ steps:
$$x = h_0 \xrightarrow{e_1} h_1 \xrightarrow{e_2} h_2 \xrightarrow{e_3} h_3 = z = g_0 \xrightarrow{d_1} g_1 \xrightarrow{d_2} g_2 \xrightarrow{d_3} g_3 = y \quad (1)$$
Architectures for building each of the layers $e_i$ and $d_i$ are manifold in the literature, with recurrent neural networks (lstm), convolutional neural networks (gehring2017convolutional), and transformers (vaswani2017attention) being the most widely accepted approaches. As previously stated, our proposed strategy is designed so that it can be applied to all of these architectures.
For simplicity, however, we only showcase and discuss the practical application of HNMT on a single architecture (see Section 4). We use a Long Short-Term Memory (LSTM) recurrent neural network (lstm), given that it is an architecture with well-studied benefits and limitations. This enables us to isolate any phenomena introduced by the architecture itself, so that our analysis focuses on the advantages and disadvantages of our framework on its own.

3.2 Multilingual Machine Translation

In traditional (bilingual) neural machine translation, the encoder and decoder modules are language specific. This means that an encoder trained for English can neither be substituted by an encoder for Spanish nor used to encode input that is not in English. Additionally, there is no guarantee that the intermediate representation v is equivalent across translation tasks, i.e., the representation the English encoder generates after being trained for English-Spanish translation is different from the representation it generates when trained for English-Portuguese. Even inverse translation tasks, e.g., English-Spanish and Spanish-English, are considered separate tasks, as there is no knowledge sharing across the two translations, resulting in different performance for each of the two. This poses a strong limitation on any language transfer in the translation task, as every encoder/decoder is not only language specific but also task specific.

The goal of multilingual machine translation is precisely to address this limitation by generating models that can transfer language knowledge across tasks. This is achieved with frameworks from the multi-task learning area, such as jointly training models for several tasks while sharing part of the model weights (firat2016multi; ha2016toward). For example, to generate a model that can translate from both Spanish and Portuguese to English, the model would be trained using sentence pairs from both tasks, with separate Spanish and Portuguese encoders but a single English decoder. This training strategy enables training the English decoder using data from both tasks (Spanish-English and Portuguese-English), benefiting from a larger aligned corpus and therefore achieving better decoding and translation performance.

As described in Section 2, different strategies have been proposed in the literature for multilingual machine translation. However, to the best of our knowledge, all of them treat encoders and decoders as atomic units that cannot be separated any further, differing from each other only in how many full encoders or decoders the model uses for multilingual translation, i.e., one-to-many, many-to-one, or one-to-one (firat2016multi; ha2016toward). One limitation of these models is that they consider all languages to be of the same nature, meaning that all languages are combined into a single encoder/decoder without any organization, ignoring the fact that some languages might benefit each other while others would hinder the final performance of the translation task. This has been shown to be the case in related areas such as cross-lingual word embedding generation (obj2). Instead, HNMT is capable of incorporating further divisions inside the encoder/decoder, allowing finer-grained control over which languages share weights, as we describe in the following section.
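As an illustration of the weight sharing just described, here is a hedged sketch of two source-specific encoders feeding one shared English decoder, so that gradients from both Spanish-English and Portuguese-English updates flow into the same decoder. Module names are invented, and random tensors stand in for real sentence representations.

```python
# Sketch of multi-task weight sharing: separate Spanish and Portuguese encoders,
# one shared English decoder updated by both translation tasks.
import torch
import torch.nn as nn

hid = 64
enc_es = nn.GRU(hid, hid, batch_first=True)   # Spanish-specific encoder
enc_pt = nn.GRU(hid, hid, batch_first=True)   # Portuguese-specific encoder
dec_en = nn.GRU(hid, hid, batch_first=True)   # shared English decoder
proj_en = nn.Linear(hid, 1000)                # shared English output vocabulary

def translate_states(encoder, src_states, tgt_len=5):
    """Encode source-side states and run the shared decoder for tgt_len steps."""
    _, h = encoder(src_states)                        # final encoder state
    dec_in = h.transpose(0, 1).repeat(1, tgt_len, 1)  # feed v at every step (simplified)
    out, _ = dec_en(dec_in)
    return proj_en(out)

es_batch = torch.randn(2, 7, hid)   # stand-in Spanish sentence representations
pt_batch = torch.randn(2, 9, hid)   # stand-in Portuguese sentence representations
loss = translate_states(enc_es, es_batch).sum() + translate_states(enc_pt, pt_batch).sum()
loss.backward()                     # one backward pass reaches dec_en from both tasks
```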
3.3 Hierarchical Multilingual Machine Translation

For HNMT, we define the multilingual machine translation task as a one-to-one task, meaning that HNMT contains only a single encoder and a single decoder. However, each layer of the encoder and the decoder is either shared or not shared across languages depending on their similarity. This enables similar languages to share a larger number of layers, fostering further language transfer among them, while more distant languages share fewer layers, which avoids hindering the model's overall performance by forcefully combining languages that are too different. To delineate this inter-connectivity across languages, HNMT takes into account the hierarchical nature of languages by taking advantage of a typological language tree. As illustrated in Figure 2, each family in the language tree corresponds to a layer of the encoder and the decoder. Each language always has a unique, non-shared layer, which corresponds to the very first layer of the encoder and the very last layer of the decoder. This reflects the fact that we consider each language to be different from every other language, even if only to a small degree; this layer enables the model to capture language-specific characteristics in both the encoder and the decoder. Additionally, HNMT incorporates one layer that is shared across all languages. This layer is located directly before and after the vector representation of the sentence, and its purpose is to unify how the model generates this vector regardless of the language used. The remaining intermediate layers are determined directly by the language tree.

For illustration purposes, consider the translation task from Spanish to English versus the same task from Spanish to Finnish. Based on the structure described in Figure 2, Spanish-to-English translation requires the following 8 steps:

(2)

where the encoder and decoder layers are indexed by the corresponding language or family. Spanish-to-Finnish translation, on the other hand, requires the following steps:

(3)

It is important to note that, as reflected in Figure 2, the language tree is not a balanced tree: the number of families between a language and the root node differs across languages. In the example, the number of families from Spanish to the root is two, while the number of families from Finnish to the root is just one. This characteristic of the tree directly conflicts with the requirement of most existing sequence-to-sequence models that the encoder and the decoder contain the same number of layers. Additionally, some languages might belong to more families than the model has layers, i.e., if the number of layers is chosen too small, the families of English, German, Spanish, and Portuguese would not fit into the model. In order to address both of these concerns, we conduct a two-step preprocessing of the tree. First, we limit the tree to the chosen number of layers, pruning any family that does not meet this constraint. Thereafter, we duplicate any leaf node that is not at the required depth of the tree, e.g., in the sample tree in Figure 2, the Uralic family is duplicated to adhere to this constraint.

3.4 Training HNMT for Sentence Translation

HNMT takes advantage of stochastic gradient descent for learning the weights of its model. Different from a traditional machine translation strategy, HNMT utilizes multiple datasets with different languages for training. Training is conducted in a round-robin fashion with respect to the datasets, i.e., one epoch of each dataset is trained in a sequential manner.
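The round-robin schedule just described can be sketched roughly as follows; the dataset contents and the train_one_epoch placeholder are illustrative stand-ins, not the authors' implementation.

```python
# Sketch of round-robin training: one epoch of each language-pair dataset per
# round, in sequence. A real implementation would run batched gradient updates.
datasets = {
    "es-en": [("hola mundo", "hello world")],
    "pt-en": [("ola mundo", "hello world")],
    "fi-en": [("hei maailma", "hello world")],
}

def train_one_epoch(pair, sentence_pairs):
    # placeholder: batched cross-entropy updates would happen here
    print(f"epoch on {pair}: {len(sentence_pairs)} sentence pairs")

def train_round_robin(datasets, rounds=3):
    for _ in range(rounds):
        for pair, data in datasets.items():   # one epoch per dataset, sequentially
            train_one_epoch(pair, data)

train_round_robin(datasets)
```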
Each dataset epoch is divided into batches of sentence pairs, for which the loss is computed, backpropagated, and the parameters updated. We use cross-entropy as the loss function and Adaptive Moment Estimation (kingma2014adam) as the optimizer with a fixed learning rate. Training continues until no improvement in the average loss across all datasets is found on the training set over the last 10 epochs. In order to avoid overfitting, the model selected for testing is the one that achieves the best performance on a separate validation set. Refer to Section 4.2 for further details on how we split the datasets and tune hyper-parameters.

4 Evaluation Framework

In this section, we describe the evaluation framework used for examining the performance of HNMT and showcasing its advantages with respect to existing baselines.

4.1 Data

We use the GlobalVoices parallel corpora (globalvoices) for training and evaluation purposes. This dataset comprises bilingual corpora for most combinations of 41 languages, totaling 758 different tasks, i.e., pairs of languages for translation. Each task contains a varying number of parallel sentence pairs, ranging from fewer than 10k sentences (in the case of Catalan-English) to more than half a million (in the case of Spanish-English). The strong variation in the corpora available for each task mimics a real-world scenario in which a few languages are very resource-rich while many barely have any resources associated with them, making this dataset ideal for our experiments.

In order to input words to a neural machine translation model, they first need to be converted into numerical vector representations. For this, we take advantage of the cross-lingual word embeddings generated by obj2, which are tailored to low-resource scenarios, a case we consider of specific interest in our experiments. Finally, we use the language tree described in ethnologue in our experiments. This tree can sometimes be overly detailed, containing too many names describing nearly the same family of languages. For this reason, we prune the original tree to remove family names that can be treated as redundant for translation purposes. For example, having both Central Iberian and Castilian as families for the Spanish language is redundant. For pruning, we define the following criterion: any family that contains exactly the same number of languages as its parent is removed.

4.2 Validation and Hyper-parameter Tuning

Each of the 758 task-specific datasets considered in this study is randomly separated into 3 splits using 70%, 10%, and 20% of the sentence pairs for training, validation, and testing, respectively. The training set is used for learning the weights of the model. The validation portion is used for selecting the best model among the ones generated during training. Finally, the testing set is only used for measuring the performance of the final model. Disjoint from these 3 sets, we hold out a development set of 20k Spanish-English sentence pairs. This development set is only used for verifying the correctness of the implementation and tuning hyper-parameters. No sentence pair in this held-out set is ever included in any of the training, validation, or testing sets. Hyper-parameters were manually selected, meaning that no exhaustive/automatic hyper-parameter tuning strategy was applied; the same values, with LSTM as the layer type, are used across all experiments.
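As a concrete reading of the pruning criterion stated in Section 4.1 above, here is a small sketch on a toy tree; the tree encoding, node names, and function names are our own illustrative assumptions rather than the paper's code, and a full implementation might iterate this single pass to a fixed point.

```python
# Toy sketch of the pruning rule: drop any family node that covers exactly as
# many languages as its parent (e.g. Castilian and Central Iberian for Spanish).
def count_languages(node):
    if not node["children"]:
        return 1
    return sum(count_languages(c) for c in node["children"])

def prune_redundant(node):
    kept = []
    for child in node["children"]:
        prune_redundant(child)
        if child["children"] and count_languages(child) == count_languages(node):
            kept.extend(child["children"])   # splice grandchildren up, dropping the child
        else:
            kept.append(child)
    node["children"] = kept
    return node

toy = {"name": "Ibero-Romance", "children": [
    {"name": "Central Iberian", "children": [
        {"name": "Castilian", "children": [
            {"name": "Spanish", "children": []}]}]}]}
print(prune_redundant(toy))  # both intermediate families collapse, leaving Spanish
```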
Admittedly, the number of weights selected for our strategy is comparably smaller than what most state-of-the-art strategies currently use (johnson2017google): the model dimensionalities we use for HNMT are smaller than those customary for current strategies. This was a compromise we had to make in order to balance the large number of tasks we consider in the study, i.e., 758, against the fewer than a dozen tasks most current studies consider (artetxe2017unsupervised; johnson2017google).

4.3 Baselines

To contextualize the performance of HNMT, we compare it to four baselines: a traditional bilingual baseline (Many-to-many) and three multilingual baselines (One-to-many, Many-to-one, One-to-one).

- Many-to-many. This model resembles the traditional bilingual machine translation strategy, where each task has its own encoder and decoder.
- One-to-many. In this multilingual machine translation model, one encoder is used regardless of the source language, along with a different decoder for each target language.
- Many-to-one. Opposite to the previous model, this one uses a single decoder for all languages but multiple encoders, one per language.
- One-to-one. This is a universal machine translation model that utilizes just one encoder and one decoder for all the languages considered.

Models that have a single decoder require explicit information about the output language so that the model knows in which language it needs to generate the output sentence. For these models, we prepend a special token representing the language to be generated to the input of the model, similar to what is done by johnson2017google. It is also important to note that, unlike the aforementioned models, HNMT does not require this token to operate, given that the last layer of the decoder is specific to the target language.

4.4 Metric

For measuring the performance of each task, we take advantage of a metric traditional to the machine translation area: Bilingual Evaluation Understudy (BLEU) (papineni2002bleu).

5 Results and Discussion

In order to fully analyze the performance of HNMT, it is important to first understand what traditional (bilingual) machine translation strategies can achieve. Therefore, we start by analyzing the performance of the many-to-many model from different perspectives. To do so, we train and test this model on each of the tasks defined by a language pair in the GlobalVoices dataset. In Figure 3, we illustrate the performance of each task organized by the number of sentences available for the task. Emerging from the figure is a pattern that is inherent to the machine translation area: the more sentence pairs available, the better the performance of the model. This leads to an uneven scenario, one in which strategies perform better, i.e., are more effective, for resource-rich languages than for low-resource ones. Another issue affecting machine translation relates to the direct connection between performance and language similarity. While this has been pointed out by several researchers (cohn2007machine; gollins2001improving), to the best of our knowledge it has never been thoroughly studied. To demonstrate how this dependency behaves in our experiments, we define the similarity between two languages as the number of parent family nodes they share.
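To make this similarity measure concrete before the worked example that follows, here is a small sketch; only the English, German, and Finnish ancestor lists are confirmed by the text, the Spanish entry and the function name are our own illustrative assumptions.

```python
# Sketch of the similarity measure: the number of parent family nodes shared.
ancestors = {
    "English": ["Germanic", "Indo-European"],
    "German":  ["Germanic", "Indo-European"],
    "Spanish": ["Romance", "Indo-European"],   # assumed for illustration
    "Finnish": ["Uralic"],
}

def similarity(lang_a, lang_b):
    return len(set(ancestors[lang_a]) & set(ancestors[lang_b]))

print(similarity("English", "German"))   # 2: Germanic and Indo-European
print(similarity("English", "Finnish"))  # 0: no shared parent family
print(similarity("English", "Spanish"))  # 1 under the assumed ancestor map
```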
As an example of this similarity measure, referring back to Figure 2, the similarity between English and German is 2, as they both have the Germanic and Indo-European families as parents, while the similarity between Finnish and English is 0, as they do not have any common parent family. We depict in Figure 4 the performance of each of the tasks grouped by the similarity between the source and the target languages. From the figure, it is evident that the results follow a pattern in which the more similar any two languages are, the better the quality of machine translation achieved for them.

These two patterns demonstrate (1) a real need in the area of machine translation for transfer learning strategies that can improve the performance of machine translation for low-resource languages, and (2) the possibility of achieving valuable language transfer by using a proper organization of the languages, which makes it possible to take advantage of synergies among similar languages. These two patterns further validate the premises that led us to design HNMT.

In Table 1 we present the results of 5 different machine translation models for each of the tasks in the GlobalVoices dataset, grouped by source language. We include the traditional machine translation model (many-to-many), 3 frameworks for building multilingual machine translation that are representative of the current state-of-the-art, and our HNMT framework. As shown in the table, our proposed strategy yields an average gain of 1.07 BLEU points over the traditional many-to-many strategy. This difference is statistically significant under a paired t-test. The largest improvements are found for languages such as Catalan, Portuguese, Italian, and Spanish, which we find to be not a coincidence but the result of HNMT correctly integrating languages that share similarities. The lowest improvement is obtained for Oriya, an Indo-Aryan language that shares little similarity with any of the remaining languages in the dataset. Among the multilingual machine translation models (one-to-one, one-to-many, many-to-one), only the one-to-many model achieves an improvement over the traditional bilingual baseline. This is not a surprising result: even if multilingual machine translation models have been shown to improve over traditional machine translation for specific language combinations, they are known to under-perform when simultaneously dealing with either too many or too dissimilar languages (johnson2017google).

Table 1 captures average BLEU scores over all tasks that use the specified language as source. While averages allow us to assess and compare performance across frameworks, they do not shine a light on translation pairs that deviate greatly from the average. To showcase the varied BLEU scores obtained by HNMT across the 758 translation tasks, we include a histogram in Figure 5. It can be appreciated in the figure that for most of the translation tasks performance ranges between 0 and 10 BLEU points. However, there are some cases for which BLEU is as high as 30-40. Not surprisingly, these cases align with popular, resource-rich pairs like Spanish-English and Portuguese-English. At the opposite extreme, we see BLEU as low as 0.27. Once again, this is anticipated, as these low scores are the result of translation to/from low-resource (and often less recognized) languages, as in Catalan to Oriya.
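The BLEU figures reported throughout this section can be computed with any standard implementation; as a hedged illustration, using the sacrebleu package (our choice of tooling, the paper cites papineni2002bleu but does not name a library), corpus-level BLEU looks roughly like this:

```python
# Illustrative corpus-level BLEU computation with sacrebleu (assumed tooling).
import sacrebleu

hypotheses = ["the cat is on the mat", "there is a cat on the mat"]
references = [["the cat is on the mat", "there is a cat on the mat"]]  # one reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))  # corpus BLEU on the 0-100 scale used in Table 1
```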
In order to gather further insights into the translation capabilities of HNMT, and to better visualize the cases in which HNMT achieves performance improvements with respect to the baselines, we conduct further analysis through the lenses of language similarity and corpus availability. We first explore model performance when corpora of different sizes are used for training. To do so, we group the language-pair tasks into seven groups based on the number of parallel sentences available. As depicted in Figure 6, corpus availability is a determining factor for translation quality. The pattern we observed in Figure 3 is once again visible in Figure 6: the more sentences available for a task, the better its performance. However, differences with respect to the baseline are what make a model stand out in this case. Excluding the cases with large amounts of corpora, where improvement is hardly possible from a language transfer perspective, we see that HNMT is the model that achieves the largest improvement with respect to the bilingual baseline, followed by the one-to-many model. This behavior indicates that HNMT is indeed capable of improving the performance of machine translation in cases where resources are not abundant.

We are also interested in exploring the effect language similarity has on translation, which is why we examine model performance for language pairs with different degrees of similarity between them. Results from this experiment are summarized in Figure 7. In general, we observe patterns similar to the ones previously described: BLEU scores are higher for language pairs that are more similar. This pattern occurs across models; however, it is considerably more pronounced in the case of HNMT, which shows a larger improvement with respect to the bilingual baseline the more similar the languages are. We also notice from Figure 7 that none of the other multilingual machine translation models takes advantage of this behavior, maintaining a similar difference with respect to the baseline regardless of the degree of similarity between the languages in the pairs considered in the analysis. These results serve as an indication that the hierarchical organization used in HNMT is indeed useful for explicitly taking advantage of similarities across languages, validating the premises behind the design of HNMT.

6 Conclusion and Future Work

In this paper, we presented HNMT, a hierarchical framework for machine translation that can be applied to any multilingual neural machine translation strategy to achieve a higher degree of transfer learning across languages. We conducted several experiments using 758 language pairs, including languages with varied resource availability and similarity. Our empirical analysis reveals that the highest improvements take place when the languages are typologically related and aligned corpora are not abundant, with gains of about 5 BLEU points in specific cases. These results validate our premise that machine translation for low-resource languages can be enhanced by means of language transfer if an appropriate organization of languages is used, such as the one we utilize as part of HNMT. As a natural part of its encoding-decoding process for translation, HNMT generates a language-agnostic vector representation of sentences.
While we did not evaluate the quality of this by-product of our work, given that it was out of scope, exploratory examinations lead us to believe that these language-agnostic representations could be leveraged to support multilingual applications in related text processing areas. We are aware of some limitations of this work. First, even if the strategy is shown to improve low-resource scenarios where the source and target languages are typologically related, this effect is not as prominent when the languages are very different from each other. Consequently, the applicability of HNMT to isolated languages such as Basque is limited. Second, given the large number of tasks and languages considered, the machine translation models we used for experimentation are small compared to current state-of-the-art systems, both in the number of layers and in their dimensionality. In the future, we plan to leverage other types of signals, such as sub-word embeddings, to enable further language transfer. Additionally, we will extend our empirical analysis to explore the performance effect of using larger and more varied machine translation models, such as Convolutional Neural Networks or Transformers.
https://deepai.org/publication/a-framework-for-hierarchical-multilingual-machine-translation
Topics: Introduction, Advanced search techniques in AI, Knowledge based system design, Advanced plan generating systems, Bayesian network and probabilistic reasoning, Learning in neural belief networks, Practical natural language processing, Computer vision, Introduction to Robotics. - CSE 5402 - Fuzzy Systems Topics: Basic Concepts of Fuzzy set theory; Fuzzy numbers; Aggregation operations of Fuzzy sets; The theory of approximate reasoning; Introduction to Fuzzy logic control; Fuzzy System Models and Developments; Fuzzy logic controllers; Defuzzification methods; Linguistic descriptions and their analytical forms; The flexible structure of fuzzy systems; Practical Aspects of Neural Networks. - CSE 5403 - Machine Learning Topics: Definition of learning systems, Goals and applications of machine learning, Inductive Classification, Decision Tree Learning, Ensemble Learning, Experimental Evaluation of Learning Algorithms, Computational Learning Theory, Rule Learning: Propositional and First-Order, Artificial Neural Networks, Support Vector Machines, Bayesian Learning, Instance-Based Learning, Text Classification, Clustering and Unsupervised Learning, Language Learning - CSE 5404 - Advanced Pattern Recognition Topics: Introduction to formal languages, String languages for pattern description, Higher dimensional pattern grammars, Syntax analysis as a recognition procedure, Stochastic languages, Error-correcting parsing for string languages, Error-correcting tree automata, Cluster analysis for syntactic patterns, Grammatical inference for syntactic pattern recognition, Application shape analysis of wave forms and contours, Syntactic approach to texture analysis. - CSE 5405 - Speech Recognition Topics: Introduction, Speech signal: production, perception and characterization, Signal processing and analysis; Pattern comparison techniques: distortion measures, spectral-distortion measures, time alignment and normalization; Recognition system design and implementation: source-coding, template training, performance analysis; Connected word models: two level DP, level building algorithm, one-pass algorithm; Continuous speech recognition: sub word units, statistical modeling, context-dependent units; Task oriented models. - CSE 5406 - Machine Translation Topics: Theoretical problems: Definition, Context dependency, interpretation and translation; Engineering problems of machine translation: Maintainability, tunability, modularity, and efficiency; Linguistics-based MT: Compositionality and isomorphism, Declarative frameworks, Constraint-based formalisms; Knowledge-based MT: Translation and understanding, Design of interlinguas, The conceptual lexicon; Statistics-based MT: E-M algorithms, Alignment of bilingual corpora, Translation templates; Example-based MT: Similarity measures, Levels of comparison; Treatment of context dependency: Knowledge-based transfer, Sublanguage-based MT, Translation units. - CSE 5407 - Knowledge Representation and Reasoning Topics: Knowledge representation, uses in computers; logic-based languages for KR; automated reasoning techniques and systems; applications of KR to ontologies and semantic web. - CSE 5408 - Advanced Data Mining Topics: Introduction; Data warehousing and OLAP technology for data mining; Data preprocessing; Data mining primitives, languages and systems; Descriptive data mining: characterization and comparison; Association analysis; Classification and prediction; Cluster analysis; Mining complex types of data; Applications and trends in data mining. 
- CSE 5451 - Evolutionary Algorithms Topics: Introduction to evolutionary algorithms; Selection: rank-based, roulette wheel, stochastic, local, truncation and tournament; Recombination: discrete, real valued and binary valued; Mutation: real valued and binary valued; Reinsertion: global and local; Population models: global (worker/farmer), local (diffusion), and regional (migration); Co-evolution: cooperative and competitive; Learnable evolution model; Fast evolutionary programming; Application of evolutionary algorithms to: system design, telecommunication, robotics and other industrial areas.
- CSE 5452 - Neural Networks Topics: Fundamentals of Neural Networks; Back propagation and related training algorithms; Hebbian learning; Kohonen-Grossberg learning; The BAM and the Hopfield Memory; Simulated Annealing; Different types of Neural Networks: Counterpropagation, Probabilistic, Radial Basis Function, Generalized Regression, etc.; Adaptive Resonance Theory; Dynamic Systems and Neural Control; The Boltzmann Machine; Self-organizing Maps; Spatiotemporal Pattern Classification; The Neocognitron; Practical Aspects of Neural Networks.
http://new.cseku.ac.bd/graduate/term-wise/course/13
Natural language processing is a transformative technology and has generated a lot of buzz in recent years for its large-scale impact. But most of the research and the models built focus on mechanisms that work for the English language. Even when models are built for other languages, they have mostly been for popular languages. There are around 7,000 languages spoken in the world, with the Asian continent having the highest share in terms of the number of languages spoken. If we don't respond to the immense array of languages that exist, we are leaving much of the world out of the benefits of technological advancements. There is a need to develop speech recognition models for other languages in order to make the technology more inclusive.

Difficult to build

Although researchers and tech companies have realized that introducing NLP in other languages would be very useful from a business and societal perspective, it is quite difficult to build models in other languages, because the availability of correct and sufficient datasets is a major problem. We need a large dataset to train and test the algorithm while building an NLP model. Although large populations may speak a particular language, obtaining such datasets can still be difficult. If only a small dataset is available, different approaches are needed to build the model. Language data also needs to be cleaned. Many languages have symbols and other characters that not all computer systems may recognize without appropriate modification. Adapting the data to such systems can be time consuming and costly. If a company develops a model for other languages, it should open it up, because this is still an emerging field and others can learn from and be inspired by it.

Progress

We have made progress in recent years in building models for a wide range of languages.

- In 2020, Meta introduced M2M-100, a multilingual machine translation (MMT) model that translates between any pair of 100 languages without relying on English data. Meta stated that M2M-100 is trained on a total of 2,200 language directions. The goal of building such a model is to improve the quality of translations around the world, especially for people who speak low-resource languages, Meta said.
- The David R. Cheriton School of Computer Science at the University of Waterloo introduced AfriBERTa, which uses deep learning techniques to achieve cutting-edge results for low-resource languages. The university said AfriBERTa works specifically with 11 African languages, such as Amharic, Hausa and Swahili, spoken cumulatively by more than 400 million people. The mechanism achieves output comparable to the best existing models despite learning from a single gigabyte of text, according to the university.
- In September, IIT Bombay launched the Udaan project, which helps translate textbooks and other study materials from engineering and other streams from English to Hindi and other Indian languages. It is a donation-based, AI-powered translation ecosystem.

How will this help?

Natural language processing finds use in a wide range of areas, such as summarization, question answering, sentence similarity, translation, token classification and many more. If it can penetrate less popular languages, it will be immensely beneficial for:

- Understanding and analyzing emotions on social media platforms and in e-commerce website comments, where a large portion of people write in their native language and not in English. This can be very beneficial for businesses for feedback and improvement.
- Better customer service and engagement, as customers mostly like to talk to chatbots or virtual assistants in their native language.
- Extending to various categories, which will improve the results and accuracy of the technology.
- Making the right content available to users in their native language, based on their choices and past habits.
- The penetration of technology into less popular languages, which will benefit society.

We need to make sure that the benefits of technology are available to everyone for society to move forward. A good start for this will be the penetration of new-era technologies beyond borders.
https://libhitech.com/overcome-the-language-barrier-in-nlp/
@ Room 3BC Efficient algorithms are indispensable in large scale ML applications. In recent years, the ML community has not just been a large consumer of what the optimization literature had to offer, but it has also been acting as a driving force in the development of new algorithmic tools. The challenges of massive data and efficient implementations have led to many cutting-edge advances in optimization. The goal of this workshop is to bring practitioners and theoreticians together and to stimulate the exchange between experts from industry and academia. For practitioners, the workshop should give an idea of exciting new developments which they can *use* in their work. For theorists, it should provide a forum to frame the practicality of assumptions and recent work, as well as potentially interesting open questions. Format: 4-5 invited talks, as well as a panel discussion with the invited speakers - 13:30-14:05 Ce Zhang - 14:05-14:40 Miltos Allamanis - 14:45-15:20 Olivier Teytaud - 15:20-15:55 Celestine Dünner - 16:00-16:30 Panel Discussion Speaker: Ce Zhang (ETH Zurich) “Can machine learning help to improve this application?” After this question pops up in the mind of a user -- a biologist, an astrophysicist, or a social scientist -- how long would it take for her to get an answer? Our research studies how to answer this question as rapidly as possible, by accelerating the whole machine learning process. Making a deep learning system to train faster is indispensable for this purpose, but there is far more to it than that. Our research focuses on: (1) applications, (2) systems, and (3) abstractions. For applications, I will talk about machine learning applications that we enabled by supporting a range of users, none of whom had backgrounds in computer science. For systems, we focus on understanding the system trade-off of distributed training and inference for a diverse set of machine learning models, and how to co-design machine learning algorithms and modern hardware so as to unleash the full potential of both. I will talk in detail about our recent results and their application to FPGA-based acceleration. For abstractions, I will introduce ease.ml, a high-level declarative system for machine learning, which enables the coding of many of the applications we built with just four lines of code. Speaker: Miltos Allamanis (Microsoft Research) - Presentation Like writing and speaking, software development is an act of human communication. Humans need to understand, maintain and extend code. To achieve this efficiently, developers write code using implicit and explicit syntactic and semantic conventions that aim to ease human communication. The existence of these conventions has raised the exciting opportunity of creating machine learning models that learn from existing code and are embedded within software engineering tools. This nascent area of "big code" or "code naturalness" lies in the intersection of the software engineering, programming languages and machine learning communities. The core challenge rests on finding methods that learn from highly structured and discrete objects with formal constraints and semantics. In this talk, I will give a brief overview of the research area, highlight a few interesting findings and discuss some of the emerging challenges for machine learning. Speaker: Olivier Teytaud (Google Brain) We introduce an exact distributed algorithm to train Random Forest models as well as other decision forest models without relying on approximating best split search. 
We introduce the proposed algorithm and compare it, for various complexity measures (time, RAM, disk, and network complexity), to related approaches. We report its running performance on artificial and real-world datasets of up to 17 billion examples. This figure is several orders of magnitude larger than the datasets tackled in the existing literature. Finally, we show empirically that Random Forest benefits from being trained on more data, even in the case of already gigantic datasets. We also compare Sprint and Sliq, two classical algorithms for learning decision trees: Sprint is particularly suitable for the distributed setting, but we show that Sliq becomes better in the balanced case and/or when working with randomly drawn subsets of features, and we derive a rule for automatically switching between both methods. Given a dataset with 17.3B examples and 71 features, our implementation trains a tree in 22h. Joint work with Mathieu Guillame-Bert. Speaker: Celestine Dünner (IBM Research) - Presentation This talk focuses on techniques to accelerate the distributed training of large-scale machine learning models in heterogeneous compute environments. Such techniques are particularly important for applications where training time is a severe bottleneck. They can enable more agile development and thus allow practitioners to better explore the parameter and model space, which in turn yields higher-quality predictions. In this talk I will give insight into recent advances in distributed optimization and primal-dual optimization methods. I will focus on how such methods can be combined with novel techniques to accelerate machine learning algorithms on heterogeneous compute resources. Putting it all together, I will demonstrate the training of a linear classifier on the Criteo click-prediction dataset, consisting of 1 billion training examples, in a few seconds.
https://www.appliedmldays.org/conf2018/workshop_sessions/advances-in-ml-theory-meets-practice
Research has shown promise in the design of large scale common sense probabilistic models to infer human state from environmental sensor data. These models have made use of mined and preexisting common sense data and traditional probabilistic machine learning techniques to improve recognition of the state of everyday human life. In this paper, we demonstrate effective techniques for structure learning on graphical models designed for this domain, improving the SRCS system of (Pentney et al. 2006) by learning additional dependencies between variables. Because the models used for common sense reasoning typically involve a large number of variables, issues of scale arise in searching for additional dependencies. We describe how we use data mining techniques to address this problem and show experimentally that these techniques improve the accuracy of state prediction. We present techniques to improve prediction in the unlabeled as well as the labeled variable case. At a high level, we demons... William Pentney, Matthai Philipose, Jeff A. Bilmes. AAAI 2008.
http://www.sciweavers.org/publications/structure-learning-large-scale-common-sense-statistical-models-human-state
- Strong math and statistics background, including statistical modeling, analysis of variance, and vectors.
- Experience in all stages of a data science project life cycle, including designing and deploying machine learning models in production, and machine learning techniques like ranking, classification, clustering, regression and topic modeling.
- Strong database knowledge and expertise in SQL.
- Deep knowledge of machine learning, information retrieval, data mining, statistics, NLP or a related field.
- Degree in Science, Technology, Engineering or Math, or similar technical expertise in Data Science, Analytics, Modelling, Monitoring or Business Intelligence.
- 6 to 8 years of associated experience with a bachelor's degree, or 4 to 6 years with a master's degree.
- Familiarity with programming languages for data analysis, such as Python and R, tools for data science, and frameworks and tools for data engineering.
Position: Software Engineer Location: Austin, USA Division: MagRabbit USA Contact details - email Please send your CV to:
https://www.magrabbit.com/jobs/solutions-architecture-senior-principal-engineer/
National Technical University of Athens (NTUA) is looking for an Early-Stage Researcher (ESR) to work on the GECKO project under the HORIZON 2020 call. Smart technology is everywhere - in our homes, pockets, and networks. Is smarter use of energy essential for low-carbon energy systems of the future? Or is smartness just a buzzword for new gadgets that require ever-more energy and worsen the digital divide? Smart technology is a mass of hopes, fears and contradictions. This PhD offers a unique opportunity to tackle energy-related issues and uncover consumption patterns, supported by a major new EU-wide PhD training network. The interdisciplinary GECKO network connects leading social, computer and data scientists working on smart technology, artificial intelligence, human-computer interaction, energy, climate change, and responsible innovation. GECKO will target interpretable and explainable Artificial Intelligence (AI) and explore alternative methods to build machine learning (ML) models, drawing on the latest developments in the information and social sciences. An interdisciplinary methodology will be adopted to tackle the most prominent application driver, where ML and social science must be considered together: addressing urgent sustainability and energy efficiency needs, where a successful responsible AI technology must embed social science understanding of people's actions and how they interact with technology. GECKO will train the next generation of research leaders in this exciting and emerging field at the intersection of smart technologies, energy, big data, algorithm design, and user behavior. The candidate will work on applying deep-learning techniques to energy signals to improve their semantic description and identify key consumption patterns. The 36-month program will allow the ESR to work within a multidisciplinary team, enhance their knowledge of deep learning techniques, and help address urgent environmental needs. Exciting benefits are also part of the program. Offer Requirements - Skills / Qualifications. The ideal candidate should: 1) Have a strong background in programming and machine learning (which can be showcased through curriculum courses or certified courses on online platforms). 2) Have work experience with signal processing (e.g. images, video) in a research environment. 3) Possess programming experience in languages such as Python, Matlab and C++, as well as Python-based deep learning frameworks (e.g. Keras/TensorFlow, PyTorch). Knowledge of additional programming languages is a plus. 4) Have knowledge of the German or Swedish language (level C1 or higher), due to secondments in the aforementioned countries.
https://neuvoo.gr/view/?id=ef140bbf68f6
Transfer learning is an area of intense AI research — it focuses on storing knowledge gained while solving a problem and applying it to a related problem. But despite recent breakthroughs, it’s not yet well-understood what enables a successful transfer and which parts of algorithms are responsible for it. That’s why Google researchers sought to develop analysis techniques tailored to explainability challenges in transfer learning. In a new paper, they say their contributions help to solve a few of the mysteries around why machine learning models successfully — or unsuccessfully — transfer. During the first of several experiments in the course of the study, the researchers sourced images from a medical imaging data set of chest x-rays (CheXpert) and sketches, clip art, and paintings from the open source DomainNet corpus. They partitioned each image into equal-sized blocks and shuffled the blocks randomly, disrupting the images’ visual features, after which they compared the agreements and disagreements between models trained from pretraining versus from scratch. The researchers found the reuse of features — the individual measurable properties of a phenomenon being observed — is an important factor in successful transfers, but not the only one. Low-level statistics of the data that weren’t disturbed by things like shuffling the pixels also play a role. Moreover, any two instances of models trained from pretrained weights make similar mistakes, suggesting these models capture features in common. Working from this knowledge, the researchers attempted to pinpoint where feature reuse occurs within models. They observed that features become more specialized the denser the model becomes (in terms of layers) and that feature-reuse is more prevalent in layers closer to the input. (Deep learning models contain mathematical functions arranged in layers that transmit signals from input data.) They also find it’s possible to fine-tune pretrained models on a target task sooner than originally assumed without sacrificing accuracy. “Our observation of low-level data statistics improving training speed could lead to better network initialization methods,” the researchers wrote. “Using these findings to improve transfer learning is of interest for future work.” A better understanding of transfer learning could yield substantial algorithmic performance gains. Already, Google is using transfer learning in Google Translate so that insights gleaned through training on high-resource languages including French, German, and Spanish (which have billions of parallel examples) can be applied to the translation of low-resource languages like Yoruba, Sindhi, and Hawaiian (which have only tens of thousands of examples). Another Google team has applied transfer learning techniques to enable robot control algorithms to learn how to manipulate objects faster with less data.
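To illustrate the two training regimes the study compares, initializing from pretrained weights versus training from scratch, here is a hedged sketch using torchvision; the library, model choice, class count, and weights API (torchvision 0.13 or later) are our own illustrative assumptions, not the researchers' stated setup.

```python
# Illustrative sketch (not Google's code) of pretrained initialization versus
# training from scratch, followed by fine-tuning on a new target task.
import torch.nn as nn
from torchvision import models

def build(pretrained: bool, num_classes: int = 5):
    weights = models.ResNet18_Weights.DEFAULT if pretrained else None
    net = models.resnet18(weights=weights)
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # new task-specific head
    return net

finetuned    = build(pretrained=True)   # reuses features learned during pretraining
from_scratch = build(pretrained=False)  # random initialization, for comparison
```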
https://itcareersholland.nl/nl/google-researchers-investigate-how-transfer-learning-works/
… from close counterparts. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 16(4), pp. 1-14. ISSN 2375-4699.
Some natural languages belong to the same family or share similar syntactic and/or semantic regularities. This property persuades researchers to share computational models across languages and benefit from high-quality models to boost existing low-performance counterparts. In this article, we follow a similar idea, whereby we develop statistical and neural machine translation (MT) engines that are trained on one language pair but are used to translate another language. First we train a reliable model for a high-resource language, and then we exploit cross-lingual similarities and adapt the model to work for a close language with almost zero resources. We chose Turkish (Tr) and Azeri or Azerbaijani (Az) as the proposed pair in our experiments. Azeri suffers from a lack of resources, as there is almost no bilingual corpus for this language. Via our techniques, we are able to train an engine for the Az→English (En) direction, which is able to outperform all other existing models.
https://doras.dcu.ie/23316/
Named entity recognition (NER) is a very important task in Natural Language Processing. In the NER task, the objective is to find and cluster named entities in text into desired categories such as person names (PER), organizations (ORG), locations (LOC), time expressions, etc. NER is an important precursor to tasks like Machine Translation, Question Answering, Topic Modelling and Information Extraction, among others. Various methods have been used for NER in the past, including Hidden Markov Models, Conditional Random Fields, feature-engineering approaches that use Support Vector Machines or Maximum Entropy classifiers for the final classification, and, more recently, neural network based approaches.

Developing an NER system for Indian languages is a comparatively difficult task. Hindi and many other Indian languages present some inherent difficulties for many NLP related tasks. The structure of these languages contains many complexities, such as free word ordering (which affects n-gram based approaches significantly), the absence of capitalization information, and their inflectional nature (which affects hand-engineered approaches significantly). Also, in Indian languages there are many word constructions that can be classified as named entities (derivational/inflectional constructions, etc.), and the constraints on these constructions vary from language to language, so carefully crafted rules need to be made for each language, which is a very time consuming and expensive task. Another major problem is the scarce availability of annotated data for Indian languages. The task is hard for rule-based NLP tools, and the scarcity of labelled data renders many statistical approaches like deep learning unusable. This complexity in the task is a significant challenge to solve. Can we develop tools which can generalize to other languages (unlike rule-based approaches) but still perform well on this task?

On the other hand, RNNs and their variants have consistently performed better than other approaches on English NER and many other sequence labelling tasks. We believe an RNN would be a very effective method compared to fixed-window approaches, as the memory cell takes much larger parts of the sentence into context, largely solving the problem of sentences being freely ordered. We propose a method to model the NER task using RNN based approaches and the unsupervised data available, and achieve good improvements in accuracy over many other models without any hand-engineered features or rule-based approach. We learn word vectors that capture a large number of precise semantic and syntactic word relationships from a large unlabelled corpus and use them to initialize RNNs, thus allowing us to leverage the capabilities of RNNs on the currently available data. To the best of our knowledge, this is the first approach using RNNs for NER on Hindi data. We believe learning based approaches like these could generalize to other Indian languages without having to handcraft features or depend on other NLP related tools. Our model uses no language specific features, gazetteers or dictionaries. We use a small amount of supervised training data along with an unannotated corpus for training word embeddings, yet we achieve accuracies on par with state-of-the-art results on the CoNLL 2003 dataset for English and achieve 77.48% accuracy on the ICON 2013 NLP tools corpus for the Hindi language.
Our paper is mainly divided into the following sections: In Section 1 we begin with an introduction to the task of NER and briefly describe our approach. In Section 2, we mention the issues with Hindi NER and provide an overview of past approaches to NER. In Section 3, we describe our proposed RNN based approach to the task of NER and the creation of the word embeddings for NER, which are at the core of our model. In Section 4 we explain our experimental setup, describe the datasets for both Hindi and English, and give results and observations from testing on both datasets. In Section 5 we give our conclusions from the experiments and also describe methods to extend our approach to other languages.

The NER task has been extensively studied in the literature. Previous approaches to NER can be roughly classified into rule based approaches and learning based approaches. Rule based approaches include the system developed by Ralph Grishman in 1995, which used a large dictionary of named entities [R. Grishman et al. 1995]. Another model was built for NER in 1996 using large lists of names of people, locations, etc. [Wakao et al. 1996]. A huge disadvantage of these systems is that a huge list needed to be made, and the output for any entity not seen before could not be determined. They were weak at discovering new named entities not present in the available dictionary, and also failed in cases where a word appeared in the dictionary but was not a named entity. This is an even bigger problem for Indian languages, which are frequently agglutinative in nature, making the creation of such dictionaries impractical. Later approaches used feature learning based methods with hand-crafted features like capitalization, feeding these features to machine learning classifiers such as Support Vector Machines (SVM) [Takeuchi et al. 2002], Naive Bayes (NB) or Maximum Entropy (ME) classifiers. Some posed this problem as a sequence labelling problem, arguing that context is very important in determining the entities. In that setting, the handcrafted features were used in sequence models such as Hidden Markov Models (HMM) [Bikel et al. 1997], Conditional Random Fields (CRF) [Das et al. 2013] and Decision Trees (DT) [Isozaki et al. 2001]. Many attempts have been made to combine the above two approaches to achieve better performance. An example of this is [Srihari et al. 2000], who use a combination of handcrafted rules along with HMM and ME. More recent approaches for Indian language and Hindi NER are based on CRFs and include [Das et al. 2013] and [Sharnagat et al. 2013]. Recent RNN based approaches to NER include the one by [Lample et al. 2016]. There are also many approaches which combine NER with other tasks, such as [Collobert et al. 2011] (POS tagging and NER along with chunking and SRL) and [Luo et al. 2015] (combining entity linking and NER), which have produced state-of-the-art results on English datasets.

Owing to the recent success of deep learning frameworks, we sought to apply these techniques to Indian language data like Hindi. The main challenge in these approaches is to learn despite the scarcity of labelled data, one of the core problems of adapting deep-learning approaches to this domain. We propose to leverage the vast amount of unlabelled data available in this domain. Recurrent neural networks trained in the usual way have to learn the recurrent layer as well as the embedding layer for every word, and the embedding layer usually takes a large amount of data to create good embeddings.
We formulate a two-stage methodology to utilize the unlabelled data. In the first stage we use unlabelled corpora: we learn Skip-gram based embeddings [Mikolov et al. 2013] and GloVe embeddings [Pennington et al. 2014] on those corpora, using the Wikipedia corpus for Hindi as the source to train these models. This gives us word vectors that are used in the second stage. In the second stage, as illustrated in Figure 1, we use the deep-learning based models: we initialize their embedding layers with the word vectors for every word and then train the network end-to-end on the labelled data. As various approaches have shown, a good initialization is crucial for learning good models and for training faster [Sutskever et al. 2013]. We apply this approach, using word vectors to counter the scarcity of labelled data. The idea behind this is that the models require much less data for convergence and give much better results than when the embeddings are randomly initialized. To get both previous and subsequent context for making predictions we use a bi-directional RNN [Schuster et al. 1997]. Since a vanilla RNN suffers from not being able to model long term dependencies [Bengio et al. 1994], we use the LSTM variant of the RNN [Hochreiter et al. 1997], which helps the RNN model long dependencies better.

Word2Vec based approaches use the idea that words which occur in similar contexts are similar, and can thus be clustered together. Two models were introduced by [Mikolov et al. 2013]: CBOW and Skip-gram. The latter is shown to perform better on English corpora for a variety of tasks, and hence is more generalizable; thus, we use the Skip-gram based approach. A more recent method for generating word vectors is GloVe, which is similar in nature to the Skip-gram based model. It trains embeddings with local window context using co-occurrence matrices: the GloVe model is trained on the non-zero entries of a global co-occurrence matrix of all words in the corpus. GloVe has been shown to be a very effective and widely used method that generalizes to multiple tasks in English. For the English language, we use pretrained word embeddings obtained with the aforementioned approaches, since they are widely used and quite effective; the links for downloading the vectors are provided (note: the linked webpage is not maintained by us): https://github.com/3Top/word2vec-api. However, for the Hindi language we train word vectors ourselves using the aforementioned methods (Word2Vec and GloVe). We start with one-hot encodings for the words and random initializations for their word vectors, and then train them to finally arrive at the word vectors. We use the Hindi text from the LTRC IIIT Hyderabad corpus for training. The data is 385 MB in size and uses the UTF-8 encoding (the unsupervised training corpus contains 27 million tokens and 500,000 distinct tokens). The Hindi word embeddings were trained using a context window of size 5. The trained model is then used to generate the embeddings for the words in the vocabulary. The data will be released along with the paper at our website, together with the word vectors and their training code: https://github.com/monikkinom/ner-lstm. For a comparative study of the performance of these methods, we also compare Skip-gram based word vectors and GloVe vectors as embeddings to evaluate their performance on the Hindi language. The architecture of the neural networks is described in Figure 2.
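The first stage described above, training Skip-gram vectors on an unlabelled corpus and copying them into the tagger's embedding matrix, can be sketched roughly as follows; the corpus, dimensionality, and variable names here are toy stand-ins, not the released training code.

```python
# Sketch of stage one: train Skip-gram vectors on unlabelled text, then build
# an embedding matrix that initializes the tagger's embedding layer.
import numpy as np
from gensim.models import Word2Vec

corpus = [["yah", "ek", "udaharan", "vakya", "hai"],
          ["doosra", "chhota", "vakya"]]        # stand-in for tokenized Wikipedia text

w2v = Word2Vec(sentences=corpus, vector_size=50, window=5, sg=1, min_count=1)

vocab = sorted(w2v.wv.key_to_index)             # toy task vocabulary
emb = np.zeros((len(vocab), 50), dtype=np.float32)
for i, word in enumerate(vocab):
    emb[i] = w2v.wv[word]                       # pretrained vector initializes row i
print(emb.shape)
```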
We trained deep neural networks consisting of either one or two recurrent layers, since the labelled dataset was small. The architecture has an embedding layer, followed by one or two recurrent layers as specified in the experiments, followed by a softmax layer. We experimented with three kinds of recurrent layers (vanilla RNN, LSTM and bi-directional LSTM) to test which would be most suitable for the NER task. The embedding layer is initialized with the concatenation of the word vector and a one-hot vector indicating the word's POS tag. POS tags are generally considered a very useful feature for entity recognition, so we expected them to be reliable; this hypothesis was validated when including POS tags in the embedding improved accuracy by 3-4%. The setup was trained end-to-end using the Adam optimizer [Kingma et al.2015] with a batch size of 128 and a dropout layer (dropout value 0.5) after each recurrent layer. We used dropout training [Srivastava et al.2014] to reduce overfitting and help the model generalize well to the data; the key idea of dropout is to randomly drop units, along with their connections, from the neural network during training.

We performed extensive experimentation to validate our methodology. In this section we describe the datasets and the experimental setup in detail, then present our results and the observations we draw from them. We test the effectiveness of our approach on the ICON 2013 NLP tools contest dataset for Hindi, and cross-validate our methodology on the well-established CoNLL 2003 English named entity recognition dataset [Sang et al.2003].

We used the ICON 2013 NLP tools contest dataset to evaluate our models on Hindi. The dataset contains words annotated with part-of-speech (POS) tags and corresponding named entity labels in Shakti Standard Form (SSF) [Bharti et al.2009]. It primarily contains 11 entity types: Organization (ORG), Person (PER), Location (LOC), Entertainment, Facilities, Artifact, Living things, Locomotives, Plants, Materials and Diseases; the rest of the corpus is tagged as non-entities (O). The dataset was randomly divided into train, development and test splits in the ratios 70%, 17% and 13%. The training set consists of 3,199 sentences comprising 56,801 tokens, the development set contains 707 sentences comprising 12,882 tokens, and the test set contains 571 sentences comprising 10,396 tokens. We use the F1-measure to evaluate our performance against other approaches.

We also perform extensive experiments on the CoNLL 2003 dataset for named entity recognition. The dataset is primarily a collection of Reuters newswire articles annotated for NER with four entity types, Person (PER), Location (LOC), Organization (ORG) and Miscellaneous (MISC), along with non-entity elements tagged as (O). The data is provided with a training set containing 15,000 sentences (approximately 203,000 tokens), a development set containing 3,466 sentences (around 51,000 tokens) and a test set containing 3,684 sentences (approximately 46,435 tokens). We use the standard evaluation scripts provided with the dataset to assess the performance of our methodology; the scripts report the F1-score. We keep the network architecture small because of the constraint imposed by the scarcity of labelled data.
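To make the architecture concrete, here is a minimal sketch in Keras (TensorFlow); it is not the authors' released code, and the sequence length, hidden size and feature dimensions are illustrative placeholders. Each token is fed in as its pre-trained word vector concatenated with a one-hot POS indicator, as described above.

```python
# Sketch of the bi-directional LSTM tagger described in the text (assumptions noted).
from tensorflow.keras import layers, models

MAX_LEN = 50        # sentences padded/truncated to this length (assumption)
EMB_DIM = 300       # word-vector size (assumption)
N_POS = 25          # size of the one-hot POS vector (assumption)
N_TAGS = 12         # 11 entity types + 'O' for the ICON 2013 data

# Each token is represented by its pre-trained word vector concatenated with a
# one-hot POS indicator, so the input is already a dense feature matrix.
inputs = layers.Input(shape=(MAX_LEN, EMB_DIM + N_POS))
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(inputs)
x = layers.Dropout(0.5)(x)                        # dropout after the recurrent layer
outputs = layers.TimeDistributed(layers.Dense(N_TAGS, activation="softmax"))(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```

Swapping the Bidirectional wrapper for a plain LSTM (or a SimpleRNN layer) reproduces the other two recurrent-layer configurations compared in the experiments; a second recurrent layer can be stacked before the dropout for the two-layer variant.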
We used an NVIDIA GTX 970 GPU and a 4.00 GHz Intel i7-4790 processor with 64 GB RAM to train our models. As the datasets in this domain grow, we would like to scale our approach up to bigger architectures.

The results obtained on the ICON 2013 NLP tools dataset are summarized in Table 2. We cross-validated our approach on English using the CoNLL 2003 dataset; the results are summarized in Table 1. We are able to achieve state-of-the-art accuracies without using additional information such as gazetteers or chunks, and without any hand-crafted features, all of which are usually considered essential for the NER task (chunking provides information about phrases, and gazetteers provide lists of words with a high likelihood of being named entities). As predicted, the networks without word-vector-based initialization did not perform well on the NER task, which we attribute to the scarcity of labelled data. We also observe that networks with one recurrent layer perform as well as or better than networks with two recurrent layers; we take this as support for our hypothesis that increasing the number of parameters can lead to overfitting. We saw a significant improvement in performance when using an LSTM instead of a vanilla RNN, which can be attributed to the LSTM's ability to model long dependencies. The bidirectional LSTM improved accuracy further, suggesting that incorporating the context on both sides of a word is very useful. The best model released along with the paper therefore has a single recurrent layer (code available at: https://github.com/monikkinom/ner-lstm).

We show that deep-learning approaches to entity recognition can significantly outperform many approaches based on rule-based systems or hand-crafted features. The bidirectional LSTM incorporates features at varied distances, providing a wider context and relieving the problem of free word order as well. Given the scarcity of data, our proposed method effectively leverages LSTM-based approaches by incorporating pre-trained word embeddings instead of learning them from the labelled data, since they can be learnt in an unsupervised setting. This approach can be extended to many Indian languages, as it does not require a very large annotated corpus. As larger labelled datasets are developed, we would like to explore deeper neural network architectures and try learning the networks from scratch.
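The F1 scores reported above are entity-level (chunk-level) scores. Purely as an illustration, and not the evaluation scripts the authors actually used, the same style of score can be computed from IOB-tagged sequences with the seqeval package:

```python
# Illustration only: entity-level precision/recall/F1 from IOB tag sequences,
# in the same spirit as the official CoNLL evaluation scripts.
from seqeval.metrics import classification_report, f1_score

y_true = [["B-PER", "I-PER", "O", "B-LOC"]]   # gold: one PER entity, one LOC entity
y_pred = [["B-PER", "I-PER", "O", "O"]]       # predicted: only the PER entity

print(f1_score(y_true, y_pred))               # ~0.67: precision 1.0, recall 0.5
print(classification_report(y_true, y_pred))
```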
https://deepai.org/publication/towards-deep-learning-in-hindi-ner-an-approach-to-tackle-the-labelled-data-scarcity
Applied Research Associates, Inc. is seeking a mid-level Mathematical Modeler/Data Scientist for our office in Arlington, VA, to support a multidisciplinary team in the development, programming, and validation testing of mathematical and predictive models. The ideal candidate has a degree in mathematics or statistics, a solid science background, and demonstrated skill in programming mathematical/statistical models of complex biological systems, as well as experience with machine learning techniques. The types of problems include predicting health effects following exposure to hazardous materials (e.g., chemical, biological) and conducting analyses of biological data sets to derive new insights. Ultimately, we use these simulation models to predict the probability of adverse health outcomes, inform the development of mitigation strategies, provide input for medical planning purposes, and support research on the efficacy of treatment modalities. In addition, the candidate should have experience using scripting languages in mathematical software (e.g., Python, R, MATLAB) and/or open source options for prototyping models. Applicants selected will be subject to a government security investigation and must meet eligibility requirements for access to classified information; requirements include being a U.S. citizen.

PRIMARY RESPONSIBILITIES
- Review professional journals and publications to research models and techniques in mathematical simulation
- Work with multidisciplinary project teams to develop mathematical tools suitable for predicting health outcomes
- Write source code that executes mathematical models and, as appropriate, develop scientific software for our customers
- Develop and implement methods in sensitivity analysis and uncertainty characterization for the models that we develop
- Working closely with senior staff, you will:
  - Learn from other subject matter experts (e.g., human physiology)
  - Consider different technical approaches to a problem
  - Participate in the verification and validation of models, from developing test plans to documenting results

REQUIRED SKILLS
- Knowledge of mathematical modeling approaches, specifically those involving differential equations, to describe biological systems
- Working knowledge of data science, particularly in the areas of machine learning and deep learning
- Use of cutting-edge techniques to analyze complex biological data sets and develop insights
- Experience with basic data management in standard programming languages
- Ability to collaborate effectively in multidisciplinary teams
- Outstanding verbal and written communication skills
- Familiarity with Microsoft Office products (e.g., Word, Excel, and PowerPoint) and LaTeX
- Self-motivated, creative, and willing to work as a member of a team, yet organized and able to work independently

REQUIRED EXPERIENCE
- Bachelor’s degree in a related discipline (e.g., applied mathematics, statistics, computer science) and at least two years’ experience
- Background in the sciences, including chemistry, biology, and physics
- Experience working collaboratively with other disciplines to achieve project-level goals
- Minimum of two years of experience in the development and application of mathematical solutions to health-related problems
https://globalbiodefense.com/?post_type=job_listing&p=50296
Held annually in each of North Carolina’s six band districts, the Music Performance Assessment (MPA) is one of the most important events in the year of a school band program. The event is run by the District Bandmasters’ Association and is under the professional jurisdiction of the North Carolina Music Educator’s Association. Each band director selects the difficulty level, or grade level, of the pieces for each group to perform. The State Band Director’s Association annually publishes a list of the literature which may be used at District MPAs. This list is reviewed annually and edited for validity. The march that each group plays does not need to be chosen from this list, but the other prepared pieces must be chosen from this master list.

Each grade level is roughly equal to years of playing experience. For example, a group whose members have an average of four years of experience should be able to play Grade IV music. Because of the many factors which determine the rating of a piece of music, this is only a rough guideline. For example, we often think only of very fast, technically challenging pieces of music as being “hard”, but soft, lyrical, slow music can be a tremendous challenge for the individual and the ensemble. The highest grade levels represent a challenging performance for educated professionals at the end of a six-to-eight-week rehearsal schedule.

The Central District Bandmasters’ Association (CDBA) chooses the site and the judging panel of four adjudicators for each year’s event. The judges are chosen from active and retired band directors from the middle school through collegiate levels. Occasionally a judge is a professional conductor or composer. Judges are considered based on their professional success, adjudication experience, experience with a variety of repertoire and overall musical knowledge. Three of the judges are “stage judges” and provide critical commentary and ratings for each band’s selections. The fourth judge comments on and rates each band’s ability to sight-read an unfamiliar piece.

MPA is not a competitive event. We do receive a rating, but are not compared to other groups. Unlike athletic events and most marching band competitions, each band may enter in whatever grade level they like and are not scored compared to others in their category or grade level. Each group is rated according to how well it meets the musical challenges of the selected pieces in a variety of objective and subjective criteria. Each judge provides spoken and written comments along with a rating of the overall performance. This spoken and written feedback is a valuable tool to help the director and students to evaluate and improve individual and ensemble performance.

Following the “stage” performance at MPA, each band moves to a separate room and, after a short preparation period, must perform a piece they have never seen before. This section, called sight-reading, determines if the students have developed fundamental music-reading skills and the ability to play with good musical style as an ensemble at first sight of a piece of music.
http://fvbb.org/about-us/mpa
This is a very cool journal update addressing the progress of Torment: Tides of Numenera - and it is absolutely worth taking the time to read through if you have even just a passing interest in the game's progress. Here is what they had to say:

TL;DR: Sunken Market WIP Render, Adam on Effort, Kickstarters for Numenera: Strand and Underworld Ascendant, Job Openings, Colin at Rezzed

Hello Tormented Ones, Thomas here. Today we want to talk about Effort, a key mechanic in Numenera and Torment that we have talked about before as part of our Difficult Task system. Additionally, there are a lot of interviews and posts with different Torment developers for you to dig in to. But first, if you read our previous update, you may remember we showed a concept art piece by Daniel Kim, showing the Sunken Market in the Oasis area. We thought it’d be interesting to show you the same area again, but this time as an early render: Sunken Market WIP render by Aaron Meyers and Damien Evans. Note: actual render, not a painting!

Handling Effort

Adam here. In the Torment forums, MReed asked some great questions about how we're handling the Numenera concept of Effort, specifically the UI for such a thing and how it will play. First, a refresher for those unfamiliar with Numenera. In the tabletop game, nearly every task a character attempts involves:

(1) A Task Difficulty determined by the GM. This is a number ranging from 0 to 10, where zero is an automatic success (no roll). All other Difficulties, 1 to 10, are multiplied by 3, and the player must roll a d20 to try to beat that number in order to succeed. This means that any Difficulty of 7 or higher is impossible without help.

(2) Possible Skills or external Assets the PC can apply to the task. These can lower the Difficulty by a maximum of 4 (two for skills, two for assets). This is enough to make easy tasks routine and impossible tasks possible, but just barely.

(3) An applicable Stat (Might, Speed, or Intellect). The player can lower the Difficulty even further by applying levels of Effort to the task. Each level of Effort deducts points from the applicable Stat Pool and then lowers the task's Difficulty by one. PCs can apply a number of Effort levels more or less equal to their Tier—so from 1 to 6.

So for example, the player might come across a devilish system of living wires that have embedded themselves into the flesh of a poor creature, and he wants to remove them without harming the creature. The GM decides that this is an Intimidating task (Difficulty 6), so the player must roll 18 or higher to succeed. However, the player's character is trained in healing and quick fingers, both of which (the GM rules) are applicable to the task, lowering the difficulty to 4. Now he only needs to roll a 12 or higher to succeed. The player also decides that he really, really doesn't want to accidentally kill this poor creature, so he spends four levels of Effort on the task (costing him 9 Speed—we'll talk about how that is calculated later). In doing so, he reduces the difficulty of the task to zero, and so he automatically succeeds (no roll).

It doesn't take careful analysis to see that Effort quickly becomes more important than skills in terms of succeeding at difficult tasks (though skills allow a player to succeed at certain kinds of tasks more often). Not only does this system allow any character to attempt any task, but Effort also allows players to choose which tasks are most important to them and which tasks they're more willing to gamble on.
In TTON, we handle tasks with an Effort dialog. Because Effort is a new mechanic—and a key mechanic at that—we decided to display the Effort dialog every time the player attempts a Difficult Task. "What?!" I hear you say. "You're telling me I have to click away this annoying pop-up every time I try anything?" Yes, that's what I'm telling you. But it's not annoying at all—the opposite, actually. Part of that is there aren't as many difficult tasks as you might think. Each task is uniquely crafted (that is, you won't be picking twenty generic locks in a row), so when there is a difficult task, the Effort dialog adds import to it, making every task a potentially significant event. You don't click the pop-up away. You make a real decision, every time.

("But can't I just reload until I beat the task without Effort?" You could, but in some cases you'd be missing out on content that is only available when you fail some tasks. And anyway, as I've said in the past, savescumming isn't technically any easier, it's just a different way to play.)

What do you see when the Effort dialog appears? This:

- The difficulty of the task. By default, this difficulty appears as one of eleven abstract labels (e.g. Routine, Challenging, Impossible, etc.), but you'll be able to change this in the Game Options to show the actual target number (i.e. the Task Difficulty multiplied by 3) or to not show any difficulty at all.
- The adjusted difficulty of the task. If you have any skills or assets that apply to the task, then the initial difficulty will be visible but crossed out, and the actual difficulty (what you're trying to beat) will appear beneath it. Note that it's possible to have penalties, such that a task is harder than the base difficulty for some characters. That will be reflected here as well. When you mouse over the difficulties, a tooltip will display showing you what skills and assets you have that are adjusting the difficulty (if any). This way, we don't have to clutter the dialog with a bunch of text, but you can have access to all the information if you want it.
- An icon conveying which stat applies to this task. This determines which Stat Pool the Effort cost comes out of. Most tasks will only allow one stat: Might, Speed, or Intellect. In special cases (usually when the PC has certain abilities), a PC might be able to choose to replace the original Stat Pool with a different one. For example, a Jack with the Brute Finesse ability can choose to apply either Speed or Might to non-combat Speed tasks.
- An Effort slider. This allows the player to choose how many levels of Effort he will apply to the task. As he increases the slider, the Effort dialog will show him how much Stat Pool will be deducted and the adjusted difficulty will change to reflect the Effort he's applying.

Sidebar refresher: The first level of Effort costs 3 from the applicable Stat Pool. Every level of Effort thereafter costs an additional 2. If the PC has any Edge in the applicable Stat Pool (another thing you gain each Tier), then his Edge is subtracted from the overall Effort cost. So if a player has 1 Might Edge and purchases two levels of Effort, it will cost him 4 Might (3 for the first level + 2 for the second level – 1 for his Might Edge). If the PC has 3 or more Edge in the applicable Stat, then the Effort slider will automatically be set to however many levels of Effort that PC can get for free.

What about combat (I hear you say)? Aren't there a LOT more difficult tasks in that? There are.
In the tabletop, Effort can be applied to every roll—and the player always makes every roll. That means Numenera players can opt to apply Effort to attack and defense. In TTON, the Effort dialog will appear for every attack you make. Our design calls for tactical combat, so each attack decision is already significant. And just like tasks outside of combat, the choice of whether to invest Effort adds to the significance of those decisions. Defense is different, however. The player is not deciding to be attacked, and the party will likely be attacked several times in a row. We didn't think the Effort dialog would be much fun in that case, turn-based combat or not. Instead, we're treating Effort on Defense as something you can set (or not) on your turn—a kind of defensive ability that every PC can use. Since most attacks are against Speed Defense, that will be the default Stat Pool used for Effort on Defense, but the player can optionally choose to apply Effort to Might or Intellect Defense instead. If a PC is using Effort on Defense, the cost will not be deducted unless they are attacked that round and it will be deducted only on the first attack. So you don't have to worry about what might happen if you apply one level of Defensive Effort only to get attacked by a swarm of steel spiders and lose all your Speed even though none of them actually hit. And of course, if a PC has enough Edge to get a free level of Defensive Effort, they will get that Effort all the time. Keep in mind that there is still a LOT of playtesting to be done, especially with combat. So the details of all this are still subject to change. But this is how we're thinking of it right now. So far, it's working pretty well. Adam out.
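To make the arithmetic from the refresher above easier to follow, here is a quick illustrative sketch of the Effort cost and target-number calculation. It simply restates the tabletop rules as summarized in this update; it is not code from the game, and the function names are my own.

```python
# A quick sketch of the Effort arithmetic described in the update above.
def effort_cost(levels: int, edge: int = 0) -> int:
    """Stat-Pool cost of applying `levels` of Effort with `edge` in that stat."""
    if levels <= 0:
        return 0
    cost = 3 + 2 * (levels - 1)   # 3 for the first level, +2 for each additional level
    return max(cost - edge, 0)    # Edge is subtracted from the overall cost

def target_roll(difficulty: int, skill_and_assets: int = 0, effort_levels: int = 0) -> int:
    """d20 target after skills/assets and Effort lower the Task Difficulty."""
    adjusted = max(difficulty - skill_and_assets - effort_levels, 0)
    return adjusted * 3           # each remaining step of Difficulty is worth 3 on the d20

# The living-wires example: Difficulty 6, two applicable skills, four levels of Effort.
assert effort_cost(4) == 9            # 3 + 2 + 2 + 2 = 9 Speed
assert effort_cost(2, edge=1) == 4    # 3 + 2 - 1 = 4 Might
assert target_roll(6, skill_and_assets=2, effort_levels=4) == 0  # automatic success
```

Running it reproduces the numbers quoted in the update: four levels of Effort cost 9 from the Stat Pool, two levels with 1 Edge cost 4, and the Difficulty 6 example drops to an automatic success.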
http://www.chalgyr.com/2015/03/an-update-on-upcoming-torment-game.html
The aim of this quick activity is to add some fun and challenge to the Speaking part 2 (comparing the photos) of the FCE and CAE exams. Students choose useful expressions to use in this activity and are able to see how much they already know.

The Task

Step 1: Start by eliciting what students need to do in this part of the exam:
- answer the question(s) about the photos
- compare the photos
- speculate about the photos

Step 2: Draw a table on the board. Include as many columns as you have students.

Step 3: Give each student a copy of the exam task. Ask them to identify the theme and main question(s) they need to answer in the task. Tell your students that instead of doing the task in a traditional way, they will be able to choose one of three different challenges, varying in their level of difficulty:
- Level 1: compare two photos
- Level 2: compare two photos and add some speculation in the same sentence
- Level 3: compare two photos, speculate about them and answer the question all in one sentence

Provided their sentences are correct, they score 1, 2 or 3 points, recorded in the table. You might ask other students to help you assess their peers. To make the game more difficult, tell your students they are not allowed to repeat whatever another classmate said. Take a look at some other exam speaking activities here, here, and here.

Personal Experience

I tried this activity with some groups and individual students. They were quite excited to approach the exam task from a different perspective and found it quite rewarding to challenge themselves to use the language they might have felt uncomfortable using before. I was positively surprised that no student settled for only one level of difficulty in the game and they all wanted to see whether they could reach Level 3. We also spent some time discussing what they found the most difficult and came up with some decent model answers. It was a great revision/extension activity that worked as an effective morale booster and a welcome change from the traditional exam grind.
https://www.lessonplansdigger.com/2017/06/05/fce-cae-speaking-part-2-challenge/
This post was originally published on GOOD.Is on 2 May 2017. I learned to teach in a nontraditional classroom. It rarely has a roof or walls, and my students are not always younger than me. My only direct lessons involve tying climbing knots or how to keep people safe in trust-based activities. I observe more than I lecture. Over the course of a day, if I am doing my job well, I listen more than I speak. For over 20 years now, I have been lucky to be a participant in, and then a facilitator for Challenging Outdoor Personal Experience, or COPE, a program in the Boy Scouts of America. That listening is a full-body activity becomes more apparent when the classroom of the day includes the wind blowing over a lake, redtail hawks soaring overhead, and squirrels chattering. When participants in COPE programs in Killingworth, Connecticut, leave the school bus or their cars, they walk over a causeway between a lake and a lagoon, and then up the dirt road into the field. Once there, they enter a new space where how they learn is turned on its head. Technically speaking, the methodology is pulled from the theories of John Dewey and Bruce Tuckman’s stages of group development—forming, storming, norming, and performing—but it falls under the ever-expanding umbrella of “team building.” We ask our participants, who range from scouts and school groups to college athletic teams and corporate groups, to be open to new experiences. At a time when 3.2 million kids are bullied every year, we also talk about what discounting—dismissing another person’s thoughts and feelings—means. Our no discounting policy is strict: Everyone has value and the ability to contribute. Everyone else can teach us something about our world and ourselves, even if we think we have nothing in common. Once we stop discounting and create space where everyone is empowered, we learn that we have far more in common than we might initially think. And then, we walk further into the woods. We talk about “leave no trace”—the idea that we can leave the outdoor space in a better condition than we found it, and act as good custodians so that the next group of people has the opportunity to enjoy this piece of wilderness. We talk about challenge by choice, and how this not only means that no one will be forced past their own boundaries, but also how their attitude in approaching challenges can determine what activity they might be offered next. Our activities include obstacle courses and brain teasers that build skills as we move through different sequences. As the degree of difficulty and risk steadily increases, so does the group’s reliance on each other. Reaching the final goal of rock climbing or completing a high ropes course becomes a progressive lesson in learning and practicing communication and reflection skills. Indeed, after every task, the groups debrief. They might be asked to reflect on something to be celebrated in another member’s efforts, or something they would change about their own. They might be asked to identify how they worked together and what roles they take on in the group, or where the learning moments were. Unlike a multiple-choice exam or a short-answer pop quiz, there are no wrong answers. The students are only building a tool kit that we hope they access after they leave. But most significantly, as each program closes and our staff comes together for our own debrief, we discuss our highlights—the things that could have gone better, and our own opportunities to learn. 
I have found that there is an intrinsic empathy necessary to teach students to push outside of their comfort zone, face their fears, and learn to see the world in a different way. The discomfort of not only forcing oneself to live another’s experience in a particular moment, but also of actively searching for a way to help, is one of the hardest things to overcome. Perhaps, as the needs of classrooms change and the world shifts beyond all of our comfort zones, this lesson is more relevant than ever.
https://michelleanjirbag.com/2020/08/31/why-one-educator-tells-bullies-and-their-victims-to-take-it-outside-way-outside/
What is resilience? The Harvard Business Review defines resilience as “the ability to recover from setbacks, adapt well to change, and keep going in the face of adversity.” When talking about resilience, we are referring to the ability to cope with the highs and lows and to rebound from challenges. In the workplace, this can be applied to an employee’s capacity to manage anything from a challenging workload to discouraging clients or colleagues. Resilience is a vital skill that we need to develop now more than ever because of the ever-growing demands of both work and life. It has been said that people with greater resilience tend to be better at managing stress.

“The moment we believe that success is determined by an ingrained level of ability as opposed to resilience and hard work, we will be brittle in the face of adversity” - Joshua Waitzkin

What are some examples of resilience at work?
- The ability to overcome challenges and problems. Resilient people view difficulty as a challenge. They see failures and mistakes as lessons to be learned and an opportunity for growth.
- Resilient people stay positive and can motivate the team or themselves to keep going. Those who are optimistic tend to be more resilient and are likely to stay positive about the future even when faced with seemingly demanding hurdles.
- They remain organised and focused when things go wrong. Staying focused and controlled allows them to place their effort where they have the most impact, so they are more empowered and confident.
- They respond constructively to problems and criticism.

There are several strategies to build resilience at work:
- Adopt a growth mindset by embracing failure. Everyone makes mistakes; drop the blame game and learn from them to better yourself.
- Recognise and reward resilient behaviour. When the easy option is to give up and you decide to power ahead, recognise it and own that strong mindset.
- Have confidence in your ability to overcome difficult situations.
- Identify a supportive relationship; this could be a colleague or your manager.
https://hrambassador.com.au/2021/03/23/workplace-resilience/
If your recollections are anything like mine, they probably conjure up memories of being bored and only paying enough attention so as not to be caught unawares by a teacher’s question. This was true for me not only in foreign language classes but also in English literature. I am an avid reader and appreciate the classics but cannot enjoy one of the masters of twentieth century literature, D. H. Lawrence. I put this down to the painful treatment that my teacher inflicted on the author’s ‘The Rainbow’ in a secondary school in London forty years ago. I suspect that many readers share this experience: literature classes that do not open up a magical new world of discovery, never transport us to different times and places, and fail to show us that our innermost fears and deepest pleasures are shared by others. These are the very reasons why we need to read. Too often, our classes do exactly the opposite: they snuff out any flicker of interest. The same can be said of my French language classes; reading insipid texts about ‘la famille Bertillon’ and their day at the beach. I am not claiming that all my language classes were tedious or that the teachers were unremittingly unimaginative but there were too many of both. So, what is going wrong in our schools and universities? The first difficulty arises when teachers do not see a text as a piece of writing to be enjoyed but as a means to identify grammar. Stephen Krashen argues that it’s hard for learners to focus on meaning when the teacher’s covert aim is for them to identify relative clauses. In other words, we as teachers use reading lessons to teach syntax or lexis, not to get students thinking and talking about content. The next problem is that reading texts selected for inclusion in a textbook or syllabus are often teachers’ – and only teachers’ – choices. They don’t reflect the interests of the learners. But all the evidence suggests that if our aim in reading is to improve our language skills, it matters not the least little bit what we read. The issue is how much! Booker Prize winners and modern classics compare no better than a soppy romance or a gruesome tale of flesh-eating zombies when it comes to second language acquisition. The trick is to let our students read what they want. At least then there is a chance that they will be turned onto reading. This is true even if learners are allowed to choose the level of difficulty of a text for themselves. Learners lacking confidence in their language skills may feel more comfortable with a text which is way below their level of competence. Besides, they do not stay with books like that for long: they get bored and move on to more challenging works. But teachers routinely and wrongly use texts that are difficult for learners: Krashen’s i + 1. In fact, though, the great man explicitly stated that, when reading, students should choose a linguistic level well within their comfort zone. A much-quoted rule of thumb here is the five finger test: get students to hold up five fingers and drop one every time they come across an unknown word. If they cannot reach the end of a page before there are no fingers left standing, the book is probably too hard for them to enjoy. But why all this emphasis on reading anyway? Because to progress from an intermediate level to a more advanced one demands reading. Take grammar as an example. Just about everything we teach to students is only partly right. 
For instance, we use adverbs of frequency only with simple tenses (I always go to the park on Sundays), not with continuous ones. Don’t trust teachers on this one: they’re always teaching half-truths! And what about never putting a future tense directly after ‘if’? If you will insist on doing that, you’ll certainly lose marks! I could continue, but let’s look at vocabulary instead: ‘sick’ and ‘ill’ are synonyms – so why can’t we say “That doctor is great with ill children”? In short, language is too complicated to be described and practised in its entirety in a textbook or by a teacher. Only by reading do we develop our skills to proficiency level!
https://www.readlistenlearn.net/blog/Where-are-we-going-wrong-with-teaching-reading-skills
Whilst growing up in Cornwall I played cornet for St. Dennis Band. My friend Nick Hitchens played the euphonium and eventually tuba for St. Austell Band. We often competed against each other, either in our respective bands or as soloists. It was whilst we were both studying at St. Austell 6th Form College that I wrote this piece for us to perform at the end-of-year concert in 1984. ‘Fantasy for Tuba’ was written to showcase Nick’s amazing talent. Even at the age of 17 he was a mature and musical performer with great technique, dexterity and flair. We both went to London a year later to pursue our musical education. Nick went to study tuba at the Guildhall School of Music and I went to study piano and composition at the Royal College of Music. In 2020 I rediscovered the original manuscript for ‘Fantasy for Tuba’. As well as being technically challenging, the piece incorporates romantic themes. After a testing cadenza for the soloist, ‘Fantasy for Tuba’ builds to a dramatic climax, with the statement of the final majestic theme bringing the composition to an end.
https://www.angilley.com/product-page/fantasy-for-tuba-piano
The purpose of this study was to determine if it is feasible to simulate the permeability of drugs and toxic agents across human-tissue membranes using computer-generated models of various human tissue types. Recent studies postulated the existence of microscopic domains of specific lipid compositions within the cell membrane. However, due to the absence of critical supporting data about the actual lipid compositions, we were unable to conduct the computer-based simulations needed to construct the membrane models. When the data necessary to construct these simulations becomes available in the near future, the resulting tissue models may aid in the understanding of how these lipid compositions, and thus membrane behavior, vary throughout the body. We believe that this knowledge can have a significant impact on the drug design and development process.

Background and Research Objectives

One of the most critical cellular processes that controls almost every aspect of life is the signaling that occurs from one side of a cell membrane to the other. However, lipid composition can alter the membrane fluidity, thickness, flexibility, and electrostatic properties. Different tissue types contain different compositions of lipids in their cell membranes; thus, cell behavior (and response to external stimulus) can vary in different parts of the body. Recent studies postulated the existence of microscopic domains of specific lipid compositions within the cell membrane. It is vitally important to understand how these lipid compositions (and, by extension, membrane behavior) vary throughout the body. In addition, the toxicological outcome of the administration of a drug or the intake of a toxicant is determined by its concentration in the body over time. Accurate prediction of the distribution of these substances depends largely on membrane permeability through the different tissues and organs. The purpose of this project was to explore the feasibility of using coarse-grained and atomistic computer-based simulations to model human-tissue membranes by changing the lipid compositions. We planned to model lipid membranes to mirror the lipid composition of organs in the body and simulate the permeability of drugs and toxicants across those membranes. The simulations could then be compared to experimental results to validate the minimal lipid compositions that accurately model the tissue types. This knowledge may have a significant impact on the drug design and development process, which is currently estimated to take as long as fifteen years. During this process, 90 percent of drug candidates fail in clinical trials because of toxicological effects or by proving to be ineffective in treating the target disease.

Scientific Approach and Accomplishments

In FY16 we (1) created two models of a human brain-cell membrane composed of sixteen and sixty different lipid types, respectively, (2) ran these models at a coarse-grained resolution using the MARTINI coarse-grain force field for biomolecular simulations, (3) completed the coarse-grained simulations for human brain-cell membranes, and (4) converted the coarse-grained representation to an atomistic level for the 16-lipid model. We projected that the atomistic simulations would continue to run into the next year.
In FY17 we planned to (1) continue the atomistic simulations of sixteen lipid types in human brain-cell membranes, (2) build the models for the heart and liver, (3) run these new models at both the coarse-grained and atomistic level, and (4) select one drug for permeability simulations in brain, heart, and liver tissues. However, for the reasons cited below, we were unable to realize these goals.

Creating representative lipid compositions for different human organs proved quite challenging due to the limited amount of data currently available on organ-specific human plasma membranes. The progress of our work was hampered by the limited availability of critical supporting experimental data: specifically, the absence of actual lipid compositions of human tissues placed critical limitations on this project. To create the models for each human tissue type, the lipid composition of the inner and outer leaflets of the plasma membranes for each tissue must be known. In most current studies in this area, the lipid composition of the tissues and cells is reported for the whole cell, not just the plasma membrane. The whole cell includes other membranes (such as the nuclear and mitochondrial membranes), so the reported composition is not that of the plasma membrane alone. In addition to the difficulty of obtaining a suitable mixture of membrane compositions, the outer and inner leaflets are extremely difficult to separate, which makes quantitative determination of their lipid compositions problematic. Methods of creating artificial lipid bilayers in which the lipid composition of each leaflet is quantifiable have yet to be developed, so quantifying the lipids in each leaflet remains difficult. Finally, data about the lipid compositions of plasma membranes for the ten major human organs simply are not available currently.

Given that lack of information, the only models we were able to create were of the human neural plasma membrane, because its lipid composition was reported in the currently available literature. We created both complex and simple models of the human neural plasma membrane using the MARTINI coarse-grained model. The complex model was composed of fifty-eight different lipid types, and the simple model was composed of sixteen lipid types. Each system was run for 40 milliseconds, showing the systems to be stable and mixing. If the systems were run for a longer period of time, we would expect some of the lipids to begin clustering together, better representing a true biological system.

Impact on Mission

Future work developing the human-tissue membrane models would support Lawrence Livermore National Laboratory's core competencies in bioscience and bioengineering by determining the feasibility of computational testing and prediction to identify better drug candidates faster. The membrane models developed by future work may be used to simulate exposure to unknown chemicals to predict human outcomes and would be relevant to chemical and biological security. Support of high-performance computing, simulation, and data science at the Laboratory would be realized through the development of new models and protocols that could be applied to other high-performance computing efforts. The anticipated large size of the simulations will require newly designed workflows for molecular dynamics simulations.

Conclusion

The stated goal of our feasibility study proved to be too ambitious, given the current level of research in the field of lipidomics.
The difficulties of compiling a suitable mix of membrane compositions and of determining the lipid compositions of tissues quantitatively, as well as the absence of data about the compositions of plasma membranes for all of the major human organs, were significant factors in our inability to realize all of our project's goals. However, with the continued growth of the field of lipidomics, methods for growing excess plasma membranes from living cells are being developed, so new experimental data will become available in the future.
https://ldrd-annual.llnl.gov/archives/ldrd-annual-2017/bioscience/16-FS-007
Topic: How to use logos in an essay. Author: Elwen Gibson. Published: Fri, Mar 22, 2019, 9 AM.

To know how to write an essay, first and foremost you should identify the type of essay you are about to write. When we talk about essay types, in most cases we deal with the following: For and Against Essays, Opinion Essays, Providing Solutions to Problems, and Letters to the Editor. Look at what you have read for each of the main points of your essay and work out how you can talk about it in your own words, or in a more informative way. Look at your essay research notes and decide for yourself whether the writers have made claims which, in your opinion, lack substance. If necessary, compare different claims and write down which of them is more valid, in your opinion, and explain why to your reader. Once you start to break it down in this way, you can see that learning how to write essays is not overwhelming - all you have to do is write a short piece of text for each of the ideas you are presenting. Once you have the structure written down in note form, with the number of words for each paragraph, you can start to work on the details of your essay content.
https://www.ukbestpapers.com/how-to-use-logos-in-an-essay/christopher-rabon-argument-social-media-digital-how-to-use-logos-in-an-essay-1550683/
Summer Dance Diary: Week FOUR Student McKenna Collins writes about her fourth week at MCB School’s Summer Intensive Dance Program in this week’s Summer Dance Diary! Dear Diary, We have just completed the fourth week of classes at Miami City Ballet (MCB) School’s 2013 Summer Intensive Program and I’m feeling very accomplished and inspired! Coming into this program as an Apprentice with Madison Ballet in Madison, Wisconsin, I was eager to begin learning from such a renowned school. After having finished the fourth week, it is clear to me as to why it is one of the best schools in the country. From the spacious facilities to the worldly teaching staff, this program has been an incredible learning and growing experience…and it’s not even over yet! This is my first summer with Miami City Ballet as well as my first time in Miami. Coming from a state where you can drive for miles and only see fields of corn and lots of cows, it’s a very different atmosphere here in the city! From the moment I stepped into the beautiful studios just off of South Beach and met all of my wonderful teachers, I knew this would be a great summer. Having participated in various summer intensives, such as Ballet Chicago, Pacific Northwest Ballet and Pittsburgh Ballet Theatre, I was looking forward to continuing my Balanchine training here at MCB School. We have been covering a wide range of styles from classical to neoclassical, and learning from some of the most experienced dancers in the world. But, with experience comes expectation! Week four has been the toughest of them all so far. The teachers are really challenging us to perform at our highest level possible every day and to strive for more in every class. It’s that kind of support and encouragement that motivates me to work hard and be the best that I can be! One of my favorite parts of MCB School’s Summer Intensive Program is the amazing faculty that we have had the privilege of studying under! Each of them is so accomplished and gifted — it is humbling to be able to take class and receive corrections from them. Something that I have noticed in most of my classes here is the emphasis the teachers put not only on making each movement technically correct, but also on dancing the movements. Being in such a prestigious environment has motivated me and fueled my passion for dance even more! As the weeks progress, so has our repertory piece come together. It’s an honor to have Ms. Maribel Modrono as our Repertoire choreographer. She has set a beautiful neoclassical contemporary ballet piece on our level, which we are very excited to perform in just 6 days. She is so creative and passionate about what she does — it is truly inspiring to learn from her! After a long day of rehearsals, my roommate and I love nothing more than being able to walk right out the doors and on to the beach! We spend our free time shopping on Lincoln Road, trying new foods and of course, soaking up the sun. Celebrating my birthday here was also a once in a lifetime treat. Overall, MCB School’s Summer Intensive Program has been an eye opening journey for me that I will never forget. The friends I have made here I will cherish forever. The corrections I have received I will take back home with me and continue to perfect. I have grown as not only a dancer, but also as a young woman here this summer and had the experience of a lifetime! Yours truly,
https://www.miamicityballet.org/insider/summer-dance-diary-week-four
The variation cycle and the fugue are genres in which Max Reger developed a special mastery. Thus one can speak of Reger’s unique Bach Variations op. 81 as the culmination of his piano oeuvre, offering everything that is typical and characteristic of Reger’s style. No other piano work by this great Bach admirer can measure up to this monumental composition in terms of sheer size and inner weight. Written in Munich in 1904, the Variations are considered challenging to play and difficult to comprehend; but aspiring pianists should not let themselves be deterred by this legend. Behind the complex notation are a clear structure and a wealth of expression. Technical complexity and aesthetics here correlate in an almost classical manner.

About the composer: Max Reger, a late-Romantic composer who combines a chromatic tonal language with Baroque and Classical forms, thus anticipating 1920s neoclassicism.

1873: Born in Brand (Upper Palatinate) on March 19, the son of a teacher. First piano lessons from his mother.
1888: After a visit to Bayreuth (for Meistersinger and Parsifal), decides on a career in music.
1890–93: Studies with Hugo Riemann at the conservatory in Wiesbaden; composes chamber works. Thereafter he endeavors to publish his own works as a freelance composer, albeit with multiple failures.
1898: Return to his parents’ home in Weiden. Composition of organ works: choral fantasies, “Fantasy and Fugue on B-A-C-H,” Op. 46 (1900); Symphonic Fantasy and Fugue (“Inferno”), Op. 57.
1901–07: Living in Munich.
1903: Publication of his “On the Theory of Modulation,” causing Riemann to feel attacked because Reger espouses a different understanding of the role of chromatics. “Variations and Fugue on an Original Theme,” Op. 73.
1904: Breakthrough with his first performance for the Allgemeine Deutsche Musikverein (General German Music Association). First volume of his “Simple Songs” for voice and piano, Op. 76; String Quartet in D minor, Op. 74, one of the most significant works in that genre at the beginning of the century.
From 1905: Instructor at Munich’s Academy of Music. “Sinfonietta” in A major, Op. 90.
1907–11: Music director and professor of composition at the University of Leipzig. Orchestral work “Variations and Fugue on a Theme by Hiller,” Op. 100.
1909: “The 100th Psalm,” Op. 106, his most popular choral work.
1911–14: Director of the royal court orchestra of Saxe-Meiningen.
1912: “Concerto in the Old Style,” Op. 123. Orchestral song “An die Hoffnung” (“To Hope”), Op. 124.
1913: “Four Tone Poems after A. Böcklin” for large orchestra, Op. 128; “A Ballet Suite,” Op. 130.
1914: “Variations and Fugue on a Theme by Mozart,” Op. 132.
1915: He resides in Jena. Late compositions.
1916: Death in Leipzig on May 11.

Reger's compositional style is well defined in his “Variations and Fugue on a Theme from J.S. Bach,” edited by Egon Voss. It stands out as a large-scale piano work among his mostly single-movement pieces.
This technically demanding piece was dedicated to, and premièred in 1904 by, the pianist August Schmid-Linder, who referred to it as "unnerving at first glance". This work can be considered one of Reger’s greatest achievements for the piano ... This edition, enclosed between the familiar blue covers of Henle Verlag, is characteristically thorough in detail and clear in print.
https://www.henle.de/us/detail/?Title=Variations+and+Fugue+on+a+Theme+by+J.+S.+Bach+op.+81_493
I'm going to challenge your premise a bit - why not drop XP-based levelling altogether and use milestone levelling instead? In my time as a DM and a player, I've found milestone has a few advantages: Less resource management. Counting all your XP is a bit tedious. Less DM work. You can tailor encounters that are fun and play to your party's strengths, and ...
[55] Carcer and I can independently verify that all your math checks out. That said, take a deep breath. I doubt this question was prompted by your lack of faith in your own math and reading skills, but instead by your DM's insistence that this encounter wasn't deadly, and this much bold formatting and all-caps in a question (prior to style edits at least) makes ...
[32] I'll step through each of the classes individually, but first the broad strokes: [Most] Spellcasters will fare much better than everyone else. The main check on the power of a Spellcaster is their limited resources. If a Level 9 character uses a 5th Level Spell Slot, that's it: that's the only fifth level spell they'll get for the whole day. Wizards and ...
[26] At level 2 (on average). From here, CR tells you the upper maximum difficulty of the monster, assuming a party of 4. Since the Mimic has CR 2, it's a challenge for a level 2 party. That being said, you can easily adapt this. After a boss fight, the level 3 party is low on resources, and finding a mimic instead of loot can be a challenge for them. ...
[24] You were just over "1 day's XP budget" (five minute adventure day). In the Basic Rules p. 166/DMG p. 84, there's another table that lays out the estimated "adjusted XP" for an entire adventure day. For your party: 7th / 5,000 x 4 = 20,000. (Your adjusted XP calculation was correct at 23,400.) The usual adventure day is designed with "6-8 encounters of ... (see the XP-budget sketch after this list)
[21] Surprise is very much up to the GM. How does surprise work? The section on "The Order of Combat" states: COMBAT STEP-BY-STEP 1. Determine surprise. The DM determines whether anyone involved in the combat encounter is surprised [...] The section on "Surprise" states: The DM determines who might be surprised. If neither side tries to be stealthy, ...
[17] Yes, typically award the full encounter XP. The Monster Manual says you gain the XP for "defeating" monsters. Typically this means killing them but, as it also says: ...the DM may also award XP for neutralizing the threat posed by the monster in some other manner. Now, I can't speak for other groups or DMs in order to tell you what is "standard", but ...
[15] Every three turns, unless the DM decides otherwise. This answer presumes that the only material you are using as a reference are the PHB, MM and DMG. If you also have the Holmes Basic Dungeons and Dragons book (which was published as a precursor to AD&D as the game evolved) the answer is in that tome. How did I come up with that answer from the DMG? (...
[15] The answer to this is slightly more complicated than I think most would like. There's no easy way to calculate the total CR of an encounter; CR is one factor used in assessing Encounter Difficulty. (Which appears to be your actual question, assessing encounter difficulty.) You (1) use CR to calculate the XP value of the encounter, and (2) then compare that ...
[14] I'm not certain that describing the fight as "Deadly" for your level is wholly inappropriate; however, there is some math we need to knock out first. Frost Giants are weaker than their Challenge Rating would suggest. In the Dungeon Master's Guide, the advice for creating custom monsters provides advice on how to stat creatures based on what their Challenge ...
[13] Every encounter with an intelligent creature can be a social encounter. In some cases, it is really smart for your PCs to first parley before deciding to get violent. You need to discuss with your players what their operating mindset is: when they meet a bunch of humans or humanoids (you don't need to say "you see seven bandits") what is their first ...
[13] Attacking the camp is never the only option. There are a lot of factors that go into determining the balance of encounters in D&D. The CR guidelines, players' skill, party composition and often the DM's gut instinct all play a role. Without being in your DM's head or even at your table we can't tell you if they are stacking things against you. However we can ...
[11] Maybe, maybe not. This party make-up is far from ideal, but it might be doable. How doable will depend on how much guidance they accept from you, as the DM. The main problems: Healing. You know this already; there is very little healing in the party, and none at level 1. There are a couple of ways to mitigate this, however. Give them healing potions. Have ...
[10] The question is broad, so I will be giving a broad answer, without entering the specifics of each one of your sub-questions (as I feel that actually requiring an answer to each would result in closing the question). The general answer is: reliability gets punished, bursty stuff gets rewarded. With that in mind... Which classes are most affected? Classes ...
[10] "Mathematically", this is beyond a "deadly" encounter, but that doesn't mean the DM planned on killing you. Using the tables in the DMG (page 82): A "hard" encounter for 4 level 8 characters would have monsters worth 5600 XP, a "deadly" encounter 8400 XP. 20 "minions" = 2000 (at 1/2, which would be very beefy for their CR seeing as other 1/2 CR has around ...
[9] You will probably struggle with the adventure as written, because of the newness of the players as much as the composition of the party. I ran Lost Mine of Phandelver for a brand new player, as her first experience with D&D -- or indeed with any roleplaying game. She wanted to play a genasi wizard. This was a duet experience (i.e., a DM and one player), ...
[8] Encounter Design. Xirema and KorvinStarmast have done an excellent job in analyzing your particular encounter, but I want to get a bit more into encounter design and the tools for calculating difficulty. In my experience as DM and player, I've found that the encounter calculators are really not all that useful in terms of what's reasonable during a day. ...
[6] No. An encounter does not start, initiative is not rolled, and surprise is not determined, until some combatant has reason to attempt to initiate it. For all combatants to be surprised, that must mean that no combatant has noticed a threat. And if no combatant has noticed a threat, why is initiative being rolled? There is an important distinction to be ...
[6] There are a few techniques that I've tried myself which have been effective for scaling combat challenges up or down, and they should apply to series of encounters just as well. 1. Adjust the number of enemies in combat. The action economy is a big deal, especially when enemy groups mix types of enemy to allow more possible combinations of actions the enemy ...
[6] Most likely not. In fact, I can say with reasonable certainty that they won't even really survive the very first encounter "without too much trouble". First level characters are notoriously squishy and being outnumbered at level 1 is a very good way to get dead. The very first encounter involves 4 hidden goblins who will very likely surprise at least 2 of ...
[6] Make subsequent encounters harder. Disclaimer: this is based on my experience as a GM running my own adventures, not LMOP. It seems like you're afraid of giving your PCs too much power, allowing them to breeze through the end content of the module. This is a valid concern, as the players might get less engaged in the game. If you see that this is the case, ...
[5] Quite A Lot. Minimum Draw: You don't have one, so anyone with an Initiative modifier of -2 or lower gets dealt ZERO cards. They don't get to act in combat. At all. That's silly and needs to be corrected. Slow Dealing: Dealing out 3+ cards to every player, NPC, and creature will consume a surprising amount of time. No idea if you ever played Deadlands Classic, ...
[4] Based on my experience running a sandbox game in OSR, Pathfinder and D&D 5: Intelligent monsters. You place some intelligent, and not uncompromisingly bloodthirsty, creatures in your marches. Most intelligent creatures care for their life and are unlikely to attack on sight without provocation. Therefore, they can be talked to, in principle. Then it is ...
[4] Challenging experience that will need slight changes. I am currently in the middle of playing LMoP as a first-time DM for first-time players with a main party of: Gnome Wizard, Tiefling Rogue, Halfling Fighter. My overall experience has been that with some changes to enemy behaviour it is quite fun for those who seek a more challenging type of play. As you can ...
[4] A Small Guide to Socializing and Domestic Encounters. It is common for D&D 5e to lead us down a path that everything in the Monster Manual is innately a Monster. The lore behind each monster entry is pared down from previous editions and the stats emphasized. But it is not the DM's job to make a bunch of combat encounters - it's the DM's job to make a ...
[4] How to do it RAW: Using the Creating Encounters section from the Dungeon Master's Guide (page 81), a hard encounter for a single 5th-level character has an XP value between 750 and 1,100. For a single character (or for two) the Party Size adjustment (DMG 83) says to use a modifier of 1.5 on the monsters' XP value. Reverse engineering a bit, our XP budget for ...
[4] When adventurers defeat one or more monsters - typically by killing, routing, or capturing them - they divide the total XP value of the monsters evenly among themselves. If the party received substantial assistance from one or more NPCs, count those NPCs as party members when dividing up the XP. (DMG 260.) The question is: is the encounter solved, the threat ...
[4] Tied to D&D: If you're unable to escape the gravitational pull of the most well-known name of Table Top RPGs then I have a few suggestions: Reduce HP/AC but increase Damage and To Hit (on monsters). This will allow your players to hit more often, kill more often, but the monsters have the same advantage. Use this solution if you want combat to have the ...
[3] Encounter difficulty might depend heavily on the method and scenario... The rules on encounter design and "expected" XP values are explicitly described as guidelines, and while the default intent seems to be to award the same amount of XP regardless of the method, they also caution that effective encounter difficulty may vary, and that XP might have to be ...
[3] With their goblin masters gone, the wolves' disposition toward strangers is probably at best indifferent, unless they were given a command like guard before the goblins left, in which case they'd be at best unfriendly toward anyone but the goblins. The Handle Animal skill won't improve their dispositions toward the party; that's what the extraordinary ...
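Several of these answers lean on the same adventuring-day arithmetic: the 20,000 XP daily budget and the 23,400 adjusted XP quoted in the [24]-vote excerpt. The sketch below is illustrative only; it uses just the numbers quoted in those excerpts, plus the DMG's group-size multipliers as I recall them, so treat the table values as assumptions to verify against DMG pp. 82-84 rather than as authoritative figures.

```python
# Illustrative sketch only. The 5,000 XP per-character daily budget and the
# 23,400 adjusted-XP figure come from the answer excerpts above; the
# group-size multipliers follow the DMG's encounter-multiplier table as I
# recall it, so verify against DMG pp. 82-84 before relying on them.

DAILY_XP_BUDGET_PER_CHARACTER = {7: 5_000}  # only the level quoted above

def encounter_multiplier(number_of_monsters: int) -> float:
    """Multiplier applied to raw monster XP to get 'adjusted' encounter XP."""
    if number_of_monsters <= 1:
        return 1.0
    if number_of_monsters == 2:
        return 1.5
    if number_of_monsters <= 6:
        return 2.0
    if number_of_monsters <= 10:
        return 2.5
    if number_of_monsters <= 14:
        return 3.0
    return 4.0

def adjusted_xp(monster_xp_values: list) -> float:
    """Total monster XP times the group-size multiplier."""
    return sum(monster_xp_values) * encounter_multiplier(len(monster_xp_values))

def daily_budget(party_level: int, party_size: int = 4) -> int:
    """Adjusted-XP budget for one adventuring day, for the whole party."""
    return DAILY_XP_BUDGET_PER_CHARACTER[party_level] * party_size

budget = daily_budget(7)                      # 5,000 x 4 = 20,000, as quoted above
print(f"Daily budget for four 7th-level PCs: {budget}")
print(f"Adjusted XP of 23,400 over budget?  {23_400 > budget}")
print(f"Twenty 100-XP minions, adjusted:    {adjusted_xp([100] * 20)}")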
https://rpg.stackexchange.com/tags/encounters/hot?filter=year
On Musical Excellence
Three weeks ago, when handing out this year's music-to-be-learned to students, I gave a kid - a freshman in high school - Elgar's Cello Concerto and told him to start learning notes. What was I thinking? This is a piece I didn't begin learning until college and, though every angsty middle school kid tries to play it, I remember my learning process being laborious and unsatisfying - because in college, I (like any/every middle school kid) wasn't really good enough to play it. But, my student really wanted to take a stab at the piece. He wouldn't shut up about it. So after forcing him to hack his way through Haydn's C Major Concerto as a prerequisite, I relented. And here we are. We musicians love to tout the life-long and academic benefits of a solid musical education. The National Association for Music Education has a big ol' list of those benefits on their website. Personally, I'm tired of justifying music education by citing all the non-musical benefits, and I'm not the only one. Despite all that is great about a musical education, there is a problem: if we continue to justify our relevance this way, and we continue to expect parents to see these touted results of their kids' music study, we have to actually teach music correctly. I've written something about this before, but recently I've been seeing more examples - in school orchestra and ESPECIALLY in private instruction - that make me believe it isn't happening. By justifying music programs and music lessons with less artistic, more academic reasoning, performers and teachers cheapen their subject area while simultaneously holding themselves to an almost-impossibly high standard. How can teachers and programs succeed in creating great musicians who also have great test scores and critical thinking skills if (I've noticed) programs spend a large amount of time competing with each other to get the best adjudication scores, have the most All-State students, and - especially - attempt to play the hardest music? Too often, I've heard good high school orchestras and high school student soloists perform repertory that is beyond the individual orchestra's and performer's technical capabilities. They perform these pieces to an adequate - though musically substandard - level. There's no musical brilliance at play, and no deep dive into the intricacies of the piece. There's no phrasing or appreciation of the music theory behind the dots and lines. This happens in my private teaching ALL. THE. TIME. I've had high school students come to their first lesson with terrible bow hand position and no sense of pitch or fingerboard geography, but they've "learned" a major concerto, like Dvorak or Tchaikovsky, in hopes of living up to unrealistic and incorrect expectations set forth by their parents or their peers. These students aren't enjoying those perceived benefits of a musical education set out by NAfME and music teachers everywhere; they are just barely getting by, if they are getting by at all. I've met some teachers who teach their students by rote - call and response, usually - when teaching rhythm and pitch, rather than forcing students to decipher complex rhythmic arrangements or the organization of notes on the staff. The students aren't improving their pattern recognition or spatial intelligence if they're being spoon-fed the answers to these problems.
Far too many teachers allow their students to get away with ignoring various parts of musical notation, like sharps, flats, pitch names, and more, i.e., not actually reading music. In the same vein, I've taught quite a few students who are so afraid to play wrong notes or make bad sounds that they never take technical or musical risks when (after they've supposedly learned all the notes and rhythms) I force them to think about phrasing or sound production, and even proper finger placement and hand position. I've given trial lessons to new students who can "play" advanced repertory but can't sightread a piece written for third-graders. Give a student a piece of music that is too hard for them, and they will be lucky to rise to the level of their current competence. Few high school cellists should be performing a concerto by Schumann. Almost no high school orchestra should be playing symphonies by Shostakovich. The day that I hear a high school student working out a solo piece by Lutoslawski is the day I tip my cap to their "adventurous" programming, and then permanently skip town and quit this job. My thought process is pretty simple: by giving students music that is too technically AND/OR musically difficult, you are setting the student up for failure. Programming or assigning music on the basis of "exposure" to a composer's great work, the instructor's love of the piece, or expansion of the student's/orchestra's repertory list is just an excuse for allowing mediocrity. Expose the kids to the old masterworks using YouTube or Spotify instead, and remember that we're not in a hurry to play the pieces we love; there's plenty of time - a lifetime, in fact. Also, nobody cares about repertory lists except booking agents from 30 years ago. I don't mean to imply that students can't take these difficult pieces and "pull it off," but that shouldn't be the level of expectation that we are setting. Some of my worst memories of my doctoral program came from the university symphony orchestra dress rehearsals, when the conductor would say - EVERY TIME - at the end of the rehearsal, "Well, there's no problem a performance can't fix." This comment always rang as an acceptance of our collective failure, often as a result of poor, thoughtless programming. By handing inexperienced underclassmen a Schubert quartet, the instructor is telling the students that s/he knows they are going to fail, but doesn't care. At least the students will get the (bad) experience of (poorly) performing the piece. I'm constantly inspired by teachers who step foot in the studio or school every day for the pure joy of seeing students play music and watching them succeed. But I don't understand why some of them aren't setting a standard of true musical success. So many are handing out difficult concertos and symphonies just for the sake of doing it. There's more to learning a piece of music than playing the notes and the rhythms. In fact, there's much more! I am both guilty of this mindset and a victim of it. I assign students music from the 20th and 21st centuries because I believe it is necessary for them to be playing things that aren't the old warhorses. But few students are prepared to dive into the musical depths of one of Britten's Suites for Cello, and even fewer have the technical capabilities to "pull it off." Back in high school and college, I played music that was way too hard for me just so that I could say that I had done it: I ignorantly used the Dvorak Cello Concerto as my youth orchestra audition piece when I was in tenth grade!
(Almost 20 years later, I'm just now feeling good enough to do it justice.) I programmed recital repertoire because I liked the way certain pieces sounded on the recordings: as a junior in college, I played a program of late-Beethoven and Shostakovich sonatas with a pianist who banged on the keys while I hacked my way through the notes. It didn't even occur to me to read musicological writings on these pieces, and I definitely didn't do any score study before I started learning. My private students want to play harder and harder music, which I assume they want to do so that they can keep up with their friends and peers, who are themselves playing harder and more challenging music. School orchestras that can't really play in tune are playing Beethoven symphonies. Young quartets are working on Brahms when they should be playing Mozart. Students' (and some teachers') definition of "hard music" is different from mine. I believe that students can still be challenged even if they are playing what is colloquially (but incorrectly) perceived as "easier" music. I think that musical standards can be raised if we pay just as much (more?) attention to the subjective side of music as we pay to the objective side. It seems like it should be easy: instead of playing this "hard" music and worrying about notes that students don't/won't learn, or a specific, technically challenging section that will nevereverever sound good, students will be able to "master" a work that is technically simpler, but musically complex. Rather than trying to learn how to play the notes up to the day of the performance, students finish the basic outline of the piece (notes, rhythm, etc.) long before the performance, and spend the remaining time thinking about, struggling with, and mastering musical considerations. For example, let's take Bach: his Six Suites for solo cello are well-known, and generally aren't as technically challenging as other, later works for unaccompanied cello, like Dutilleux's Trois strophes. Playing the First Suite gives an advanced student the opportunity to delve into the mysterious phrasing and interpretation of Bach's music. They have the time to ponder the theoretical importance of this note, or that agogic accent. They are given the freedom to experiment with phrasings and bowings that drastically alter the motion and feel of the piece. If they were spending all of their time learning difficult notes and practicing difficult shifts, they'd never get to this point. Nobody judges Yo-Yo Ma's advanced musical capabilities when he plays the First Suite. Haydn's opus 20 string quartets are NOT easier than Schubert's 'Death and the Maiden.' In fact, these pieces are equally difficult, but in very different ways, requiring very different technical and musical considerations. Both works are hard, both have complications, both are trouble. It's just a different kind of trouble. The Emerson Quartet has released recording after recording with Mozart and Haydn quartets. Nobody second-guesses their mastery of the repertoire. Playing a Haydn (rather than Shostakovich) symphony in your school orchestra or a Mozart (rather than Tchaikovsky) concerto in your private lesson isn't failure - it's a natural step along the path to technical and musical mastery.
Top-notch musicians are far more impressed when you play an "easier" classical piece extremely well - stylistically, technically, and with thoughtful musicality - than when you almost pull off one of the "harder" romantic concertos, with mostly correct notes and approximate rhythms. I don't expect there will ever be a day when someone criticizes the Vienna Philharmonic or Anne-Sophie Mutter for performing a symphony or concerto by Mozart. Anyone who is assigning music to young students must stress that there is more to the music than objective standards, like notes and rhythms. The cello part in a Haydn piano trio may be simple, but it's not easy. Subjective things, like phrasing, tone, articulation, and interpretation inspired by music theory and score study are all required to make a great performance. The ultimate goal of any performer is to have total command over the technical difficulties of a piece so that they have the freedom to make decisions that best express their musical interpretation of that piece. Programming music because you like it, because others will be impressed that you attempted it, or because of the piece's difficulty rating on the American String Teachers or Royal Conservatory string syllabus - rather than because of the possibility of musical excellence - stunts students' musical growth, and all but eliminates the non-musical benefits of a musical education. When a piece of music is too technically difficult for students, the intellectual and creative benefits of learning and performing music take a back seat to regular, just-like-every-class, no-benefit struggles. It's not just the notes. It's not just the rhythms. We educators need to stop allowing our desire to expose students to hard music to trump what should be our ultimate goal in music education: promoting complete musical excellence at the highest level.
https://www.justin-dougherty.com/blog/2018/9/6/on-musical-excellence
There is understandably a lot of comparison and confusion between different financial qualifications. CFA vs CPA is no exception – they even sound similar! We have had chartered accountants share their CFA experiences before, but in this article we will specifically compare the CPA and CFA charter in all aspects: career paths, exam formats, exam difficulty, and salary differences. This analysis has been a long time in the making, and we think we've come up with a definitive comparison. Let's dive in!
- CFA vs CPA: Career Path Differences
- CFA vs CPA: Exam Differences
- CFA vs CPA Difficulty
- CFA vs CPA Salary Differences
- Which is Better, CFA or CPA?
CFA vs CPA: Career Path Differences
The CPA is fundamentally a qualification for those on an accountancy-focused career path, with job roles such as accountants, comptrollers, financial managers, CFOs, etc. The CFA charter is for investment management and advisory, such as investment analysts, portfolio managers, investment strategists, consultants, wealth managers, etc. CPA = compulsory, CFA = not compulsory: An important difference is that the CPA is often compulsory, or at the very least heavily encouraged, in relevant roles, whereas the CFA charter, while encouraged in the investment industry, is not compulsory.
CFA vs CPA: Exam Differences
Differences in Curriculum Focus
The CFA curriculum focuses on investment management and valuation – aspects that help investors make investment decisions. The CPA curriculum focuses on auditing, financial accounting, and regulation – ensuring companies accurately reflect their business performance and financial health. This analogy about CPA vs CFA might help: CPAs prepare and audit financial statements, while CFAs read and analyze the financial statements.
Depth and breadth of material
In terms of depth of material, the CPA and CFA are about the same when comparing similar topics, such as pension accounting and FRA. Generally speaking, when comparing similar topics, neither one is harder than the other. I would say that the material goes into similar depth, but the CFA exams have the trickier and more confusing questions. However, the CFA exam is considered a lot tougher overall because it covers a much wider range of material compared to the CPA exams. This not only means there are more areas to study, but also makes the exams harder, as you have to remember a lot more material at the same time.
CFA exam = 3 levels, CPA = 1 level
Each CFA level also builds on the curriculum from the previous level, i.e. CFA Level 2 will require a strong knowledge of CFA Level 1, and CFA Level 3 will require a good understanding of CFA Levels 1 and 2. The CPA exam comprises 4 exams, but arguably these are just 4 parts of one 'level'. The material for each exam is self-contained, which means you can study for each exam without having to remember material from another exam. You don't even have to take them in order, as long as you finish all 4 exams within 18 months.
CFA exams likely involve learning more new material
One important thing, especially if you're considering the CFA after the CPA, is that by the time you're eligible to take the CPA exam, you're likely to have already learned 90% of the information needed to pass it (from your current accounting job) – you just need to revise and review. The CFA exam is likely to involve a lot more learning of new material and concepts.
CFA vs CPA: study time needed
Because the breadth of material is not as wide, and candidates are likely to be reviewing concepts they learnt at work already, the CPA exam requires significantly less time to prepare for than the CFA exam. The AICPA recommends that candidates spend 300-400 hours studying for the entire CPA exam. The CFA exam usually involves 300-400 hours of study per level, so 900-1,200 hours in total, assuming you pass all levels the first time! So the CFA exam requires approximately 3x more study time than the CPA exam.
CFA vs CPA Difficulty
Are the CFA exams harder than the CPA exams? Or are the CPA exams harder than the CFA exams? As we've established when looking at the differences between the CFA and CPA exams, the breadth, depth and length of the CFA exams combined make the CFA exams a lot more challenging to undertake and pass than the CPA exams. But how much more difficult? No one has quantified this in a comparable manner… until now.
Reasons why CFA exams are harder than CPA exams
Based on the points made above, here are the main reasons why the CFA exams are significantly harder than the CPA exams:
- The CFA exams typically require about 3x more study time than the CPA exams
- The CFA exam involves learning new concepts rather than reviewing material you already use at work
- The CFA exam material covers a lot more than the CPA exams
- Every CFA exam level requires knowledge from the previous exam, whereas the CPA exams do not
Does the pass rate data back this up? Here's where it gets confusing for a lot of candidates – pass rates across different exams are not comparable, as there can be huge differences in retake eligibility, exam formats and other circumstances.
Why CPA exam pass rates cannot be compared to CFA exam pass rates
CPA exam pass rates historically average about 50-60% for each exam. But the 4 CPA exams are not interdependent: you can pass them separately. This is different from the CFA exams, where you need to pass each exam level before progressing to the next. You'll find that usually the same well-prepared candidates are passing most of the 4 exams, and the less-well-prepared candidates are the ones in the failing bucket for most of the 4 exams. Additionally, with the CPA exam, failing candidates are allowed to apply to retake their exam 24 hours after receiving their results. You can now even retake it within the same exam window, meaning you can retake the exam pretty much immediately. CFA exam pass rates, on the other hand, are about 40-50% for each exam level. This is where the confusion of CPA vs CFA pass rates comes from. A 40-50% CFA pass rate is only slightly lower than a 50-60% CPA pass rate, right? Wrong. You have to pass one CFA level to take the next. Every failed CFA Level 2 candidate has previously passed CFA Level 1. Similarly, every failed CFA Level 3 candidate has previously passed CFA Levels 2 AND 1. And even with the recent changes, it still takes a heck of a lot longer to retake a CFA exam (6 months) compared to a CPA exam (a few days). That's why we've derived our 300Hours Pass Ratio metric. Our Pass Ratio allows candidates to reliably compare exam difficulty across different qualifications.
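Before walking through the formula, here is a minimal sketch of the Pass Ratio comparison described in the next section. The percentages used are simply the rough figures quoted in this article (a ~62% blended CPA ratio, a ~24% CFA ratio, and 40-50% per-level CFA pass rates); they are illustrative placeholders, not official statistics.

```python
# Minimal sketch of the Pass Ratio comparison described in the next section.
# All percentages are the rough figures quoted in this article; they are
# illustrative placeholders, not official statistics.

cpa_pass_ratio = 0.62   # share of CPA candidates who eventually become CPAs
cfa_pass_ratio = 0.24   # share of CFA Program registrants who pass Level 3

relative = cpa_pass_ratio / cfa_pass_ratio
print(f"A typical CPA candidate is ~{relative:.1f}x more likely to finish "
      f"than a typical CFA candidate")          # ~2.6x, i.e. the "2-3x" claim

# Why sequential levels compound: if each CFA level were passed on the first
# attempt ~45% of the time, clearing all three in single attempts would be rare.
per_level_rate = 0.45
print(f"Three single attempts at ~45% each: {per_level_rate ** 3:.0%}")  # ~9%
```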
300Hours Pass Ratio shows CFA exams 2-3x more challenging than CPA exams
The 300Hours Pass Ratio formula: You can deduce an 'overall' and more comparable Pass Ratio by dividing the average number of people obtaining the full qualification by the average number of new candidates.
Calculating the CPA Pass Ratio: Using data from the AICPA Trends report across the most recently available 3 years' data, this gives an average / overall / blended pass rate of about 62%. This Pass Ratio means that, based on the last 3 years' data, about 62% of CPA candidates go on to become CPAs. You can also calculate a pass ratio by dividing the average annual number of new CFA charterholders by the average annual number of CFA candidates. When you look at the data, you see that, crucially, only about 24% of people who register for the CFA Program end up passing CFA Level 3.
Comparing 3H pass ratios:
- CPA pass ratio: ~62%
- CFA pass ratio: ~24%
If you compare both pass ratios, the conclusion is that a typical CPA candidate is 2-3 times more likely to go on to become a CPA than a typical CFA candidate is to become a CFA charterholder. This implies that, factoring in everything, and assuming you're a 'typical' candidate for both qualifications, becoming a CFA charterholder is 2-3x more difficult than becoming a CPA.
CFA vs CPA Salary Differences
For salary data, we've referred to our own CFA Salary database, as well as data from PayScale. We restricted our analysis to the US market to maintain good comparability between CFA and CPA salaries. Here's what we found.
CFA Charterholder Salary Overview: by Job Title, Companies, Gender, Experience
From our own database and PayScale's, we estimate that the average CFA charterholder in the US earns $95,023. This sweeping average includes all job types, industries and experience levels, so it isn't particularly useful to a candidate with specific circumstances. But we can use this to compare with an equivalent number in the CPA industry.
CPA Salary Overview: by Job Title, Companies, Gender, Experience
Using the same methodology that we used to calculate the CFA average salary, we estimate that the average US CPA earns $87,979. This implies that the average CPA earns about 7.5% less than the average CFA charterholder. But is that really true? If you compare the experience levels of the CFA and CPA samples, you'll see that in our analysis, the average CFA charterholder is more experienced than the average CPA. So you would expect the CFA charterholder to earn more, since they are likely to be in a more senior and higher-paid position. So who really gets paid more, CFA charterholders or CPAs? Given both factors, I'd say that it's a tie: CFA charterholders and CPAs show similar salary levels on average.
Which is Better, CFA or CPA?
So which is better, CFA or CPA? There isn't really an answer that will fit all situations – each person will have a different best fit for them. The important thing is to assess your desired career path carefully. Do you want to help prepare financial statements, or analyze financial statements for investment? If it's the former, the CPA might be suitable, and if it's the latter, you might want to check out the CFA charter. Hope the above helps when deciding between the 2 finance qualifications. What are you leaning towards currently? Leave a comment below! Meanwhile, here are some related articles that may be of interest:
https://300hours.com/cfa-vs-cpa/
This article is part one in a series evaluating the different admixture tools available from the main DNA testing companies – MyHeritage, 23andMe, Family Tree DNA, and Ancestry. What can we glean from ethnicity estimates from each company? We've all heard the warnings: take DNA admixture predictions with a huge lump of salt. But can those ethnicity estimates provided with an autosomal DNA test still yield some valuable clues? The tools available at testing company 23andMe include some unique features that provide broad context and may point us in a new direction.
Accessing Your Ethnicity Estimate at 23andMe
To access your admixture estimates at 23andMe, look under the Ancestry menu for "Ancestry Composition." That link will take you to a display of your predicted biogeographical origins in a table of percentages, determined by comparing areas of your chromosomal DNA to reference populations. The groups in the table are color coded to a map, as seen in the example below. Note that the three estimates with the highest percentages also specify particular areas within their regions: Switzerland, Ireland, and Poland. This feature is termed "Recent Ancestor Locations" and is further explained on another page we will explore.
Understanding Your Ancestry Timeline from 23andMe
Below this table and map is a display entitled "Your Ancestry Timeline." This plots your ancestors on a timeline to give you an idea of how many generations ago your most recent ancestor from a population may have lived. It refers to when you had a single relative who descended from a single population, or put another way, how far back you would need to go to find the most recent ancestor who was 100% of that heritage. Each of us inherits 50% of our DNA from each parent and about 25% from each grandparent, with the expected share roughly halving for each additional generation back. If you have approximately 12% admixture from a unique or isolated ethnic region, it could indicate that you have a single great-grandparent from that ethnic region. Alternatively, you could have several more distant ancestors from different ancestral lines (perhaps two second great-grandparents or four third great-grandparents) who came from that region. If you rest your cursor on the population's colored bar, a pop-up will tell you the range of ancestors (grandparent, great-grandparent, and so on) this most recent ancestor is likely to be, as well as the span of years during which they were most likely born. You can then look in your tree for ancestors from the corresponding time frame to identify candidates from whom the ancestry may have been inherited. For more on how the timeline is determined, see the company's white paper, "Ancestry Timeline."[i] Continuing further down the page, you have the option of seeing what ancestry you inherited from which parent. This assumes you have the benefit of having tested one or both parents. You would first need to "connect" with your parent(s) through either the Share and Compare tool or the DNA Relatives tool. Once you have connected your results to those of one or both of your parents, 23andMe will separate your ethnicity admixture percentages into paternal and maternal categories. This process may lead to an update in your ethnicity percentage estimates. In the next section, you can view a circle graph of the biogeographical ancestry of other testers to whom you have connected. Keep in mind they may not be genetically related to you, such as a spouse whose test results you manage.
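The generational arithmetic described above is easy to sketch. This is a minimal illustration of the simple expected-value model only; real inheritance is much noisier because recombination is random, so actual percentages can differ noticeably.

```python
# Expected share of DNA from a single ancestor N generations back, under the
# simple halving model described above. Real inheritance is noisier because
# recombination is random, so actual percentages can differ noticeably.

def expected_share(generations_back: int) -> float:
    """1 = parent (50%), 2 = grandparent (25%), 3 = great-grandparent, ..."""
    return 0.5 ** generations_back

for generations, label in [(1, "parent"), (2, "grandparent"),
                           (3, "great-grandparent"), (4, "2nd great-grandparent")]:
    print(f"{label}: ~{expected_share(generations):.1%}")

# So ~12% admixture is consistent with one great-grandparent (12.5%) from that
# region, or with several more distant ancestors whose shares add up to ~12%.
```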
Using the Chromosome Painting Tool at 23andMe Near the bottom of the page is one of the most useful features of 23andMe’s Ancestry Composition report, “Chromosome Painting.” Each of your 22 pairs of autosomal chromosomes is represented by two bars. In addition, if you are a female, the 23rd pair represents your two X chromosomes, whereas if you are a male, the 23rd pair has only one active X chromosome bar, since your other sex chromosome is a Y chromosome, not an X. The table to the left of the chromosomes is a reiteration of the table associated with the map above. The colors on the chromosome bars coordinate with the populations summarized in the table. By default, the confidence level of this representation is speculative at 50% (the minimum required to report an ethnicity). You can adjust the confidence level in increments of 10% up to as high as 90% (conservative). The higher the confidence level, the less detailed the breakdown, but the surer you can be about the predictions. Here is a portion of the chromosome painting which corresponds to the biogeographical estimates for the test we are reviewing. It is important to understand that, although you receive one chromosome from your mother and one from your father, scientists cannot tell which chromosome of the pair is from the maternal source and which is paternal. Likewise, they also cannot say that all of the chromosomes represented as the top of each pair are from one parent and all chromosomes at the bottom of each pair are from the other (whichever parent that may be). It cannot be determined with current science. Finally, it is important to note that even within a given chromosome pair illustration, we cannot assume that the entire ethnic admixture shown on one member of the pair came from a single parent. Representation of a maternal ethnicity might jump back and forth between the top and bottom chromosome of each pair, though connecting your results to those of a parent as described above might help to sort out which ethnic contributions come from which parent. A great feature of this tool is that you can rest your cursor on a particular population in the table and be shown only the corresponding areas in the chromosomes. This makes it easier to visualize, for example, what proportion of your DNA is associated with a particular heritage. More importantly, you can see ethnicities which are found on both members of a given pair and are, therefore, likely present in both of your parents’ heritage. Similarly, you can also see which ethnicities might have only one parent as their source. In the examples below, French and German appear to be strongly represented through both parents, while Eastern European may have originated from just one. How is My Ancestral Breakdown Calculated? At the very bottom of the page is a link to “View Scientific Details.” This page explains in general how your ancestral breakdown is calculated across 31 reference populations and also how your “Recent Ancestral Locations” are determined from over 120 countries and territories. The results across all 151 evaluated populations are summarized in a table with an indication of match strength. Here is where we see the sub-regions mentioned earlier in conjunction with the map’s table of percentages with Switzerland, Ireland, and Poland being the most significant for this tester. For more on how these Recent Ancestor Locations are determined, see the FAQ “Why did my Recent Ancestor Locations change?” and choose the link to Read More. 
From the Scientific details page, you can also download a spreadsheet of the start and stop locations and segment data for each reported ethnic region. These can be used in comparison to other chromosome mapping strategies to determine which ancestors might be the source of specific ethnicity results.
Using Ethnicity Estimates to Aid in Genealogical Research
Sometimes, even though you can't tell from the results alone which ethnicities are maternal and which are paternal, you can still make some deductions about which populations do not originate on the same side. It's similar to working a complex logic puzzle. Here is a very different example of a tester's chromosome painting. What can we conclude from ethnicity estimates that might help in our genealogical research? It should be acknowledged that some of the estimates used here are trace levels and may be false positives, but the principles illustrated can still be applied to other results. First, comparing the Korean DNA with the European segment on chromosome 6 tells us these ethnicities are from different parents. Similarly, comparing the European segment on chromosome 1 above with the Manchurian and Mongolian segment there, and comparing the Manchurian and Mongolian segments on chromosome 14 with the Western Asian segment there, reveals they are also from different parents. Examining the X chromosome pair below might lead us to conclude that its European segment comes from one parent while the South Asian comes from the other. But keep in mind, as we mentioned earlier, that representation of a maternal ethnicity might jump back and forth between the top and bottom chromosome of a pair. The only time we can be confident of distinct origins is if the segments are overlapping opposites. Two segments on opposite members cannot be from the same parent if they overlap and, at least in part, occupy the same position on the two members of a pair. Taking all the above analysis into consideration, if we make the speculative assumption that the tester's European DNA comes in its entirety from one parent, and given that we also concluded that the Manchurian and Mongolian DNA and the Western Asian DNA are from different sides, these deductions result in the following profile for this tester's heritage:
|Side A||Side B|
|Chinese||Chinese|
|Korean|
|Manchurian & Mongolian|
|European|
|Western Asian|
|South Asian OR||South Asian|
For more about the science behind 23andMe's Ancestry Composition tools, see their publication, "Ancestry Composition: A Novel, Efficient Pipeline for Ancestry Deconvolution."[ii] Although there are certainly significant limitations to the reliability of biogeographical predictions, the deductions we can make from Ancestry Composition can give us a working hypothesis to prove or disprove and can give direction to our research. Legacy Tree Genealogists has been at the forefront of genetic genealogy research services for over a decade. Our team of experts have solved hundreds of DNA-related cases, and can help you solve your DNA puzzles! Contact us today for a free quote.
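The "overlapping opposites" rule described above boils down to a simple interval-overlap test. The sketch below is illustrative only; the segment coordinates are invented for the example, not taken from any real 23andMe download.

```python
# Illustrative only: the segment coordinates below are invented, not taken
# from a real 23andMe download. This implements the rule stated above: two
# ethnicity segments drawn on opposite bars of a chromosome pair cannot come
# from the same parent if they overlap the same positions.

def overlaps(segment_a, segment_b) -> bool:
    """True if two (start, stop) ranges share any positions."""
    start_a, stop_a = segment_a
    start_b, stop_b = segment_b
    return start_a < stop_b and start_b < stop_a

# Hypothetical segments on the two bars of one chromosome pair.
top_bar_european = (50_000_000, 90_000_000)
bottom_bar_korean = (60_000_000, 120_000_000)

if overlaps(top_bar_european, bottom_bar_korean):
    print("Overlapping opposites: these populations must come from different parents.")
else:
    print("No overlap: this pair of segments supports no conclusion.")
```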
https://www.legacytree.com/blog/ancestry-composition-tools-23andme
Learn to Write an Article and Master the Art of Essay Writing
Writing a composition can be very tricky for many school pupils, since they have difficulty with the format. Many pupils understand the mistakes that are commonly made and how to avoid making them. The process of learning how to compose an essay is not difficult, but it can take some time. Your purpose is to express yourself. In order to do that, you will need to understand the different rules of grammar, punctuation, and sentence structure. If you can learn these areas of grammar, you will understand how to compose a composition, because they are exactly what make a fantastic piece of writing. You should be able to use this information to write about different facets of a topic. A person must know what a terrific sentence looks like before they can write a superb essay. Since there aren't any specific rules on the proper way to write an article, it's up to the student to find the correct approach for their situation. In order to write an essay, the writer should identify what kind of essay they're attempting to write. It could be a report, a history paper, or a research paper. Each form of essay has its own instructions about the best way to compose it. Keep in mind, each style of essay requires different writing techniques. An effective essay takes a specific set of methods and also possesses its own special style. Pupils who are writing for a class will need to study the format of the course to be able to write an essay. Pupils need to know the fundamentals of how to write an essay before they begin working on the assignment. When a student wants to compose an essay, they have to develop their own voice and have their very own style. To begin with, a student should understand the rules on how best to write an essay. The rules can be summarized in four points: agreement, coherence, investigation, and balance. After understanding these points, a pupil can develop their own style that can best convey their thoughts and ideas. After a student has developed their own style, they need to begin working on their way of writing a successful essay. A student should begin by writing a good introduction to the essay, and then go over the rest of the material. If a pupil has an opportunity to review their writing in the margins of a notebook, they can quickly see whether there is anything they missed. Furthermore, this can be a good tool for keeping a pupil from committing the same mistakes again. After composing a good introduction, a student can start writing the essay itself. Once the introduction is finished, the student can move on to the writing part of the essay. All of the guidance in the introduction, along with the other parts of the essay, must be followed closely, since this will help shape the whole essay. This can be one of the most difficult parts of the process of composing an essay. Students who find it challenging to compose an essay should read examples and take notes whenever they examine subjects for their essays. Making changes to a topic can prevent the student from repeating the same mistakes over. If a pupil has a chance to review the entire essay, they can make changes before they begin writing the final draft.
https://www.fixitfastautoglass.com/learn-to-write-an-article-and-master-the-art-of-essay-writing/
UN Women, in collaboration with UNAIDS, created this comprehensive online resource to provide up-to-date information on the gender equality dimensions of the HIV epidemic. The site aims to promote understanding, knowledge sharing, and action on the HIV epidemic as a gender equality and human rights issue. This web portal is where you can find cutting-edge research, studies and surveys; training materials; multi-media advocacy tools; speeches and presentations; press releases and current news; best practices and personal stories; and campaign actions and opinion pieces by leading commentators. This site offers these and other up-to-date resources on the gender equality dimensions of the HIV epidemic. By providing access to a variety of knowledge products in one place, the portal allows information to be retrieved quickly and easily, thereby reducing the amount of time spent locating key resources and materials on the Internet. Resources are organized by topic, type, and region and the entire site is fully searchable. Short summaries are provided for each of the resources to give visitors a quick overview of each of the materials. The web portal also offers additional features such as: an RSS feed to follow instant posts to our site, including breaking news, the latest resources and personal stories; a feed on the Homepage connected to the official UN Women Twitter Page; and a multitude of page-sharing options through Facebook, Twitter, E-mail, and more. You can get in touch with us with questions/comments or provide feedback and suggestions for the improvement of the site through the Contact page. While HIV is a health issue, the epidemic is a gender issue. Statistics show that both the spread of HIV and the impact of AIDS are not random. HIV disproportionately affects women and adolescent girls, who are socially, culturally, biologically, and economically more vulnerable. The figures are alarming: as of 2015, there were 17.8 million women (ages 15 and older) living with HIV. There were 2.3 million HIV-positive young women (ages 15-24), comprising 60% of all young people living with HIV (UNAIDS 2015). Among adults, there are about 5,300 new HIV infections a day, of which 47% are among women. Young people account for about 35% of new daily infections, with 15% among young men and 20% among young women (UNAIDS 2016). Our work helps to amplify the voices of women living with HIV, using strategies that promote their leadership and meaningful participation in all decisions and actions to respond to the epidemic. We seek avenues to integrate gender equality and women's rights into strategies, policies, budgets, institutions and accountability frameworks. Some of our initiatives address the multiple intersections between HIV and violence against women. Others advance access to justice for women in the context of HIV, with a focus on critical property and inheritance rights. Our single most important strategy is empowering women and guaranteeing their rights so that they can protect themselves from infection, overcome stigma, and gain greater access to treatment, care and support (UN Women 2016).
https://genderandaids.unwomen.org/en/about
UN teams Around the World Power On for Women's Rights and Gender Equality
This week marks the opening of the 67th Commission on the Status of Women, held in New York, as well as International Women's Day 2023, being commemorated around the world under the theme DigitALL: Innovation and technology for gender equality. All over the world, UN Country Teams, governments, civil society partners and women's organizations are marking the day by celebrating gender equality, women's rights and empowerment and raising their voices against discrimination and violence. Here is a glimpse into how UN teams are taking a stand this week:
Jordan
Research conducted by UNESCO in Jordan painted an encouraging picture overall of women's participation in STEM fields. However, while it was revealed that women represented more than 60 per cent of students in natural sciences, medicine, dentistry and pharmacy, this number dropped to approximately 28-45 per cent when it came to engineering and computer science. This year, the UN Resident Coordinator's Office, along with other UN agencies, puts the spotlight on Jordanian women innovators who are using technology to tackle pertinent challenges of the day. From water security to artificial intelligence for recycling and maintenance of appliances, these everyday trailblazers are leading the way for other women. Read about their stories here.
India
In Mumbai, India, business executives, policymakers and UN Women leaders made an urgent call to accelerate investment in women leaders and entrepreneurs during the "Ring the Bell for Gender Equality" ceremony, held at the Bombay Stock Exchange (BSE) on 6th March in commemoration of International Women's Day. The initiative is jointly organized annually through global collaboration between UN Women and the International Finance Corporation, the Sustainable Stock Exchanges Initiative, the UN Global Compact and the World Federation of Exchanges. The UN Resident Coordinator also launched a new programme called "FinEMPOWER" aimed at empowering women in financial security, specifically women entrepreneurs. Read about the programme and associated initiatives here.
Kazakhstan
The UN Resident Coordinator's Office in Kazakhstan, the United Nations Development Programme and the United Nations Population Fund teamed up with the national postal operator 'Kazpost' to launch a campaign to combat gender stereotypes. From posters featuring champions to online quizzes to lucky draw prizes, the public awareness campaign is aimed at amplifying the strength, resilience and determination of women in under-represented sectors, including information and communications, and more. Read more about the campaign here.
Liberia
In Liberia, women from all walks of life attended an official celebration on March 3rd, 2023, to mark the occasion. Along with a parade and award ceremony celebrating women leaders, the Vice President of Liberia, Chief Dr. Jewel Howard Taylor, moderated a panel discussion on 'Innovating with technology to promote gender equality'. The UN Women Country Representative, Comfort Lamptey, delivered the UN Secretary-General's message for International Women's Day and highlighted the need to close the digital divide and increase the representation of women and girls in science and technology. The President of Liberia, H.E. George Weah, was also honoured and received a gift inscribed, 'Feminist in Chief.' Learn more about the event here.
Afghanistan
In Afghanistan, the UN Country Team is carrying out the #InHerVoice campaign to ensure Afghan women's voices and experiences aren't erased from policy and public life. The campaign features testimonials by women from diverse backgrounds who tell how their daily lives and existence have been impacted by the latest decrees imposed by the Taliban de facto authorities. Read about the campaign here.
Panama
More than 20 Ambassadors, members of the diplomatic corps, civil society, business authorities and media joined the UN family in the province of Colón to commemorate International Women's Day and shed light on the Sustainable Development Goals at the local level, with a focus on gender equality. A panel discussion among women leaders, including the Minister of the Canal of Panama, highlighted the role of women, the burden of care work and the inequalities in access to STEM education, as well as the gaps that persist for women in society. Historically, Colón has contributed significantly to Panama's economic development. However, it is also a region where inequality is most present and visible, as seen in issues of criminality, youth gangs and high poverty rates. Read more about the event here.
Pacific
Digital technologies provide new means to advocate for, defend, and exercise human rights and affect all women's rights - civil and political, as well as cultural, economic and social rights. Recently, the UN launched a first-of-its-kind "Digital Economy Report: Pacific Edition 2022". The findings revealed that "Digital technologies can reduce gender gaps in labour force participation by making work arrangements more flexible, connecting women to work, and generating new opportunities in online work, e-commerce and the sharing economy." Highlighting these findings and urgently calling for more pro-active policies and laws, four UN Resident Coordinators - Jaap van Hierden, Richard Howard, Sanaka Samarasinha and Simona Marinescu, in Micronesia, Papua New Guinea, Fiji and Samoa - released a joint statement to mark the occasion of International Women's Day. Read the full statement here.
https://unsdg.un.org/latest/stories/un-teams-around-world-power-women%E2%80%99s-rights-and-gender-equality
National Leadership Consortium Partners
The National Leadership Consortium has strong relationships with national disability organizations that have provided generous support over the years. Those organizations are:
The American Academy of Developmental Medicine and Dentistry (AADMD) provides a forum for healthcare professionals who provide clinical care to people with neurodevelopmental disorders and intellectual disabilities (ND/ID). The mission of the AADMD is to improve the overall health of individuals with ND/ID through patient care, teaching, research and advocacy.
The American Association on Intellectual and Developmental Disabilities (AAIDD) is the oldest and largest interdisciplinary organization concerned with IDD. As a scholarly and professional society, AAIDD seeks to enhance the capacity of its members and others who work for and with people with IDD and to promote the development of a society that fully includes individuals with IDD. AAIDD publishes three peer-reviewed journals, books, and assessment tools; provides in-person and online education and professional development; and collaborates with other national organizations to achieve shared goals in public policy.
The Alliance is an association of people committed to self-directed supports that offer opportunities for lives of meaning and impact. The Alliance advocates for the freedom and civil rights of people with disabilities and takes action against anything that limits opportunity and choice.
The American Network of Community Options and Resources Foundation: ANCOR is the national association for IDD community service providers. The ANCOR Foundation exists to build and honor the exceptional leaders that cultivate truly inclusive communities for people with intellectual and developmental disabilities.
Association of People Supporting Employment First (APSE): Through advocacy and education, APSE advances employment and self-sufficiency for all people with disabilities. APSE is the only national membership organization focused exclusively on integrated employment. Rockville, MD
The Arc of the United States (The Arc), the world's largest community-based organization of and for people with intellectual and developmental disabilities; Washington, DC
The Autistic Self Advocacy Network (ASAN) was created to serve as a national grassroots disability rights organization for the Autistic community run by and for Autistic Americans, advocating for systems change and ensuring that the voices of Autistic people are heard in policy debates and the halls of power; Washington, DC
The Council on Quality and Leadership (CQL), the international leader in the definition, measurement, and improvement of quality of life for people with disabilities; Towson, MD. The National Leadership Consortium is an affiliate organization of CQL | The Council on Quality and Leadership.
Human Services Research Institute (HSRI), a nationally recognized research organization that provides national performance standards, technical assistance, and dissemination of best practices; Cambridge, MA
The Learning Community for Person Centered Practices (TLC-PCP), an international community of people devoted to improving knowledge and practice in the provision of progressive, quality supports; Annapolis, MD
National Association of Councils on Developmental Disabilities (NACDD) is a member-driven organization representing the Developmental Disabilities Councils in 55 States and Territories; Washington, DC
NADD is an international not-for-profit membership association established for professionals, care providers and families to promote understanding of and services for individuals who have developmental disabilities and mental health needs. The mission of NADD is to provide leadership in the expansion of knowledge, training, policy and advocacy for mental health practices that promote a quality life for individuals with dual diagnosis (IDD/MI) in their communities.
National Alliance for Direct Support Professionals (NADSP): The National Alliance for Direct Support Professionals focuses on strengthening the direct support workforce, enhancing the status of Direct Support Professionals, providing access to high quality educational experiences, and strengthening the partnerships among direct support professionals, self-advocates and other consumer groups and families.
The National Association of QIDPs (NAQ) is the "go-to" organization for QIDPs - providing an avenue for connecting with other professionals, sharing evidence-based best practices, and serving as a resource for learning and continued education.
National Association of State Directors of Developmental Disabilities Services (NASDDDS), the national association of the directors of state departments of developmental disabilities; Alexandria, VA
Research and Training Center on Community Living, Institute on Community Living at the University of Minnesota (RTC) provides research, evaluation, training, technical assistance and dissemination to support the aspirations of persons with developmental disabilities to live full, productive and integrated lives in their communities.
Self Advocates Becoming Empowered (SABE), the United States' national self-advocacy organization made up of a national board of regional representatives and members from every state in the US; Florence, SC
TASH works to advance inclusive communities through advocacy, research, professional development, policy, and information and resources for parents, families and self-advocates; Washington, DC
Together, these groups provide strong support for the work and impact of the National Leadership Consortium.
https://natleadership.org/partners.html
This is a very special keynote to me and I am grateful to the Trustees of ALT for inviting me to speak at ALT's 25th Annual Conference. This post shares the slides and some of my notes for the talk, and you can also watch a recording from the conference here. Thanks to James Clay for this video sketch note of the talk. I wanted to introduce myself via the skin of my laptop, which has been tattooed, to borrow a phrase from Bryan Mathers, with my experiences as ALT's CEO over the past six years. When I started to prepare for this keynote I thought a lot about how I could tell a story from my personal perspective, rather than in the voice of the organisation I lead. Because thanks to working at the heart of what ALT does, with Members from across all education sectors in all parts of the UK and beyond, I have the privilege of a very unique perspective, one that encompasses everything from global Learning Technology policy to a single teacher using a new gadget for the first time. I can't cover all of that in less than an hour of course, but I do want to give you as much insight as I can into my perspective, what it's like to be standing in my shoes, and so the photos in this talk are from journeys I've taken to work with Members from Oxford to Edinburgh, from Belfast and Galway to Cardiff and London. They paint a picture of the landscape that I work in, what the world looks like when you are standing in my shoes. I hope that this talk will help us to critically examine our perspective – and in particular why our gaze is always drawn to what we are promised is just around the corner, just over the horizon. In her analysis of the most recent Horizon Report, Audrey Watters updated her project to track the predictions that the report has made over the years, examining whether what advocates promise actually comes to pass. Audrey writes: 'Your takeaway, now and then and always: do not worry about what this report says is "on the horizon." I bet you in five, ten, twenty years time, folks will still be predicting that it's all almost here.' What does it mean for us if we are locked into a perpetual cycle of not arriving, of advocacy for tech that does not deliver to its full potential? We can go back through the history of Learning Technology and come across solutions promising to 'solve problems' from cutting costs or reducing teacher workloads to improving learning outcomes or increasing student satisfaction. But have these solutions really delivered for all learners? Does the way we think about and make policy for Learning Technology work? Or does this approach, when viewed on a global scale, place the UK firmly in a policy context that the Finnish education expert Pasi Sahlberg describes as market-led privatisation, test-based accountability, de-professionalisation, standardisation and competition, resulting in, in his view, unsuccessful education policies? I think so. Advocating for what's just beyond the horizon causes 3 issues: first, it gives us the sense that technological innovation is the driver behind change, the only solution to solving the problems that we face. The dominant narrative here is that we are at the mercy of inevitable innovation, the endless march of the machines, and that we need to keep running in order to keep pace with progress. This in turn highlights the second problem: a perspective informed by advocacy focused only on what's ahead increases our perception that we need to compete harder in order to achieve constantly moving goal posts.
Compete with other countries as we move up or down league tables, with other institutions, with each other. Instead of making the most of sharing what we have, we don’t like to adopt something that’s ‘not made here’; we re-invent, re-design and re-solve problems, and create content over and over again in a race to be the first, the best, the most successful. The third issue is that this perspective of continual advocacy tends to ignore the history, the research, the evidence that we do have (and we have decades’ worth of it by now!). Does being focused on and advocating for what’s always just beyond the horizon also absolve us from ethical responsibility, because we are always talking about the future rather than what’s happening now?

I argue that we have the history, the evidence, the research to shape a different perspective, to walk a different path in the future of Learning Technology, and there are an increasing number of voices that articulate how things are changing, who are shifting the discourse to more critical ground. Martin Weller’s inspiring series on ‘25 years of Ed Tech’ is a great example of this (and definitely worth reading if you haven’t come across it yet). He emphasises the need for taking a critical approach to our thinking in Learning Technology, to examine the (commercial) interests that influence its development, ‘for example, while learning analytics have gained a good deal of positive coverage regarding their ability to aid learners and educators, others have questioned their role in learner agency and monitoring and their ethics.’

Another important influence on my thinking and our wider discourse is the work examining the role of gender and equality in Learning Technology, led by many inspiring role models including Maha Bali, Frances Bell, Anne-Marie Scott, Bon Stewart, Josie Fraser, Donna Lanclos, Melissa Highton, Clare Thomson, Helen Beetham, Lorna Campbell, Sheila MacNeill, Laura Czerniewicz, my fellow keynote speakers this year Tressie McMillan Cottom and Amber Thomas, and many others who I am sorry not to mention by name. In her reflective post ahead of the conference, Catherine Cronin reminds us that often ‘long-standing work in critical and feminist pedagogy, for example, was not often acknowledged in later work about MOOC/online/open teaching and pedagogy. Acknowledgement and analysis of earlier work is vitally important in education’.

With ever growing challenges facing us, and decades of research and practice to inform our thinking, it seems clear that (Ed) Tech won’t ‘save us’. It won’t save us because it shouldn’t be the driving force behind what we do. Instead, we have to move beyond advocacy for tech as the answer to all our problems, towards empowered, critical practice that enables us to negotiate and articulate our relationship with technology and how we use it for learning and teaching. This isn’t to say that technology doesn’t have significant potential, and I don’t mean to dismiss the role that industry plays or how much technological innovation contributes to the way we learn, teach and work. Learning Technology can bring big benefits for learners and educators – but it needs to be an empowered relationship, rather than one in which we feel we are about to be buried under an avalanche.

So my questions are: How do we move beyond advocacy? How do we realise the potential of our professional practice for the benefit of learners and for the greater good? How do we move to using Learning Technology to meet some of the biggest challenges we are facing globally right now?
These are big questions. I’d like to share some examples from my own recent work as a starting point to answering them, putting Learning Technologists firmly at the heart of that effort. I’m going to start by looking at how professional practice has changed, using the example of ALT’s accreditation scheme, CMALT. Building on the work Shirley Evans, Trustee of ALT, and my colleague Tom Palmer have done in the past two years to collate information from hundreds of portfolios submitted for accreditation since 2004, I’ve started examining if and how the evolution of Learning Technology as a profession can be charted by the specialisms individuals have chosen to demonstrate their practice with.

Since 2004 over 100 different areas of specialist practice have been defined, and grouping these into categories quickly became difficult, as the categories had to be so general that they were meaningless rather than insightful. That in itself is interesting, because it emphasises how diverse the profession is and continues to be. It brings us back to ALT’s definition of Learning Technology ‘as the broad range of communication, information and related technologies that can be used to support learning, teaching and assessment. Our community is made up of people who are actively involved in understanding, managing, researching, supporting or enabling learning with the use of Learning Technology.’ It also reminds me of the original maxim that still holds true: you don’t have to be called a ‘Learning Technologist’ to be one. To me, it aptly reflects the reality of how differently we as individuals and within organisations approach the challenge of making effective use of Learning Technology, and I feel that great strength lies in embracing and respecting this as a hallmark of our profession instead of trying to exclude or ignore words or people who don’t fit within a narrower definition. Even if we don’t speak the same language or use the same terms to describe our work, the growing body of CMALT portfolios is a powerful example of what we do share.

Instead, then, of focusing on the bigger picture, my work has focused on drilling down into the detail of how specialist areas have developed, and this first example shows specialisms related to engaging learners. It is interesting to see how even the titles chosen reflect a changing relationship to working with learners, from a more distant research or evaluation approach, to focusing on support and feedback, and then to collaboration and engagement. Another interesting question to ask is when particular kinds of work became important or developed enough to constitute specialist areas of practice, and this slide shows examples of ‘firsts’, i.e. when particular kinds of practice were first submitted for accreditation as specialist areas over the past ten years. It only took a year after 2012’s ‘Year of the MOOC’, for instance, for MOOCs to appear on this list. Meanwhile, more recent examples of new specialisms include digital wellbeing, student collaboration, analytics, gamification and leadership. More and more CMALT Holders have started to share their portfolios via ALT’s CMALT Portfolio Register, opening up their practice and at the same time contributing to our ability to gain a better understanding of how professional practice is developing and changing.
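As an aside for anyone who wants to try a similar analysis of their own data: below is a minimal sketch of how such ‘firsts’ could be tabulated. It assumes a hypothetical CSV file (here called cmalt_specialisms.csv, with ‘year’ and ‘specialism’ columns) rather than the actual portfolio data, and simply records the earliest year each specialist area appears.

```python
# Minimal sketch: tabulate when each specialist area first appears in
# accredited portfolios. Assumes a hypothetical CSV with columns
# "year" and "specialism"; this is not the actual ALT/CMALT dataset.
import csv
from collections import defaultdict

def first_appearances(path):
    earliest = {}  # specialism -> first year it was submitted
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = row["specialism"].strip().lower()
            year = int(row["year"])
            if name not in earliest or year < earliest[name]:
                earliest[name] = year
    # Group the "firsts" by year to see when new kinds of practice emerged
    by_year = defaultdict(list)
    for name, year in earliest.items():
        by_year[year].append(name)
    return dict(sorted(by_year.items()))

if __name__ == "__main__":
    for year, names in first_appearances("cmalt_specialisms.csv").items():
        print(year, ", ".join(sorted(names)))
```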
One question we should ask is to what extent the kind of best practice usually included in portfolios submitted for accreditation, in particular if they are subsequently shared more widely, reflects the reality of professional practice. What isn’t included in this picture? What is left out? Most of the time, anything that’s gone wrong: all the times when a pilot didn’t lead to full-scale implementation, when a new gadget ended up gathering dust in the back of a cupboard, when colleagues didn’t co-operate, students gave negative feedback or leadership failed. Learning Technology is a risky business, and sharing what didn’t work is still not widespread.

But there is something besides failures that isn’t reflected in this picture, and that is all the work that is hard to put into words. Hours spent building someone’s confidence or overcoming their resistance to change. Days devoted to influencing decision makers to make the right choices when it comes to strategy or procurement. Teams who translate between faculties or directorates in order to arrive at a common consensus for the new VLE. I can think of many examples of specialisms I’d like to see appear on the list – and I am sure you can, too.

One of the trends we can follow over the past ten years or so, however, is the gradual increase in the number of people who choose a management or leadership related specialism as more and more Learning Technology professionals move into more senior roles. Their expertise in Learning Technology becomes more important as technology becomes more complex and our demands of what it can achieve for students or staff on a large scale become more ambitious. In consultation with Members of ALT this has informed the development of new accreditation pathways over the past 18 months, and the second pilot of both Associate CMALT (a new pathway for early career professionals or those for whom Learning Technology is a smaller part of their role) and Senior CMALT, for senior professionals whose work involves management, leadership, research or similar advanced areas of practice, is about to conclude. These new pathways mark the first expansion of the CMALT framework since 2004, and I want to share some early findings from the pilot groups to date.

The requirements for Senior CMALT include two specialist areas of practice (instead of one for the existing CMALT pathway) to be described, evidenced and reflected on. The subjects chosen to date reflect a broad range of practice, from scholarship and Open Access publishing, to assessment, online courses and mobile learning, to staff development, training and leadership. Similar to the earlier chart, which showed a diverse range of different specialisms over time, these choices reflect how more senior roles in Learning Technology are developing their focus. A new requirement added to Senior CMALT is an Advanced Area of practice, which needs to be specifically related to the four CMALT Core Principles. The visual here shows the result of consultation with and discussion amongst Members who came together to re-articulate these principles afresh as part of the work to develop new pathways to CMALT. This is particularly relevant to the earlier question of how far the practice evidenced for CMALT reflects the reality of our everyday professional lives, as to me these shared principles are a strong example of how we articulate what may be less straightforward to share about the work we do. To me these principles reflect professional practice beyond advocacy.
Now we can see the earliest topics chosen by participants of the pilot groups for Senior CMALT, and the areas of their practice they have chosen as Advanced Areas relating to the core principles. These range from research-focused topics, such as research in postgraduate distance learning or blended professional development, to leadership of CPD programmes and leadership in the development of research and practice communities. At this stage the insight we can gain from this is still limited by the necessarily small numbers of professionals involved. But it does give us a glimpse of how critical approaches to professional practice in Learning Technology may develop, and this will become more interesting as this pathway is fully established and the number of examples we have increases.

I gained CMALT two years ago and I found the process very rewarding. It was valuable to step away from the perspective of having managerial oversight and put the scheme to the test as a professional, becoming a candidate myself and seeing the other side of the process (you can access my portfolio here; note that my portfolio was assessed by Trustees of ALT to manage the conflict of interest). So when the opportunity came up to put one of the new pathways through its paces, I opted for Senior CMALT and set to work expanding my portfolio. It prompted me to reflect on how I have moved my own practice towards a more critical perspective. As my Advanced Area of practice I chose promoting equality in Learning Technology, and I soon realised that this was harder to translate into a portfolio of evidence than I had imagined. It’s my own “CMALT Fantasy Specialism”, and I am fortunate to have had some very helpful critical friends who provided input to ensure that it didn’t turn into a nightmare.

To unpack what this part of my work is about, it’s important to explain the context in which my understanding of equality is grounded, and to do this I want to share an extract from what I wrote in my portfolio: Whilst my position is indeed one of relative privilege, it is nonetheless an experience of inequality. As a space in which we work, Learning Technology sits at the intersection of the tech industry, education, politics and the third sector. When I started working in Learning Technology I had no concept of how much inequality there is and how much it would affect every single day of my professional practice and that of every colleague, every learner. Particularly as a Learning Technologist in a leadership position it can be sobering to see the kind of structural inequality Laura Czerniewicz (who stood on this stage three years ago and inspired us with her talk on Inequality in Higher Education) and others speak of on a national or global scale. But whilst the bigger picture is important to my work, examples of inequality I have experienced can be found far closer to home, in the day-to-day working life many colleagues can relate to: from being the token woman on a ‘manel’, to seeing reports about empowerment illustrated exclusively by white women in high heels, to being the only woman at a table of policy makers representing “the sector”, to having to be introduced by male colleagues as ‘the boss’ in order not to be mistaken for their PA, from not being allowed to ask questions at events to not being invited, not being funded, not being considered for an opportunity. The list of examples goes on and on, and for me it’s difficult to describe dispassionately.
The need to promote equality in Learning Technology goes far beyond the personal (and, as I have acknowledged, in my case a personal position of privilege). Inequality is structural and political, and frequently apparent in the development of Learning Technology, such as algorithmic bias shaping the way new technologies operate. I admire writers and researchers who analyse, chart or expose inequality, and I actively use my position to take action to promote equality. I have specifically chosen to attempt to develop this area of my practice in my portfolio because that in itself can contribute, and I have selected three examples of how I promote equality as a Learning Technologist. …

One of the examples of practice included in my portfolio is volunteering to support the FemEdTech initiative, and at this point I’d like to give a big wave to everyone involved in #femedtech, who help us foster more criticality in Learning Technology by creating a more diverse, more inclusive perspective and community. And this isn’t an effort that is relevant only to women or people of colour or any other group that fights for equality and against discrimination. Although it may seem like an obvious point to make, equality is for everyone. It concerns all of us. Grass-roots projects like UnCommon Women demonstrate that one of the key ways in which we can achieve greater criticality is greater collaboration, knowledge exchange and openness. Our practice is political, it’s personal, and active participation in any of these initiatives makes a difference. It helps us articulate a narrative that isn’t dominated by advocacy alone and expands our personal learning networks beyond those we already know and feel comfortable with, helping to burst the filter bubbles that surround us.

For my own work, focusing on open collaborations is intensely practical and an efficient way of making things happen. I leverage this approach in my work for ALT, for example in providing input to policy makers, such as the call to action for policy makers collaboratively developed and published at the start of this year. Or working with start-ups and academics to bring together a guide on how to work together. Or developing ALT’s own approach to operating as a virtual organisation, a project in open leadership that I work on with Martin Hawksey. Collaboration and inclusivity help foster criticality, inform my thinking through the different perspectives I encounter, and inform strategy. I follow in the footsteps of others (including the outstanding teams and individuals who were amongst the winners of the Learning Technologist of the Year Award announced yesterday) who have leveraged their open practice to make change and spark more critical professional practice.

Criticality helps ensure that we do not leave answering the big questions and facing the big issues up to others without making our voice heard. Criticality and collaboration are at the heart of professional practice that enables us to work in partnership with industry, to inform how products and services are developed, and to influence policy that effectively governs our relationship with technology and the tech industry. We do have the power to shape our future and we do have a vision of what that future should look like.

To close, I’d like to focus on that future. Who shapes the future of Learning Technology? That is what we asked participants in the LTHE chat in June, when we discussed developing critical and open approaches in Learning Technology.
We asked participants, as the final question of the chat, to share a hope for the future of Learning Technology. Their vision is for Learning Technology to be ‘inclusive. Not a bolt on, not an alternative, lesser experience’, that ‘all education is open’, that we will combine ‘innovation and integration’, and that there will be ‘greater sharing of results, greater scrutiny of results and greater understanding of the process followed to produce the results’. They highlighted the ‘need to raise the lowest level of engagement with technology/pedagogy as well as supporting those on the cutting edge’ and they hoped that ‘a symbiotic and ultimately synergetic relationship with pedagogy is established which facilitates a revolution in society’s objectives for our education system’. These are their voices, their hopes, their vision (and you can explore the conversation with TAGSExplorer).

So, when we ask who shapes the future of Learning Technology – my hope is that we don’t leave it up to others. My hope is that we continue to participate in the conversation, that we make our voices heard and listen to others. When I first stood in this theatre in 2009 I saw great potential in what could be achieved by this community and I wanted to contribute to it. Nearly 10 years later I have seen parts of that vision come true, but there are much bigger things still to come. And that is up to all of us. So I invite you to share your hopes, your vision, and make your own voice heard.

Last, but not least, I’d like to thank the Trustees of ALT, who have given me the opportunity to speak here today, and to thank you for listening (and reading).
https://marendeepwell.com/?p=1669
PTA advocacy: it's at the foundation of everything we do. We are a trusted local community voice. We are nonpartisan, nonsectarian and noncommercial. Our work not only touches a wide range of issues related to children, public education and families, but we also employ numerous avenues to advocate. This includes elevating the advocacy that families do every day as well as the informal and formal advocacy methods we use organizationally. Together, the voice of families and the FCCPTA voice is loud and strong in Fairfax County.

OPPORTUNITIES TO ELEVATE LOCAL FAMILY VOICES:
> Sharing key resources to inform and connect.
> Providing know-how and expertise on steps families can take to advocate at all levels.
> Offering trainings, webinars, and events to inform and educate.
> Promoting opportunities to engage in town halls, meetings, public hearings and School Board meetings via public testimony.
> Sponsoring advocacy trainings, webinars and programs.
> Opportunities to participate in surveys.
> Invitation to share comments, questions and concerns at [email protected].

OPPORTUNITIES FOR FCCPTA ADVOCACY:
> Public Testimony by FCCPTA leadership
> Policy Letters
> Public Statements & Letters
> Resolutions
> Legislative Priorities
> Coalitions and partnerships
> Appointments to task forces & special committees
> Invitations to serve on FCPS interview panels
> VAPTA advocacy programs and initiatives
> National PTA advocacy programs and initiatives

FCCPTA advocates every single day for issues affecting public education and the well-being and success of children and families. Learn about our priorities for the 2021-22 school year here.

ADVOCACY ACCOMPLISHMENTS
Our reputation as the voice of local families in education, paired with our expertise in educational issues, delivers results on issues of all types and sizes. See examples of just some of the ways we shape and change policies, programs, procedures and the overall school climate and culture. As a membership organization, we adopt positions based on our mission, research, and/or previous positions of Virginia or National PTA. Learn more about how position statements and resolutions operate in our association, and find National, State and local positions developed through drafting, creating and voting on position statements and resolutions. Where do you find out who your school board member is? Whom do you contact when you have an issue? We've put together information for PTA members and prospective members so that everyone can be empowered to advocate for their family, and for every child.

FCCPTA CITIZEN ADVISORY WORK
FCCPTA is invited to hold a seat on several FCPS School Board Citizen Advisory Committees. This significant work is one of the main drivers of our advocacy. It gives us a seat at the table, where we are part of crucial conversations and decisions. Learn more about these groups, what they do, and how you can give input to be part of the conversation.

TAKE ACTION
Empowering families to advocate every day is a priority for FCCPTA. We create awareness and educate members and the entire community so they can be effective advocates. We also invite input and engagement from members and the community to inform and guide our work. Learn how you can take action in your own school and how to be part of our larger advocacy as the FCCPTA community.
https://www.fccpta.org/advocacy
In collaboration with SOCODEVI, the WVL-Musoya project was developed four years ago and now intervenes in five different regions of Mali. Intended to work towards women’s equality and rights, the project gave rise to an advocacy campaign in the Koulikoro region. The WVL-Musoya project aims to support women’s rights organizations and networks in order to improve the services offered to women. The project reaches several regions in Mali and involves many regional partners. It allows the services offered to women and girls to be optimized in a spirit of cooperation with these organizations. Their collaboration helps create spaces for safe exchange, which strengthens the skills, capacities, and knowledge of Malian women’s networks about law and policies. Supporting local and regional actions helps raise the population's awareness of gender equality issues. This dynamic promotes the introduction of positive and sustainable changes in Mali.

With the financial support of the WVL-Musoya project, many initiatives have been created, such as an advocacy and awareness campaign against female genital mutilation and the forced marriage of girls in Mali. This campaign is part of the project and aims to inform the population and raise awareness about gender-based discrimination. Efforts against female genital mutilation are part of governmental initiatives, but they have so far lacked tangible results. With the support and participation of the WVL-Musoya project’s stakeholders, this campaign helped expand knowledge about genital mutilation among local communities. By facilitating access to medical and legal knowledge on female genital mutilation, the campaign allows women and girls to better protect themselves against these forms of discrimination. This collaboration has left Malian women better equipped with knowledge of the real consequences of these practices, and aware that legal instruments exist to frame the ban on female genital mutilation. This new knowledge is a real asset for Malian women and girls, as they can now better protect themselves.
https://ceci.ca/en/news-events/a-successful-advocacy-campaign-for-the-wvl-musoya-project
Volunteer Family Advocates
Are you interested in volunteering with Doras Luimní as a Volunteer Family Advocate? We are currently accepting applications from interested individuals in Limerick who have experience working in the community, supporting people from a migrant background to access services and information. Volunteers ...

War-Torn Children exhibition
The War-Torn Children exhibition launches in Limerick on Monday 17th July at 7pm in the CB1 Gallery. The aim of the exhibition is to raise awareness of the human impact of war and injustice, and to promote a culture of hospitality ...

Vacancy - Translator, Interpreter & Cross-Cultural Worker
Doras Luimní is currently recruiting for the position of Translator, Interpreter & Cross-Cultural Worker. The overall objective of the post is to provide interpretation, translation and cross-cultural support for recently arrived refugees. It will involve assisting staff, volunteers and relevant services ...

Reporting Racism
Doras Luimní is a member of the European Network Against Racism (ENAR) Ireland. As part of this network, we assist people who have witnessed or experienced racism to report the incident using this online third party mechanism: www.ireport.ie If you have witnessed ...

Advice & Information Centre
We provide information and advice on immigration related issues. We provide a free professional service assisting migrants in accessing their rights and entitlements. Some of the most common issues we deal with are: Leave to Remain and Subsidiary Protection Applications, Family Reunification ...

Doras Luimní is an independent, non-profit, non-governmental organisation working to support and promote the rights of all migrants living in Limerick and the wider Mid-West region. We work to change the lives of migrants, to change legislation and to change society. Our vision for Ireland is a society where equality and respect for the human rights of migrants are social norms. Our mission is to promote and uphold the human rights and well-being of migrants through personal advocacy, integration development and collaborative advocacy campaigns at the local and national level. Doras Luimní is registered as a company and has charitable status.
http://dorasluimni.org/?option=com_content&view=article&id=280&Itemid=100