Letter from the Editor: Self-love
Summer is on its way. Here in Medical News Today's hometown of Brighton, United Kingdom, the temperature is rising, seagulls are preparing to steal the food of unsuspecting tourists, and our fingers are hovering over desk fans in anticipation of the warmer weather.
The MNT team used their creativity to decorate some pebbles with positive messages.
It’s at this time of year that I start to get excited about all the things to come: barbecues, beach days, light, warm evenings, and the prospect of leaving the house without a jacket and scarf.
It’s at this time of year, however, that my body confidence begins to decline. The thought of wearing a swimsuit at the beach or even revealing my legs in a dress or shorts fills me with fear.
I’m certainly not alone here. According to the UK’s Mental Health Foundation, 1 in 5 adults in the country have felt shame within the last year as a result of their body image, and more than a third have felt anxious or depressed because of concerns over body image.
Self-love is vital for our mental health and well-being, and we could all do with a little more of it. With this in mind, we decided to share some messages of self-love in honor of Mental Health Month this May — in the form of rock painting.
“You are important.” “You rock!” “Love yourself. You are worth it.”
These are just some of the positive messages that the MNT team thoughtfully painted on pebbles and placed around various locations in Brighton for people to discover. It’s our hope that finding one of these pebbles will give someone a little boost of confidence.
For those of you who are curious as to why so many studies are conducted in mice and how they can possibly be relevant to human health, I urge you to read our article on this very subject. It may very well change your outlook on medical research.
I’ll be back next month with the latest news on what’s been happening at MNT HQ.
I was contacted recently by Tom Hanson about the launch of Open Education, a new blog focused on—you guessed it—open education. As frequent readers of Mission to Learn know, I mention open education here with some regularity and I am more than happy to announce any new blog or other site that continues to boost awareness and adds to the thinking on the topic.
Additionally, I found the communication from Tom serendipitous, as I feel there is a natural bridge between open education and the concept of the digital curator that has featured strongly in my recent posts. One of the significant challenges open education faces is to avoid simply turning into a proliferation of independent content initiatives that results in a lot of interesting but highly disjointed and disorganized bits. Effective curatorial approaches are essential to leveraging the value of open educational content.
To a large extent, the role of digital curators is tied to the existence of a viable and sustainable digital commons, a “place” where it is possible for anyone to contribute content and benefit from the content of others. The opportunities and challenges of this commons are the focus of the most recent three postings on Open Education, beginning with The Digital Commons – Left Unregulated, Are We Destined for Tragedy?
Readers with a bit of background in economics or other social sciences may recognize the reference to Garrett Hardin’s classic essay “The Tragedy of the Commons” in this title. For those not familiar with the essay (one I would recommend as foundational reading for developing an understanding of “commons” issues in general, and our current environmental challenges in particular), the gist of it is that when a finite public resource is exposed to unregulated public demand, the resource is bound to be destroyed. There is, as Hardin puts it, no “technical” solution to this sort of problem. It can only be solved by what he calls “mutual coercion mutually agreed upon”—in plain English, the sort of regulation represented by the American tax system, which we all grumble about at times and yet all recognize (well, most of us) as better than a voluntary system based on the conscience of individual tax payers.
The Open Education post juxtaposes Hardin’s original essay with a more recent article, also titled The Tragedy of the Commons, by Nobel prize-winning economist Daniel McFadden. McFadden, writing in 2001, argues that
The problem with digital information is the mirror image of the original grazing commons: Information is costly to generate and organize, but its value to individual consumers is too dispersed and small to establish an effective market. The information that is provided is inadequately catalogued and organized. Furthermore, the Internet tends to fill with low-value information: The products that have high commercial value are marketed through revenue-producing channels, and the Internet becomes inundated with products that cannot command these values. Self-published books and music are cases in point.
These comments, and parts of the rest of the article seem almost quaint in the aftermath of the rise of Web 2.0, the long tail, and Google, and what seems to be the imminent demise of the recording industry as we have historically known it. Still, the threat that McFadden points to—that ultimately commercial interests will gain the upper hand in management of the digital commons—has hardly been eradicated.
Open Education’s viewpoint, with much reference to Lawrence Lessig, is that the digital world simply does not mirror the physical world where Hardin’s thinking applies and that the “mutual coercion mutually agreed upon” solution that McFadden seems to feel is inevitable, is in fact unnecessary and highly undesirable. The implication is that there is, in fact, a technical solution.
It is easy to agree in spirit, but as one of the comments on the posting suggests, even given that the Internet represents a virtual and seemingly infinite space, it is still constrained by the physical hardware needed to create and sustain it, and perhaps more importantly, the energy needed to run that hardware. That, and the current need for physical cabling or satellites to transmit data bring the Internet very much back into the world of finite resources. Whether there is a technical solution to these issues remains to be seen. In the meantime, the large telecommunications providers and others with vested commercial interests, will certainly keep pushing hard to assert more control over the commons represented by the Internet.
Readers, content contributors, and would-be digital curators across the Web certainly have their own vested interest, whether they are conscious of it or not, in understanding the digital commons and the potential threats to it. The Open Education series is one good place to start learning more about the issues involved. I look forward to seeing how this new blog evolves.

Source: https://www.missiontolearn.com/the-tragedy-of-open-education-a-look-at-a-new-blog/
Responsibilities:
Engage with IT department, stakeholders, management, marketing and other teams in identifying and analyzing opportunities to implement business requirements.
Gather and analyze the high level (epic) tasks and requirements and decompose into individual tasks, subtasks, and human actions.
Create and execute project work plans and revise as appropriate to meet changing needs and requirements.
Identify resources needed and assign tasks to in-house developers/QA and project team members.
Effectively track and report progress, provide regular updates to upper management.
Support effective communication between the project and QA teams and the Support Desk on user issues.
Continuously explore the customer experience and internal project-related business processes, and propose process optimizations and improvements.
Required skills for this position are the following:
2‐3 years of business analysis experience.
2‐3 years of project management experience.
Basic knowledge of product management and technical background.
In-depth understanding of the software development life cycle.
Practical experience with agile methodology.
Experience in running projects or software in a live environment.
Experience in drawing up a project plan (roadmap), timing, project reporting, writing project documentation, requirements decomposition.
Good knowledge of Jira entities.
Ability to translate conceptual requirements into business and functional requirements.
Desired Skills:
Launching fintech or banking software or projects in a live environment from scratch.
Building software prototypes in the scope of user flow and UI screens.
Our ideal candidate has the following experience:
1. Active participation in the development of web-products of the company: prototyping, design, development, management, support.
2. Control of project development processes throughout the cycle.
3. Analysis of business requirements.
4. Optimization of the company's product development processes, ensuring the scalability of those processes.
5. Improvement of the existing company products.
6. Communication with international and European payment providers to reach agreements on connecting their payment instruments, and support for the full connection cycle, describing the processes of each method.

Source: https://www.talmix.com/projects/002a3442-ed7b-4423-9208-31947348ffef/project-product-manager
Yet there is evidence that the EU itself has launched its own assault on equality for some time, not least through its ruthless austerity programmes.
TUC general secretary Frances O’Grady claimed that only EU membership can defend gains in the workplace, ranging from protection against pregnancy discrimination to fairer pay, holiday and pensions.
However, Labour MP Kate Hoey immediately pointed out that women would continue to be protected by British laws won by trade unions if this country left the EU.
“All the benefits for women on equal pay and equal rights have been won by the hard work and campaigning of trade unionists and campaigners for equality.
“Maybe the TUC should speak to Greek women workers and see how the EU has treated them before producing such a biased report,” she remarked.
More about that later. Claims by those supporting EU membership that women’s rights have been handed down by a benevolent EU have gone largely unchallenged for many years.
The truth, of course, is very different. The fight here for equal wages dates back hundreds of years, including the fact that the TUC passed a unanimous vote in support of equal pay in 1888.
The Labour Party included a Charter of Rights for all employees in its 1964 manifesto, including the right to equal pay for equal work and the Harold Wilson government introduced Barbara Castle’s Equal Pay Act in 1970.
This Act was the result of mounting pressure from British workers, including strike action by the Ford women sewing machinists at Dagenham in 1968 and vigorous campaigning by the National Joint Action Campaign for Women’s Equal rights culminating in a massive demonstration in 1969.
Pro-EUers ignore these developments and point to the fact that the Treaty of Rome, which established EEC law in 1957, set out the principle of equal pay for work of equal value in Article 119 — now Article 141 of the Treaty of Amsterdam.
However, highly respected discrimination law expert Richard Townsend Smith pointed out in 1989 that, far from being an example of the progressive nature of the EEC, it was included largely as a concession to the French “who already had equal pay legislation and feared that they would be at a comparative disadvantage.”
So we can thank French workers and their struggles, not EU institutions, for any equal pay laws.
This also explains why the European Commission took out infringement proceedings against Britain in the Court of Justice in 1982 under the provisions of Article 119 of the Treaty that widened the scope of equal pay to cover work of equal value.
It is at this point that Europhiles in the labour movement began to argue that any improvements in workers’ rights can only be won at a European level and abandoned the idea of using national structures to democratically change British law.
They also point to the fact that the EU has passed considerable equality legislation as part of the Social Chapter, including that on maternity protection, parental leave rights, part time work, working time, workers with family responsibilities and child care.
Yet much British social equality legislation predates the EU, for example the Race Relations Act 1965 and 1968, the Chronically Sick and Disabled Act 1970 and the Sex Discrimination Act 1975.
Legislation on these issues had long been fought for by workers and their organisations.
Moreover, the EU still has a significant gender pay gap in practice 60 years after the adoption of the Treaty of Rome.
In fact equality legislation varies wildly across all member countries.
What Europhiles don’t want you to know is that, at 52 weeks, Britain has about the best maternity leave in Europe — the statutory minimum under EU law is 14 weeks.
Britain is not the best for maternity pay which is paid on a variable scale with only the first six weeks paid at 90 per cent of salary.
A new Maternity Leave Directive first proposed in 2008, raising the minimum leave to 20 weeks, was quietly withdrawn by the European Commission on July 1 2015.
EuroActiv.com says the commission, under the pretext of simplifying EU law under the 2014 REFIT exercise, wanted to kill the draft, and the move was seen as an attempt to dismantle women’s rights and gender equality in the EU institutions.
According to the European Women’s Lobby officer Mary Collins, “Rising conservative and religious forces and far-right political actors are impacting negatively on women’s rights and are calling into question the very notion of rights — especially sexual and reproductive rights — that were hard fought for by previous generations of women and men.”
She went on to say that the economic crisis and austerity measures have been “used as an excuse” to dismantle gender equality across all member states citing Slovenia where women used to enjoy 100 per cent of salary while on maternity leave, which has been reduced “by 90 per cent or maybe more” over the last years.
According to the EuroActiv report, maternity proposals are the victim of the commission’s “better regulation” axe-man Frans Timmermans.
Malin Bjork, the Swedish MEP, said: “This threat to get rid of the Maternity Leave Directive is serious because it contradicts the European Union’s so-called commitment to gender equality and effective work-life balance for women and men in Europe.
“It will also create a dangerous precedent for the ‘better regulation’ agenda (REFIT), which is sacrificing social standards in the name of administrative burdens,” she said.
The idea that an EU that has imposed austerity on millions, particularly in countries like Greece, and is enforcing mass privatisation and TTIP is somehow ideologically wedded to equality of any kind over the interests of corporate capital is absurd.

Brian Denny is spokesman for Trade Unionists Against the EU.

Source: https://www.workersofengland.co.uk/w-e-u-news/the-eu-is-far-from-being-a-guarantor-of-workers-rights/
On October 12th, the Middleton Public Library held a community panel discussion of Bryan Stevenson’s Just Mercy. This event sparked much discussion about social injustices and issues of racism in Dane County. Five Middleton community leaders have organized a community-wide follow-up workshop in the wake of the book discussion to address racial inequalities in Dane County.
The event, entitled “Equity vs. Equality: An Examination of Racial Inequalities That Exist in Dane County,” will take place from 9:00 AM to 12:00 PM in the courtroom at the Middleton Police Station, which is located at 7341 Donna Drive in Middleton. The event will be co-led by Percy Brown, Director of Equity and Student Achievement at Middleton-Cross Plains Area School District, and Laura Love, Director of Secondary Education at Middleton-Cross Plains Area School District.
Participants in the workshop will discuss current racial inequalities in Dane County, what actions are currently being taken to combat these inequalities within the community, and brainstorm other ways to tackle racial inequalities in the community.
One of the planners of the event, Middleton’s Chief of Police Charles Foulke, says, “The Equity vs. Equality training is a logical step in maintaining the momentum that community leaders have been building to address this very real problem [of racial inequality]. I am pleased to be part of the planning team for this training and feel the Middleton Police Department can be part of the solution.” The Middleton Police Department has been actively engaging with the themes in Just Mercy.
The event costs $10 to participate in, and scholarships are available for those in need.
To register for the event, pick up a form at the Middleton Public Library or the Middleton Outreach Ministry Office. You can also register online.
You can email Jim Iliff at [email protected] with any questions you might have or to apply for a scholarship to attend the event.

Source: https://gobigread.wisc.edu/tag/racial-inequalities/
3-hydroxyisobutyric aciduria is an organic aciduria with a poorly understood biochemical basis. It has previously been assumed that deficiency of 3-hydroxyisobutyrate dehydrogenase (HIBADH) in the valine catabolic pathway is the underlying enzyme defect, but more recent evidence makes it likely that individuals with 3-hydroxyisobutyric aciduria represent a heterogeneous group with different underlying mechanisms, including respiratory chain defects or deficiency of methylmalonate semialdehyde dehydrogenase. However, to date methylmalonate semialdehyde dehydrogenase deficiency has only been demonstrated at the gene level for a single individual. We present two unrelated patients who presented with developmental delay and increased urinary concentrations of 3-hydroxyisobutyric acid. Both children were products of consanguineous unions and were of European or Pakistani descent. One patient developed a febrile illness and subsequently died from a hepatoencephalopathy at 2 years of age. Further studies were initiated and included tests of the HIBADH enzyme in fibroblast homogenates, which yielded normal activities. Sequencing of the ALDH6A1 gene (encoding methylmalonate semialdehyde dehydrogenase) suggested homozygosity for the missense mutation c.785C>A (S262Y) in exon 7, which was not found in 210 control alleles. Mutation analysis of the ALDH6A1 gene of the second patient confirmed the presence of a different missense mutation, c.184C>T (P62S), which was also identified in 1/530 control chromosomes. Both mutations affect highly evolutionarily conserved amino acids of the methylmalonate semialdehyde dehydrogenase protein. Mutation analysis of the ALDH6A1 gene can reveal a cause of 3-hydroxyisobutyric aciduria, which may present with only slightly increased urinary levels of 3-hydroxyisobutyric acid if a patient is metabolically stable.
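For readers unfamiliar with the variant nomenclature used in the abstract above (e.g. c.785C>A for the S262Y substitution): the number gives the position in the coding DNA sequence, and the affected codon (amino-acid) number follows from it. The sketch below is illustrative only — the regex handles just simple single-base substitutions, a deliberate simplification of full HGVS notation:

```python
import re

# Minimal parser for simple HGVS-style coding-sequence substitutions
# such as "c.785C>A". Real HGVS notation covers many more variant types
# (deletions, insertions, splice-site variants), which this ignores.
HGVS_SUB = re.compile(r"^c\.(\d+)([ACGT])>([ACGT])$")

def parse_substitution(hgvs):
    """Return (cDNA position, reference base, alternate base)."""
    m = HGVS_SUB.match(hgvs.replace(" ", ""))
    if m is None:
        raise ValueError(f"not a simple c. substitution: {hgvs!r}")
    pos, ref, alt = m.groups()
    return int(pos), ref, alt

def codon_number(cdna_pos):
    """Codon (amino-acid) number for a 1-based cDNA position."""
    return (cdna_pos - 1) // 3 + 1

# The two ALDH6A1 variants reported above:
for variant in ("c.785C>A", "c.184C>T"):
    pos, ref, alt = parse_substitution(variant)
    print(variant, "-> codon", codon_number(pos))
# c.785C>A falls in codon 262, consistent with S262Y;
# c.184C>T falls in codon 62, consistent with P62S.
```

Note how the codon arithmetic independently confirms the protein-level labels quoted in the abstract.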
Aminoacylase 1 (ACY1) deficiency is a recently described inborn error of metabolism. Most of the patients reported so far have presented with rather heterogeneous neurologic symptoms. At this moment, it is not clear whether ACY1 deficiency represents a true metabolic disease with a causal relationship between the enzyme defect and the clinical phenotype or merely a biochemical abnormality. Here we present a patient identified in the course of selective screening for inborn errors of metabolism (IEM). The patient was diagnosed with autistic syndrome and admitted to the Children's Memorial Health Institute (CMHI) for metabolic evaluation. Organic acid analysis using gas chromatography-mass spectrometry (GC-MS) revealed increased urinary excretion of several N-acetylated amino acids, including the derivatives of methionine, glutamic acid, alanine, glycine, leucine, isoleucine, and valine. In Epstein-Barr virus (EBV)-transformed lymphoblasts, ACY1 activity was deficient. The mutation analysis showed a homozygous c.1057C>T transition, predicting a p.Arg353Cys substitution. Both parents were heterozygous for the mutation and had normal results in the organic acid analysis using GC-MS. This article reports the findings of an ACY1-deficient patient presenting with autistic features.
BACKGROUND Propionic acidemia is an inherited disorder caused by deficiency of propionyl-CoA carboxylase. Although it is one of the most frequent organic acidurias, information on the outcome of affected individuals is still limited. STUDY DESIGN/METHODS Clinical and outcome data of 55 patients with propionic acidemia from 16 European metabolic centers were evaluated retrospectively. Thirty-five patients were diagnosed by selective metabolic screening, while 20 patients were identified by newborn screening. Endocrine parameters and bone age were evaluated. In addition, IQ testing was performed and the patients' and their families' quality of life was assessed. RESULTS The vast majority of patients (>85%) presented with metabolic decompensation in the neonatal period. Asymptomatic individuals were the exception. About three quarters of the study population was mentally retarded; median IQ was 55. Apart from neurologic symptoms, complications comprised hematologic abnormalities, cardiac diseases, feeding problems and impaired growth. Most patients considered their quality of life high. However, according to the parents' point of view, psychological problems were four times more common in propionic acidemia patients than in healthy controls. CONCLUSION Our data show that the outcome of propionic acidemia is still unfavourable, in spite of improved clinical management. Many patients develop long-term complications affecting different organ systems. Impairment of neurocognitive development is of special concern. Nevertheless, self-assessment of quality of life by the patients and their parents yielded rather positive results.

Source: https://pub.h-brs.de/solrsearch/index/search/searchtype/authorsearch/author/%22Walter%2C+Melanie%22
“It will all come together, I can visualize it,” says Zenon Dance Company Founder and Artistic Director Linda Z. Andrews when I pull her aside between rehearsing Daniel Charon’s “The Storm” and Colleen Thomas’ “Catching Her Tears (44° N, 93° W)” at the Cowles Center for Dance and the Performing Arts. The week leading into a new performance is an especially busy, stressful time for any dance company. As an ensemble moves its work from the rehearsal space to the stage, performers and crew determine spacing, lighting and audio cues, and there’s an inevitable flurry of last-minute decisions that seals the audience experience. And that time of transition is precious for any performance group – but this particular occasion has an undercurrent of bittersweet energy.
For Linda and for Zenon Dance Company, this is the last tech week in their 36-year run. Due to a change in corporate and foundational support, Zenon Dance Company will take to the stage to awe audiences one last time before that decades-long run comes to an end.
Minnesota has been fortunate to have a community of dance artists, independent choreographers and small- to mid-sized dance companies that represent a slew of genres. But Zenon’s name recognition is synonymous with the Twin Cities dance community – and it’s hard to imagine the landscape without them.
As the dancers rehearse, there’s a palpable note of mourning in their movement and in their voices – but their emotion is also tangled up in a joy rooted in Zenon’s legacy. “The last few weeks have been focused on the job at hand,” says current company member Sarah Steichen Stiles. “I don’t know that it [Zenon’s closing] will really hit me until next October.” That’s when, in a different reality, the company would be preparing for the next show.
“From my perspective,” says Andrews, “I’m just grieving. I am so sad. I feel like it’s premature. I would have liked to have continued another five years at least… But with the funding pulled out from under us like this, it’s just coming way too soon.”
In recent history, the funding from corporate sponsors, donors and foundations that arts organizations like Zenon once relied on has taken a turn to focus on individual artists. Where there is give, there is take.
Aiming to elevate the level of dance in the Twin Cities, Andrews founded Ozone Dance School and two dance companies, Rezone Dancers and Just Jazz Dancers, in 1979. The companies merged in 1983, forming Zenon Dance Company, which presented both modern and jazz dance. Acclaimed jazz choreographer Danny Buraczeski joined as Zenon’s co-artistic director from 1989 – 1991 and has continued to choreograph work for Zenon, including “Song Awakened,” which will be performed in the final concert.
Unique in today’s landscape, Zenon Dance Company is a repertory company, which presents work by numerous choreographers as opposed to showcasing one singular voice and vision.
“I decided that it would be more interesting to me, especially to have a more eclectic repertory and to be able to choose choreographers that I was attracted to that I felt had some talent and give them an opportunity to grow and produce work with Zenon,” says Andrews.
Andrews has primarily scouted emerging and established choreographers based in Minnesota and New York, often commissioning them in the critical moment right before they hit it big. Over its tenure, the company has performed work by nearly 200 choreographers, including Bill T. Jones, Bebe Miller, Doug Varone, Tere O’Connor, Kyle Abraham, Seán Curran, Morgan Thorson, luciana achugar and Wynn Fricke.
“I also felt like, as a dancer, I would prefer this type of company if I could continue dancing professionally because I’d be constantly challenged. And this constant challenge and artistic growth is, I think, what has kept so many Zenon dancers dancing for Zenon for multiple years.”
Zenon’s reputation for dancer retention is a testament to the effectiveness of that philosophy. Six-year veteran Steichen Stiles may seem a relative newcomer compared to Leslie O’Neill, who has danced with the company for 13 years, or former company members Greg Waletski and Denise Armstead, who each spent more than 20 years dancing with Zenon. But Steichen Stiles admits that it was the company’s reputation for excellence and its continued focus on working with a diverse assortment of choreographers that both drew her and kept her there. “It fit a lot of my hopes in a dance company. It has something for everyone, both for the dancers and for the audiences.”
This sentiment also speaks to the talent and range Zenon company members have, as well as their commitment to constantly adapt to new movement styles and ways of working.
Former company member and current stage manager Stephen Schroeder said in a 2011 interview with TPT, “Working with choreographers at Zenon, it’s really quite amazing. They come in for whatever short amount of time – three weeks, two weeks, 10 days – and then we have to take as much as we can from that and from them to try and get their stylistic qualities, and try and get the way they move into our bodies. And so you basically have to open yourself up and become like a clean slate or even the paint for the choreographer to then make the piece.”
Andrews admits that, beyond Zenon’s final performance on Sunday, June 16th, 2019, life is a question mark. She is selling her home, getting certified as a yoga instructor and looking forward to spending time with her daughters. But beyond that, her plans still have hazy edges. “I’m going to try to keep the school going. The school is something I want to leave, which I started even earlier than Zenon [Dance Company].”
Many of the company members are also still focused on carving out a life after Zenon. Steichen Stiles shared that she will miss this family of dancers that has banded together, performed together, taken risks together with a shared mission of making the best artistic works together.
“Through the years, some really difficult, desolate years,” Andrews admits, “we were able to survive, and we were able to flourish artistically, never really financially, but artistically, which was what was important to me. My ultimate goal [was to achieve] artistic excellence on the stage. I think we’ve been able to fulfill that, and that was a huge driving force behind my whole life.”
Zenon Dance Company performs its final shows June 13 through 16 at The Cowles Center for Dance and the Performing Arts.
COMPANY: Tristan Koepke, Scott Mettille, Leslie O’Neill, Laura Osterhaus, Sarah Steichen Stiles
JUNIOR COMPANY: J.T. Weaver
APPRENTICE: Emila Bruno
GUEST ARTISTS: Lauren Baker, Mary Ann Bradley, Patrick Jeffrey, Alyssa Soukup
CHOREOGRAPHERS: luciana achugar, Danny Buraczeski, Daniel Charon, Wynn Fricke, Colleen Thomas
Disclaimer: Brittany Shrimpton taught at Zenon Dance School from 2009 -2014.
________________________________________________________________________
This story is made possible by the Arts and Cultural Heritage Fund and the citizens of Minnesota.
________________________________________________________________________
The Twin Cities flourish with dancers and dance companies of every variety imaginable. Learn more about “The Minneapolis Brothers Rejuvenating Native Hoop Dance with Hip-Hop.”
Revisit the dynamic collaboration between local dance company Tu Dance and indie folk band Bon Iver in this behind-the-scenes peek at their show “Come Through.”
Discover how Somali youth living in the Twin Cities are keeping their dance traditions alive in this story about the Somali Museum Dance Troupe.

Source: https://www.tptoriginals.org/zenon-dance-company-takes-the-stage-one-last-time/
During your university career, you will be asked to complete many writing assignments, including discussions, essays, research projects, case studies, analyses, reflections and myriad others. Use the resources in this guide to help you plan, develop, write, and edit your papers and projects.
CityU Smarthinking
Students who need help with math, business, and writing courses can visit http://www.smarthinking.com
- Connect With a Tutor and interact live.
- Submit your Writing for any class to our Online Writing Lab.
- Submit a Question and receive a reply from a tutor.
All students receive 10 free hours per academic year (July-June). For a username and password, please contact [email protected]
The Basics
These resources will help you with the fundamentals of academic writing, from grammar and sentence structure, to guiding you through your first university writing assignments.
Getting Started with Writing Tasks
Online academic skills resources (University of New South Wales): Get tips on essay and assignment writing, as well as tips for test-taking, critical reading, and note-taking, among others.
The Online Writing Lab (OWL): The OWL at Purdue University provides hundreds of writing resources, including tutorials on the writing process, common writing assignments, and academic writing.
Grammar, Punctuation, & Structure
Grammar girl: Quick and dirty tips for writing: A fun and sometimes quirky look at grammar, punctuation, word choice, and more. Find a variety of articles and podcasts from Mignon Fogarty, aka “Grammar Girl.”
See the OWL at Purdue’s guides to GRAMMAR; PUNCTUATION; and MECHANICS (sentence structure).
Academic Writing: Beyond the Basics
These resources are for intermediate/advanced writers and upper-division or graduate students who are tackling such assignments as research or capstone projects, theses and dissertations, or those who are interested in publishing their work.
Baban, S. M. J. (2009). Research: The journey from pondering to publishing. Kingston, Jamaica: Canoe Press.
Murray, N., & Hughes, G. (2008). Writing up your university assignments and research projects: A practical handbook. New York, NY: Open University Press.
Sumerson, J. B. (2013). Finish your dissertation: Don’t let it finish you! Somerset, NJ: John Wiley & Sons.
Sword, H. (2012). Stylish academic writing. Cumberland, RI: Harvard University Press.
Writing Academic Proposals: Conferences, Articles, and Books (OWL at Purdue)
Resources for Faculty
These resources are for faculty who are teaching writing skills or embedding writing skills in their curriculum.
Dartmouth Writing Program. (2013). Online writing materials (pedagogies, methods, and more).
Hinkel, E. (2004). Teaching academic ESL writing: Practical techniques in vocabulary and grammar. Mahwah, NJ: L. Erlbaum Associates.
Wagenmakers, E. J. (2009). Teaching graduate students how to write clearly.
BACKGROUND
Virtual and/or physical computing machines that are running applications can undergo times of stress. For example, a machine may be stressed when it has a large volume of network requests being serviced, when it has a large amount of its processing or memory capacity being used at a particular time or for an extended period of time, when a large number of requests (e.g., write requests or read requests) are queued at a particular time or for an extended period of time, etc. During those times of stress, the machines' performance can be positively or negatively affected by one or more configuration settings (e.g., registry keys, file versions, number of network cards, etc.). Identifying those settings that can have a positive or negative effect on a machine's performance under stress has generally been done with a trial-and-error approach.
Some systems facilitate collection of information from computer systems and applications. Such systems can provide and report information that may assist in identifying issues with product design or code.
SUMMARY
The tools and techniques described herein relate to inferring the effects of configuration points on the performance of computing machines to which those configuration points apply. As used herein, configuration points are points of configurations that can be changed to different values, i.e., different configuration settings. For example, a configuration point may be able to have configuration settings that are numerical values within some range. As another example, a configuration point may have configuration settings that can be varied between on and off, or some similar indication (yes or no, high or low, etc.). As yet another example, a configuration point may have configuration settings that can be varied between a discrete number of different options, such as options selected from a menu.
In one embodiment, the tools and techniques can include collecting configuration data and performance data from computing machines running a target program. Periods of stress for the computing machines can be identified using the performance data, and a set of the computing machines can be grouped under a stress profile using the performance data. One or more configuration points can be identified on the set of machines, and an effect of each of the configuration points on performance of the set of machines can be inferred using the configuration data and the performance data.
In another embodiment of the tools and techniques, periods of stress for computing machines running a target program can be identified by analyzing performance data. A set of the computing machines can be grouped under a stress profile using the performance data. Additionally, one or more configuration points on the set of machines can be identified, and an effect of each of those configuration point(s) on performance of the set of machines can be inferred using the performance data and configuration data for the set of machines. The inferred effect(s) of the configuration point(s) can be used to determine a baseline set of configuration settings.
This Summary is provided to introduce a selection of concepts in a simplified form. The concepts are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Similarly, the invention is not limited to implementations that address the particular techniques, tools, environments, disadvantages, or advantages discussed in the Background, the Detailed Description, or the attached drawings.
DETAILED DESCRIPTION
Embodiments described herein are directed to techniques and tools for improved inference of how configuration points affect performance of computing machines. Such improvements may result from the use of various techniques and tools separately or in combination.
Such techniques and tools may include identifying a set of configuration settings that can be either positively or negatively related to a computing application's or computing machine's performance, such as performance of a server or application. Identification of these configuration settings can be done through mining of configuration and performance data collected from sets of computing machines running a target program being analyzed. The mining can include identifying periods of stress, grouping machines under similar stress, identifying configuration points on those machines, and inferring those configuration points' effect on performance of those machines. Such effects can be inferred using configuration data and performance data even if a definite causal mechanism between the configuration point or setting and the performance is not identified.
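As a rough illustration of the mining steps just described (and not the patent's actual implementation), the following Python sketch identifies machines under a stress profile, splits them by a performance level, and tallies how settings are distributed across the two groups. All field names, thresholds, and data are invented for the example.

```python
from collections import defaultdict

# Illustrative record format: one sample per machine, with raw performance
# counters and a flat dict of configuration settings.
machines = [
    {"id": "m1", "req_per_sec": 950, "cpu_pct": 91, "latency_ms": 40,
     "config": {"net_cards": 2, "write_cache": "on"}},
    {"id": "m2", "req_per_sec": 970, "cpu_pct": 93, "latency_ms": 220,
     "config": {"net_cards": 1, "write_cache": "off"}},
    {"id": "m3", "req_per_sec": 50, "cpu_pct": 12, "latency_ms": 35,
     "config": {"net_cards": 1, "write_cache": "off"}},
]

def under_stress(m):
    """Example stress profile: high request volume combined with high CPU."""
    return m["req_per_sec"] > 500 and m["cpu_pct"] > 80

# 1) identify stressed machines and group them under the profile
stressed = [m for m in machines if under_stress(m)]

# 2) split the stressed group by a performance level (latency here)
good = [m for m in stressed if m["latency_ms"] <= 100]
poor = [m for m in stressed if m["latency_ms"] > 100]

# 3) for each configuration point, compare setting prevalence across groups
def prevalence(group, point):
    counts = defaultdict(int)
    for m in group:
        counts[m["config"][point]] += 1
    return dict(counts)

effects = {p: {"good": prevalence(good, p), "poor": prevalence(poor, p)}
           for p in ["net_cards", "write_cache"]}
```

In this toy run, the `write_cache` point ends up aligned with the good/poor split, which is exactly the kind of pattern the inference step looks for.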
Using the inferred effects on performance, a baseline set of configuration settings can be determined. The baseline set of configuration settings may have one or more positive inferred effects on performance and/or avoid one or more negative inferred effects on performance. The settings in the baseline set may be represented in different ways, such as an option chosen from a discrete set of options, a numerical value, a range of values, a combination of values and/or options, a negatively-stated value or option (which includes values or options except specified values or options), etc.
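The different representations mentioned above (a chosen option, a numerical range, a negatively-stated option) could be modeled as tagged entries that a checker matches against a machine's settings. This is a hypothetical sketch; every name and value in it is invented:

```python
def matches(baseline_entry, value):
    """Check a machine's value against one baseline entry."""
    kind = baseline_entry["kind"]
    if kind == "option":          # an option chosen from a discrete set
        return value == baseline_entry["value"]
    if kind == "range":           # a range of numerical values
        return baseline_entry["lo"] <= value <= baseline_entry["hi"]
    if kind == "not":             # negatively-stated: anything but these
        return value not in baseline_entry["excluded"]
    raise ValueError(f"unknown entry kind: {kind}")

baseline = {
    "write_cache": {"kind": "option", "value": "on"},
    "io_threads":  {"kind": "range", "lo": 4, "hi": 16},
    "driver_ver":  {"kind": "not", "excluded": {"1.0.3"}},  # known-bad value
}

machine_config = {"write_cache": "off", "io_threads": 8, "driver_ver": "1.1.0"}
violations = [p for p, entry in baseline.items()
              if not matches(entry, machine_config[p])]
```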
Thus, the tools and techniques described herein can include deterministically using collected data to determine a set of performance levels and performance effects, related to configuration settings and performance under stress. A performance level is some behavior seen on one or more instances of a technology. Performance levels can be identifiable via instrumentation of the running computing machines and/or management systems that are managing the computing machines. A configuration point is a measurable group of one or more configuration settings that may affect one or more performance levels.
Over time the baseline configuration set for a target application can change due to updated versions of software and patches, changes in ambient conditions, and/or other changes. Accordingly, the tools and techniques described herein can allow for updating of baseline configuration sets to account for such changes and/or to fine-tune baseline configuration sets.
As noted above, data collected from a set of machines can be used to create baseline configuration sets and then used to suggest changes to be made on computing machines.
Accordingly, one or more benefits can be realized from the tools and techniques described herein. The baseline configuration sets can be determined/inferred from the sets of machines running a specific application under varying configurations and varying types of stress. The baseline configuration sets may be communicated to computing machines and/or used to suggest changes to be made to configuration settings on computing machines. Such changes may improve performance of those machines.
The subject matter defined in the appended claims is not necessarily limited to the benefits described herein. A particular implementation of the invention may provide all, some, or none of the benefits described herein. Although operations for the various techniques are described herein in a particular, sequential order for the sake of presentation, it should be understood that this manner of description encompasses rearrangements in the order of operations, unless a particular ordering is required. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, flowcharts may not show the various ways in which particular techniques can be used in conjunction with other techniques.
Techniques described herein may be used with one or more of the systems described herein and/or with one or more other systems. For example, the various procedures described herein may be implemented with hardware or software, or a combination of both. For example, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement at least a portion of one or more of the techniques described herein. Applications that may include the apparatus and systems of various embodiments can broadly include a variety of electronic and computer systems. Techniques may be implemented using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Additionally, the techniques described herein may be implemented by software programs executable by a computer system. As an example, implementations can include distributed processing, component/object distributed processing, and parallel processing. Moreover, virtual computer system processing can be constructed to implement one or more of the techniques or functionality, as described herein.
I. Exemplary Computing Environment
FIG. 1 illustrates a generalized example of a suitable computing environment (100) in which one or more of the described embodiments may be implemented. For example, one or more such computing environments can be used as a computing machine providing data, a computing machine inferring effects of configuration on performance, and/or a computing machine that can implement configuration setting changes indicated by a set of baseline configuration settings. Generally, various different general purpose or special purpose computing system configurations can be used. Examples of well-known computing system configurations that may be suitable for use with the tools and techniques described herein include, but are not limited to, server farms and server clusters, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The computing environment (100) is not intended to suggest any limitation as to scope of use or functionality of the invention, as the present invention may be implemented in diverse general-purpose or special-purpose computing environments.
With reference to FIG. 1, the computing environment (100) includes at least one processing unit (110) and at least one memory (120). In FIG. 1, this most basic configuration (130) is included within a dashed line. The processing unit (110) executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The at least one memory (120) may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory), or some combination of the two. The at least one memory (120) stores software (180) implementing inferring effects of configuration points on computing machine performance.
Although the various blocks of FIG. 1 are shown with lines for the sake of clarity, in reality, delineating various components is not so clear and, metaphorically, the lines of FIG. 1 and the other figures discussed below would more accurately be grey and blurred. For example, one may consider a presentation component such as a display device to be an I/O component. Also, processors have memory. The inventors hereof recognize that such is the nature of the art and reiterate that the diagram of FIG. 1 is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the present invention. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “handheld device,” etc., as all are contemplated within the scope of FIG. 1 and reference to “computer,” “computing environment,” or “computing device.”
A computing environment (100) may have additional features. In FIG. 1, the computing environment (100) includes storage (140), one or more input devices (150), one or more output devices (160), and one or more communication connections (170). An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment (100). Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment (100), and coordinates activities of the components of the computing environment (100).
The storage (140) may be removable or non-removable, and may include computer-readable storage media such as magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (100). The storage (140) stores instructions for the software (180).
The input device(s) (150) may be a touch input device such as a keyboard, mouse, pen, or trackball; a voice input device; a scanning device; a network adapter; a CD/DVD reader; or another device that provides input to the computing environment (100). The output device(s) (160) may be a display, printer, speaker, CD/DVD-writer, network adapter, or another device that provides output from the computing environment (100).
The communication connection(s) (170) enable communication over a communication medium to another computing entity. Thus, the computing environment (100) may operate in a networked environment using logical connections to one or more remote computing devices, such as a personal computer, a server, a router, a network PC, a peer device or another common network node. The communication medium conveys information such as data or computer-executable instructions or requests in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The tools and techniques can be described in the general context of computer-readable media, which may be storage media or communication media. Computer-readable storage media are any available storage media that can be accessed within a computing environment, but the term computer-readable storage media does not refer to propagated signals per se. By way of example, and not limitation, with the computing environment (100), computer-readable storage media include memory (120), storage (140), and combinations of the above.
The tools and techniques can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment. In a distributed computing environment, program modules may be located in both local and remote computer storage media.
For the sake of presentation, the detailed description uses terms like “determine,” “choose,” “adjust,” and “operate” to describe computer operations in a computing environment. These and other similar terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being, unless performance of an act by a human being (such as a “user”) is explicitly noted. The actual computer operations corresponding to these terms vary depending on the implementation.
II. System and Environment for Inferring Effects of Configuration on Performance
FIG. 2 is a schematic diagram of a system (200) in conjunction with which one or more of the described embodiments may be implemented. The system (200) can include an inference system (210), which can access a data store (220). The data store (220) may be included in the inference system (210), or part of a separate system. The system (200) can also include target machines (230), which can each be running a target program (232) and an agent (234). The agent (234) on each machine (230) can access instrumentation to collect performance data (242) and configuration data (244) from the machine. Alternatively, the agent (234) may be running outside the target machine (230). For example, an agent (234) could be running on a machine that is managing one or more target machines (230) that are running the target program (232). Each agent (234) can be a program that runs in each target machine to identify support issues that may arise. The agent (234) may collect the data (242 and 244) using one or more of various techniques for collecting performance and configuration data from a running system, such as by making and receiving application programming interface calls, sending information request messages, polling states of physical devices, etc.
The target program (232) could be any of various programs, which could include one or more sub-programs or components. For example, a target program (232) can be a server application, such as a database server application, a Web server application, a file server application, etc. The target program (232) could be some other type of program, such as a word processing application, an operating system, etc.
The agents (234) can provide the performance data (242) and configuration data (244) to the data store (220). In one implementation, particular target machines (230) may provide such data (242 and 244) if user input has been provided at the target machine (230) to opt in or to decline to opt out of providing the data (242 and 244). Additionally, the agents (234) can take measures to avoid collecting personally identifiable information, and security precautions can be taken to protect data sent from the target machines (230) to the data store (220). The data store (220) can store the performance data (242) and configuration data (244), and can provide the data (242 and 244) to the inference system (210).
The inference system (210) can use the performance data (242) and configuration data (244) to infer effects of configuration points from the configuration data (244) on performance of the target machines (230) running the target program (232). In order to identify performance effects of configuration settings inside the system (200), the data (242 and 244) collected from the system can be mined to identify relationships of measurable configuration points to performance characteristics of particular target machines (230) under stress. Specific configuration points to be analyzed may be received in the inference system (210), such as where the configuration points are defined by user input. The configuration points may be any of various configuration points that can be set to different settings, such as registry keys, file versions, number of network cards, etc.
The data mining can include identifying periods of stress versus periods of non-stress for the target machines (230) being monitored. The periods of stress for the target machines (230) can be correlated and similar types of stress can be identified. This identification can be done by matching a stress profile that can include one or more stress types or characteristics. Stress profiles may be received from user input and/or modified based on analyzing the data (242 and 244). For example, if the target program (232) is a Web server program, a stress profile can be a count of requests per unit time that is above a specified level. As another example, if the target program (232) is a database server program, a stress profile can be a number of data writes and/or data reads in a database per unit time. As yet another example, a stress profile can be processor usage sustained over a predefined percentage of full capacity for a specified period of time. As yet another example, a stress profile can be memory usage sustained over a certain percentage of available memory for a specified period of time. Additionally, a stress profile can include a combination of different kinds of stress. For example, if the target program (232) is a database server program, the stress profile could be a number of write actions per unit of time being above a specified level, and a write queue including a specified number of write requests, both for a specified period of time. As another example, a stress profile could include a medium-level network request threshold combined with a specified disk input/output level threshold.
Referring now to FIG. 3, a schematic representation of analyzing machine representations will be discussed. This analysis may be done in the inference system (210) of FIG. 2. The machine representations can include information regarding respective target machines, including performance data and configuration data. The periods of stress meeting characteristics of a performance profile can be correlated so that the ungrouped machine representations (305) can be grouped according to stress profiles into grouped machine representations (310) (i.e., the machines can be grouped according to stress types). FIG. 3 illustrates one machine representation group, but the analysis could include grouping the representations into multiple groups, with each group meeting a different stress profile. The grouping can allow machines of similar expected performance characteristics (i.e., those under similar types of stress) to be analyzed together in the inference system.
The inference system can identify configuration settings on the machines in a group, and infer their effect on the machines' performance. From groups of machines under similar types of stress, but across machines having different performance levels, it can be determined for each setting being analyzed whether that setting positively or negatively affects a specified type of performance being analyzed. For example, the grouped machine representations (310) can be distinguished into groups according to the performance levels of the respective machines being represented and analyzed. As illustrated in FIG. 3, the machine representations (310) may be grouped into representations of “good” machines (322) exhibiting good performance of a specified type, and representations of “poor” machines (324) exhibiting poor performance of the specified type. Other levels could also be specified, such as a neutral performance level, a “very good” performance level, a “very poor” performance level, etc. The different performance levels can be defined by user input and/or automated grouping techniques. Also, the grouping of individual machines could be done automatically, and/or in response to user input. For example, a sorted list of machine performance levels can be displayed, and user input can be received to provide a specified cutoff between performance levels.
For each configuration point being analyzed, it can be determined whether a setting is more prevalent in the representations of good machines (322) or in the representations of poor machines (324). If a setting is not aligned with the representations of good machines (322) or the representations of poor machines (324) to a statistically significant extent (which may be determined according to specified parameters or determined directly from user input), the setting may be inferred to have a neutral effect on performance. For example, if the setting is the same across the representations of good machines (322) and the representations of poor machines (324), or if the setting is different across all the good and poor machine representations (310), it may be inferred that the setting does not affect the performance of the machines. As another example, if a configuration point is set to a first setting for the representations of good machines (322) but a second setting for the representations of poor machines (324), the configuration point can be said to affect performance. The first setting can be inferred to have a positive effect on performance, and the second setting can be inferred to have a negative effect on performance.
Once the settings and their relative effects on the machines' performance have been inferred as discussed above, good settings to be used (332) (i.e., those with an inferred positive effect on performance) and/or bad settings to be avoided (334) (i.e., those with an inferred negative effect on performance) can be specified as such in a baseline set of configuration settings (350).
Referring back to FIG. 2, the baseline set of configuration settings (250) can be used for future comparisons against the target machines (230) from which data was collected and/or other machines to identify issues that the machines may experience. For example, the baseline set of configuration settings (250) may be communicated to a machine (230), and the settings in the baseline set of configuration settings (250) may be suggested as settings for the machine (230). In one example, the settings of a machine (230) may be identified, and discrepancies between the settings of the machine and the baseline set of configuration settings (250) may be determined and presented. A tool, which can be a program module that is part of the agent (234) on the machine (230), may be used to change the configuration settings to match those of the baseline set of configuration settings (250). These changes may be made in response to an automated comparison to the baseline set of configuration settings (250), or the changes may be made in response to user input (user input indicating the changes, user input approving the changes, etc.).
After the baseline set of configuration settings (250) has been defined, those settings can be analyzed again at different times and/or using different machines to see if the baseline set of configuration settings (250) is still valid. This could result in the baseline set of configuration settings (250) being modified as a result of such additional analysis. Also, the baseline set of configuration settings (250) may be displayed and user input may be received to approve and/or modify the settings. For example, this user input may be received from one or more software development experts. Such a modified baseline set of configuration settings (250) can be used in the same way as discussed above, which can result in changes being made to settings of target machines (230) and/or other machines where the baseline set of configuration settings (250) is used.
III. Techniques for Inferring Effects of Configuration on Performance
Several techniques for inferring effects of configuration on performance will now be discussed. Each of these techniques can be performed in a computing environment. For example, each technique may be performed in a computer system that includes at least one processor and at least one memory including instructions stored thereon that when executed by the at least one processor cause the at least one processor to perform the technique (one or more memories store instructions (e.g., object code), and when the processor(s) execute(s) those instructions, the processor(s) perform(s) the technique). Similarly, one or more computer-readable storage media may have computer-executable instructions embodied thereon that, when executed by at least one processor, cause the at least one processor to perform the technique.
Referring to FIG. 4, a technique for inferring effects of configuration on performance will be described. The technique can include collecting (410) configuration data and performance data from a plurality of computing machines running a target program. Periods of stress for the computing machines can be identified (420) using the performance data. A set of the computing machines can be grouped (430) under a specified stress profile using the performance data, and one or more configuration points can be identified (440) on the set of machines. An effect of each of the one or more configuration points on performance of the set of machines can be inferred (450) using the configuration data and the performance data.
The inferred effect of the configuration point(s) can be used to determine a baseline set of configuration settings. At least a portion of the baseline set of configuration settings may be communicated to computing machine(s), and/or suggested to computing machine(s). For example, one or more changes on one or more computing machines may be suggested to match at least a portion of the baseline set of configuration settings. Additionally, one or more settings in a set of computing machine(s) can be changed to match at least a portion of the baseline set of configuration settings. Also, the baseline set of configuration settings may be changed after analyzing additional configuration data and performance data. For example, the baseline set of configuration settings may be changed in response to the analysis of the additional configuration and performance data.
Identifying (420) periods of stress may include receiving user input defining one or more stress definitions, and analyzing the performance data to identify one or more machines meeting at least one of the stress definition(s). Also, grouping (430) the set of machines under the specified stress profile can include analyzing the performance data to determine whether one or more machines producing the performance data meets the stress profile. The stress profile may define a type of stress as a combination of different types of stress, or a single type of stress. User input that defines the stress profile may be received.
Inferring (450) an effect of each of the one or more configuration points can include, for each machine in the set of machines, determining a performance level. For example, a performance level could indicate a level of positive effect on performance of a specified type, a level of negative effect on performance of a specified type, or a neutral effect on performance of a specified type. Inferring (450) an effect can include, for each of the configuration point(s), identifying machines having a particular setting for that configuration point and determining a performance level for that machine. Inferring (450) an effect may include producing a value representing a number (e.g., a percentage) of machines having a specified setting for a configuration point and a specified performance level. For example, an effect of a setting may be inferred to be a positive effect if a specified percentage (e.g., greater than 70%, greater than 80%, greater than 90%, etc.) of configuration points with that setting are on machines with a performance level that is determined to be good according to specified parameters.
Referring to FIG. 5, another technique for inferring effects of configuration on performance will be described. The technique can include identifying (510) periods of stress for multiple computing machines running a target program. The identification (510) can be done by analyzing performance data. A set of machines can be grouped (520) under a specified stress profile using the performance data. This grouping (520) can include determining that the set of machines meets characteristics of the stress profile. Grouping (520) the set of machines under the specified stress profile can include analyzing the performance data to determine whether one or more machines producing the performance data meets the stress profile. The stress profile may be defined in various ways to define a specified type of stress, such as by defining a combination of different types of stress or a single type of stress.
The technique can also include identifying (530) one or more configuration points on the set of machines. An effect of each of the one or more configuration points on performance of the set of machines can be inferred (540) using configuration data for the set of machines. The inferring (540) can be performed using the configuration data and the performance data. Inferring (540) may include, for each of the configuration point(s), identifying machines having a particular setting for that configuration point and determining a performance level for that machine. Inferring (540) may further include producing a value representing a number of machines having a specified setting for a configuration point and a specified performance level.
The inferred effects of the configuration point(s) can be used to determine (550) a baseline set of configuration settings. The technique can further include using (560) the baseline set of configuration settings for one or more computing machines. For example, using (560) the baseline set of configuration settings can include communicating at least a portion of the baseline set of configuration settings to one or more computing machines and/or suggesting at least a portion of the baseline set of configuration settings to one or more computing machines. Using (560) the baseline set of configuration settings can include changing one or more settings in a set of one or more computing machines to match at least a portion of the baseline set of configuration settings.
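A minimal sketch of determining a baseline set of configuration settings from per-machine performance levels; the data layout, the selection rule, and the 80% "good performance" threshold are assumptions for illustration, not part of the disclosure:

```python
def determine_baseline(machines, config_points, threshold=0.8):
    """For each configuration point, choose as the baseline setting the
    one whose machines most often show good performance, provided that
    fraction clears `threshold` (a hypothetical rule)."""
    baseline = {}
    for point in config_points:
        best_setting, best_frac = None, 0.0
        settings = {m["config"].get(point) for m in machines} - {None}
        for s in settings:
            group = [m for m in machines if m["config"].get(point) == s]
            frac = sum(m["perf_level"] == "good" for m in group) / len(group)
            if frac > best_frac:
                best_setting, best_frac = s, frac
        if best_frac >= threshold:
            baseline[point] = best_setting
    return baseline

machines = [
    {"config": {"io_scheduler": "deadline"}, "perf_level": "good"},
    {"config": {"io_scheduler": "deadline"}, "perf_level": "good"},
    {"config": {"io_scheduler": "cfq"}, "perf_level": "poor"},
    {"config": {"io_scheduler": "cfq"}, "perf_level": "good"},
]
print(determine_baseline(machines, ["io_scheduler"]))
# {'io_scheduler': 'deadline'}
```

The resulting baseline could then be communicated to other machines, or individual settings changed to match it, as the text describes.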
The technique may also include changing the baseline set of configuration settings after analyzing additional configuration data and performance data. For example, this may include repeating the steps of identifying (510) periods of stress, grouping (520), identifying configuration point(s) (530), inferring (540) effects, and determining (550) the changed baseline set of configuration settings. The new baseline set of configuration settings can then be used (560). Changing the baseline set of configuration settings may also include collecting additional performance and/or configuration data, such as collecting additional data after the baseline set of configuration settings has been used (560). As is illustrated by the continuous loop in the flowchart of FIG. 5, this analysis and changing of the baseline set of configuration settings may be repeated so that the baseline set can be adjusted as conditions change and/or more data becomes available for analysis.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1
is a block diagram of a suitable computing environment in which one or more of the described embodiments may be implemented.
FIG. 2
is a schematic diagram of a system for inferring effects of configuration on performance.
FIG. 3
is a schematic diagram illustrating analysis of machine representations.
FIG. 4
is a flowchart of a technique for inferring effects of configuration on performance.
FIG. 5
is a flowchart of another technique for inferring effects of configuration on performance.
Let's agree to disagree. If you think a lack of studies in a needed area dispels my evidence, feel free to think that.
There currently exists tangible evidence that the extent of solar particle penetration directly affects decay. Fact. The strength of the magnetic field and air pressure directly affect solar particle penetration. Fact. Science is based on evidence-based theories. Mine is a direct evidence-based theory that the strength of the magnetic field and atmospheric pressure directly affect solar particle penetration, fluctuations which are proven to affect decay. Put your head in the sand if you like; my theory is evidence-based, and science is based on evidence-based theories.
You obviously never understood my logic: a small effect is currently showing small changes, so the logical projection is that a large effect could cause large changes to decay rates. You don't see the logic; oh well.
If that makes you feel confident in radiometric dating methods, I am happy to agree to disagree. I laid out my logic, not expecting you to discard your precious time frames over something as small as logical possibilities.
The effect is minimal on the shorter half-lives, and not measured on the ones relevant to radiometric dating. You are welcome to do your own research; you have been asking me to do your research.
The effect is on many levels: there is the 33-day effect, based on the sun's core rotation; there is the solar flare effect; there is the seasonal effect; the midnight effect. All these variations in solar penetration have a direct effect on decay rates, minimal but detectable on the shorter half-life isotopes. Increase solar penetration slightly and we get a slight slowdown in decay, but what if we DECREASE solar penetration DRAMATICALLY through a stronger magnetic field combined with high air pressures? Could it be possible we get a dramatic increase in decay? Slight change = slight effect, dramatic change = ? The possibilities exist.
The isotopes are silicon-32 (half-life 710 years) and radium-226 (half-life 1600 years). Now that we have discovered decay is not constant, we need studies on variation of long half-life isotopes, for example uranium-235 (700 million years) and uranium-238 (4.5 billion years). Without studying the Purdue effect on those isotopes actually used in radiometric dating, we can never be confident of radiometric dating again. And this is not just an exaggerated hypothesis: the more the solar energy, the LESS the decay. For an isotope losing energy really fast (decay with a half-life of 710 years) the re-energising does have an effect, but a minor one. It is not an unrealistic possibility that for an isotope losing energy really slowly, the re-energising could have a massive effect on the rate of decay.
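For readers who want to see how an inferred age depends on the decay rate, the standard constant-rate decay model can be written down in a few lines. This sketch is an editorial illustration (the isotope and figures are examples), not part of the original post:

```python
import math

def age_from_ratio(parent_fraction, half_life_years):
    """Infer a sample's age from the surviving parent-isotope fraction,
    assuming the standard constant-rate model: N(t)/N0 = 2**(-t/T_half)."""
    decay_const = math.log(2) / half_life_years  # lambda = ln(2) / T_half
    return -math.log(parent_fraction) / decay_const

# With half of the uranium-235 remaining (T_half ~ 704 million years),
# this model dates the sample at exactly one half-life:
print(age_from_ratio(0.5, 704e6) / 1e6)  # ~704 (million years)

# Under this model, scaling the decay constant by a factor k scales
# every inferred age by 1/k; how large k could ever be is exactly the
# question the post raises.
```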
You are correct, I should have said citrates, not nitrates. My apologies if the spelling error caused some confusion.
My reason is... the Purdue effect. Radiometric decay is known to be inconsistent, and the extent of this inconsistency has not been measured accurately with long-life isotopes under all conditions. We shall have to agree to disagree here. And in the absence of reliable scientific data, yes, I will go with the biblical timeframes; at least it is an ancient, dependable book, unlike the constancy of decay rates, which is no longer dependable.
Let us agree to disagree. The Purdue effect was minor on short-life isotopes, but we don't know its effect on long-life isotopes, so the Purdue effect puts great doubt on the dependability of long-life isotopes until the effect is measured. And magnetic fields have to be taken into account, because the Purdue effect is based on changes to the penetration of solar radiation, and magnetic fields have a strong influence on the penetration of solar radiation.
I don't have to prove time-periods, because radiometric dating is in doubt.
You are not facing the fact that the measured changes were for short-life isotopes. What is the Purdue effect on long-life isotopes? You should be curious about this, because it can put the entire concept of radiometric dating into jeopardy. You refer to date consensus; I agree in some cases there is consensus, and therefore radiometric dating is a reasonable measurement of RELATIVE timescales. But no-one actually knows how the Purdue effect affects long-life isotopes, so the actual dates are completely in doubt. In addition, if a sudden slight increase of solar radiation can SLOW decay, what effect will a permanent blockage of much solar radiation have during times of increased magnetic field strength? May I suggest a sure-fire method on how to face these questions? Insist you are correct with no evidence to support your position, and put your head in the sand. It's working for you so far. In the meantime radiometric dating is completely in doubt. Uncertain territory.
With almost no exceptions, every presidential election in the United States of America has been won by the nominee of a major political party. While no one can predict the future, it seems a safe bet to assume that the nominee of either the Democratic or the Republican Party will win the general presidential election this fall. With this assumption in mind, let's look at who the potential nominees are in both parties.
In the Republican primary, four candidates are still in the race: John Kasich, Marco Rubio, Ted Cruz, and Donald Trump. The Republican nominee will be chosen at the Republican National Convention this summer in Ohio. Each state sends a certain number of delegates who, in the first round of voting, are committed to supporting a particular candidate. In the Republican race, if a candidate can accumulate 1,237 pledged delegates from the states before the convention, he or she is guaranteed the nomination. At the present moment, Donald Trump leads with 338 delegates, Ted Cruz has 236, Rubio follows with 112, and John Kasich trails with 27.
In the Democratic primary, the race has narrowed to two candidates: Hillary Clinton and Bernie Sanders. The Democratic process is similar to the Republican process, with the exception that the Democratic Party has substantially more superdelegates, or party officials who are not pledged to follow the will of the voters and can vote at the convention for any candidate they choose. At the present moment, Clinton leads with 607 pledged delegates, while Sanders trails with 412. Clinton has also claimed the verbal support of the vast majority of superdelegates, though if Sanders can win the popular vote and the majority of pledged delegates, it is likely that superdelegates would switch their allegiance to the winner of the popular vote. The Democratic nominee will need 2,383 delegates, including both superdelegates and pledged delegates, to secure the nomination.
By this question, I assume you are asking about the process of determining who will win the 2016 presidential election and why.
Each candidate will go through the primary process in his or her party. The candidate that emerges from his/her caucus/primary process with the majority of that party’s delegates will represent that party in the general election.
In the general election, each candidate will try to win as many states as possible. Each state carries with it a certain number of electoral votes. Candidates will focus on winning states with a large number of electoral votes as well as focusing on certain swing states. Swing states are states that could go either for the Republicans or for the Democrats. The winner of the election will be the candidate that gets at least 270 electoral votes.
If you are asking which person will win the election, I will explain who may emerge from the primary process and what that person will need to do in the general election. This is not an endorsement of any individual or any party.
For the Democrats, either Hillary Clinton or Bernie Sanders will be the candidate. Some key questions for Bernie Sanders are whether he has enough support from people of color and whether he has more than one major issue on which to base his campaign. Hillary Clinton needs to convince voters she is not tied to Wall Street and can represent the interests of the average person. She also needs to appeal to younger voters. My guess is that Hillary Clinton will get the nomination for the Democrats.
For the Republicans, the candidate will most likely be either Donald Trump or Ted Cruz. Donald Trump will need to convince people that he can act presidential; putting other people down or insulting them wouldn't work well if he were president. Ted Cruz needs to convince people that he is not too far to the right. If he is viewed as extreme, this could hurt him. It is hard to predict where Republican support will go once the field is narrowed, but I'm guessing it will be Donald Trump who gets the Republican nomination.
It is my belief that the Democrats will win the election. Voter turnout will be key: if there is a big turnout of voters, this will help the Democrats significantly. I also believe enough people will be frightened by the positions Ted Cruz will take on many issues. He may be viewed as too extreme. Some will be frightened by Donald Trump's blunt responses. They will feel he won't act presidential. Others will question whether the needs of the average person can be understood by either of these Republican candidates.
Please understand this is only a prediction. It is not saying that one candidate or party is better than another candidate or party. My answer is based on polls and comments from political analysts. The real result will be known on election night.
Author(s):
John L. Hardenbergh, William McKendry, William Elliott Griffis & Simon L. Adler
Date Published: 2010/11
Page Count: 176
Softcover ISBN-13: 978-0-85706-396-0
Hardcover ISBN-13: 978-0-85706-395-3
A decisive campaign of the American War of Independence
The fast-moving political situation of the latter part of the eighteenth century in America impacted upon the indigenous Indian tribes of the eastern woodlands as old loyalties and allegiances were fractured by the wars between European powers. The French in North America had but lately been deposed by the British when a new war broke out between the American colonists and the Crown. The Iroquois had remained loyal to the British, but now the Six Nations were divided. Four tribes, the Mohawks, Cayugas, Onondagas and Senecas, remained faithful to their British allies, whilst the Tuscaroras and Oneidas allied themselves to the new nation of the United States. Now Iroquois fought Iroquois. Nevertheless, the power of the four nations, especially operating as guerrilla troops combined with Tory troops and Rangers, represented a substantial threat that could not be ignored. In 1779 Congress decided to break the influence of the Iroquois decisively and forever. General John Sullivan and his troops of the Continental Army embarked on a scorched-earth campaign which destroyed numerous Indian villages and brought the Indians and Tories to defeat at the Battle of Newtown. The action all but put an end to attacks by Loyalists and Indians. The survivors reeled back into Canada, but the hardship caused to the tribes by this crushing defeat resulted in many deaths by starvation and cold in the following winter. This history of the Sullivan Campaign is available in softcover and hardback with dustjacket.
World War II (1939-1945) - 1939: The deadliest conflict in human history officially began on September 1, 1939, when Nazi Germany invaded Poland (although arguably it had begun two years earlier when Japan invaded China; and even Italy, an Axis Power, invaded Ethiopia two years prior to that). The winds of war had been building for much of the previous decade as several of the world's most powerful countries had become dictatorships with an eye on expansion...and revenge. World War I had ended with harsh terms imposed on Germany, and Adolf Hitler was not happy about it. As he rose to power (see Hitler above) he was determined to avenge Germany's loss. He began by rebuilding the country's military in violation of the Treaty of Versailles. He was able to get away with it because much of it was done secretly, and though some of it had been monitored, no one was anxious to get into another war so soon after the previous one. This ambivalence emboldened Hitler to start making more aggressive moves. In March 1936, he sent troops into the Rhineland, another violation of the Versailles Treaty. France and Great Britain responded by largely looking the other way. Austria was annexed in March 1938. This time France and England voiced opposition, but made no move to reverse it. The Allied Powers had decided on a policy of appeasement. So the following month Hitler started making overtures toward the Sudetenland, a region of Czechoslovakia with a predominantly German population. This time the Western nations had to intervene. British Prime Minister Neville Chamberlain met directly with Hitler in September. Hitler made it clear that the Sudetenland must be ceded to Germany or there would be war. Chamberlain gave in and the Munich Agreement was signed. He famously flew back to England, produced the document and proclaimed "peace for our time". That peace lasted less than another year, when Hitler invaded Poland.
But before he did, he secured a non-aggression pact with Stalin of Russia. That was the treaty that gave him the green light on Poland. A full invasion of a sovereign country could not go unanswered, and on September 3rd, Great Britain and France declared war on Germany.
1940: Germany's first campaign of 1940 was into the Scandinavian countries of Denmark and Norway. Denmark fell in a single day and, despite support from the Allied Powers, Norway was defeated in two months. Combat in World War II was the exact opposite of what it was in the First World War. That conflict was marked by trench warfare; grindingly slow. Hitler, on the other hand, rolled over his opponents in a type of fighting known as Blitzkrieg (lightning) warfare. Dissatisfaction among the British public over the ease with which Norway fell forced Neville Chamberlain to resign in May. He was replaced by Winston Churchill. The very same day Churchill became Britain's prime minister, Hitler invaded France. France was one of the Allied Powers and had fought Germany to a standstill in World War I. Furthermore, after that war, it took steps to make sure any future attack by Germany could be halted in its tracks by building a massive fortification that ran the entire length of the border between the two countries. It was called the Maginot Line. It turned out to be a non-factor. Germany simply went around the Maginot Line by invading through Belgium. France had been conquered in six weeks. One of the famous events of the French invasion was the evacuation of about 300,000 troops (both French and British) at Dunkirk. They were ferried across the English Channel by whatever boats were available, both military and civilian. The German army had them surrounded and should have been able to kill or capture them to a man. Prime Minister Churchill called their safe evacuation a "miracle of deliverance". The signing of France's surrender on June 22 was said to be the most satisfying moment of Hitler's life.
He chose the train car, known as the Compiègne Wagon, the very same spot where Germany surrendered in 1918, as the location for the armistice. With the defeat of France, Nazi Germany controlled the whole of Western Europe with the exception of the British Isles (and the Iberian Peninsula, which stayed out of the war). At this point and for the next year, Great Britain would be the sole power fighting the German war machine. In a bold, yet controversial move, Churchill ordered the destruction of the French navy off the coast of Algeria, fearing it would fall into Nazi hands and be used in an invasion of England. It resulted in the deaths of 1,300 French sailors. But to countries like the United States, it demonstrated Great Britain's refusal to surrender or make peace with Germany. In fact, Hitler did offer an armistice with England. He would leave England alone so long as it did not interfere with Germany's conquest of Europe. Churchill refused. And so in July, the Battle of Britain began. Despite continuous air raids by the Luftwaffe over the next several months, Great Britain would not yield. The RAF wore down the Luftwaffe, and by October, Hitler had to abandon his planned invasion of England.
1941: 1941 began with the Axis Powers riding high and the Allies reeling. In fact, Great Britain was the only Allied Power still standing and the Axis were very close to victory. In the East, Japan was conquering virtually unopposed. The island nation had briefly been at war with the Soviet Union, but the two countries made peace after the Battle of Khalkhin Gol (a battle won by the Soviets), leaving Japan with no significant enemies. But things began to change in 1941 based on the Axis Powers' own actions. And, although they would end 1941 with even more territory than they began with, their decisions would ultimately sow the seeds of their own defeat. In Europe, Hitler, frustrated by his inability to break England, turned his attention East and violated his non-aggression pact with Russia. He invaded on June 22, 1941 in a campaign codenamed Operation Barbarossa (named after Emperor Frederick I). Well aware of Napoleon Bonaparte's failed invasion, Hitler was determined not to make the same mistakes. Mainly, instead of going straight for the capital, he divided Soviet territory into three targets and would take all three. In Moscow, Stalin was caught completely off-guard and was said to have suffered some sort of mental or emotional breakdown. The Nazi army initially conquered at will. Within the first two weeks, the Luftwaffe achieved total air superiority and the army advanced about 500 km (311 miles). The Soviet military was finally able to mobilize a defense and meet the Wehrmacht at Smolensk on July 10th. Although they were defeated, they managed to slow the German advance by about two months, which was significant. Besides Moscow, the two other targets for the Nazis were Crimea and the Baltic states. Those objectives were achieved by October, but Moscow still remained. German troops approached the city in November, but the delay at Smolensk proved critical here. The weather had turned cold and the notoriously harsh Russian winter was setting in.
That and the fact that the Soviets were able to fortify Moscow with fresh reserves led to the first defeat of Operation Barbarossa (in fact, it was the first German loss of the war, not counting the cancelled invasion of England). The Battle of Moscow was not only a strategic setback for Hitler, it was also a psychological blow for his troops. They began to recall the French defeat 129 years earlier and wonder if the same fate awaited them. In early December, the Nazis withdrew from Moscow and the Soviets were emboldened and began a counter-offensive. By this time Great Britain and the Soviet Union had entered into an alliance to defeat Germany. Winston Churchill knew well of Stalin's reputation for the murder and banishment of his opponents, but he said he would make a deal with the devil in order to defeat Hitler. Meanwhile, the day after Russia began its counter-offensive against Germany, the Japanese Empire attempted to foil any ideas of the United States to halt Japan's expansion in the Pacific. The Imperial Japanese Navy launched a pre-emptive attack on the US Naval Base at Pearl Harbor in Hawaii on December 7, 1941. Several ships, including five battleships, were sunk, and about 2,500 lives were lost. The next day, President Franklin Roosevelt delivered his famous speech before Congress in which he described the attack as a "date which will live in infamy". An hour after the speech, Congress passed a formal declaration of war against Japan and the United States was pulled into World War II. Although he was hoping Japan would delay its attack, Hitler supported his ally and declared war on the United States on December 11. The other major theatre of war to erupt was in Africa. Mussolini had become resentful of Hitler's success and had been trying to gain ground in British-held North Africa. Although he had made some initial gains, Great Britain began to beat him back and Hitler had to send forces to bail him out.
German troops under the command of Field Marshall Erwin Rommel arrived in Libya in February 1941. By March he launched an invasion that pushed Allied forces back toward Egypt.
1942: With the entrance of both the Soviet Union and the United States into the war, the Axis Powers went from having only one major opponent to having three. The enhanced Allied Powers agreed that the best way to defeat the Axis was one at a time, with Germany being the first target (actually they decided to concentrate on the European theatre which included Italy). However, they disagreed on how to approach the continent. The Americans favored a frontal assault through France, but the British convinced them that it would be too costly at this stage in the war. They favored an indirect attack through Africa and then up through Italy. Eventually they persuaded the Americans of this approach. Axis Europe was compared to a monster that was vulnerable from beneath. And so the initial assault would come through the "soft underbelly" of the continent. In Africa, there was a significant conflict in 1942 which made the "soft underbelly" strategy much more feasible. The 2nd Battle of El Alamein pitted two of the most renowned generals of the war against each other: Rommel versus Bernard Montgomery. Montgomery was victorious (though he had about a 2:1 advantage in men and machine). For the first time, Germany had become over-extended and had to pull back from Africa. It was so important that it caused Winston Churchill to say, "Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning." In the Pacific, a conflict at least equally as important took place at sea. After Pearl Harbor, Japan had about a six-month window in which it had free rein over the ocean (pretty much what Yamamoto had predicted). Despite the amount of damage done at Pearl Harbor, the United States got one fortunate break; on the day of the attack, its aircraft carriers were out on maneuvers and escaped damage. So the bulk of its Pacific fleet was still intact.
Not only that, but in World War II, the aircraft carrier replaced the battleship as the king of the sea. With planes, it had far greater reach than anything else that put to sea. In June, the pivotal Battle of Midway was fought. It featured three US carriers against four Japanese carriers. In the end, all four Japanese carriers were sunk, while the US lost only one carrier. It was a devastating loss for Japan and marked the end of Japanese hegemony in the Pacific. It also marked the end of Japanese expansion and for the rest of the war, it was forced into a defensive posture. Despite being an industrial power, it could not match America's industrial might and Midway proved more costly to Japan than Pearl Harbor was to the United States. Back in Russia, the Soviet counter-offensive launched at the end of 1941 didn't go very far. Germany halted it at the Battles of Kerch Peninsula and Kharkov (2nd Battle of Kharkov technically). When summer rolled around, the Nazis tried to complete the conquest they began the previous year. But by this time the stakes were raised considerably. An extra year in Russia meant Germany was burning through its resources more quickly than it hoped. The mechanized Wehrmacht desperately needed oil. So this offensive included a push toward the Caucasus Mountains in Southern Russia which were rich in oil. There the decisive battle of the Russian campaign, perhaps of the entire European theatre, took place at Stalingrad. It was and remains one of the largest and bloodiest battles in the history of warfare (recall on the timeline that the Battle of Kadesh was the largest chariot battle in history). It ended in a loss for Nazi Germany and was roughly equivalent to what Midway was for Japan. From that point on, Germany began to lose territory and, like Japan, was forced into a defensive posture.
1943: 1943 began with, for the first time in the war, the Axis Powers on defense and the Allies on offense. However, the newcomer to Europe, the United States, was still getting its war legs. When American soldiers first landed in Africa, they were slaughtered at the Battle of Kasserine Pass. Based on this, Germany didn't think the US would make a significant contribution to the Allied effort. But the GIs adapted quickly and soon proved their worth. Germany tried, one last time, to seize the offensive in Russia, but its resources were just too depleted, and the push was short-lived. By July, the Wehrmacht was in retreat on the eastern front, this time for good. Part of the reason was that it had to re-allocate troops to the West after the Allies invaded Sicily in the same month (this was the campaign in which my uncle fought and died - see Robert Craig above). The Sicily campaign would lead to Italy's early exit from the war; and about two weeks after the Allies landed, Mussolini was overthrown and arrested. Also, something new happened: the Allies began bombing the civilian population in Germany. The goal was to break the country's will to fight. July was the month things really started falling apart for the Axis Powers. Russia began its own offensive and started constricting Nazi Germany from the East. The Second Battle of Smolensk, the same location where the Red Army first slowed the Nazi advance two years earlier, resulted in another victory for Russia and accelerated Germany's retreat. Italy surrendered in September when Allied troops reached Italian soil. Sensing for the first time that victory was likely, the three leaders of the Allied Powers met face to face for the first time in Tehran to plan how to knock out Germany and then concentrate on Japan. And speaking of Japan, the Allies gained some unexpected help in the East. The Chinese National Revolutionary Army organized to the point where it managed to defeat Japan at the Battle of Changde.
This surprise victory helped the Allied effort tremendously.
1944: In 1944, the Allies were finally ready to do what the Americans had wanted to do initially upon entering the war: invade France. It was known as D-Day (or the Normandy Landing) and was one of the most memorable events of the entire war. Codenamed Operation Overlord, it was the largest amphibious assault in history. On June 6th, 160,000 soldiers from 13 different nations landed on the beaches of Normandy in order to first expel the Nazis from France, and then push on to Germany (Berlin being the ultimate target). The invasion was a success and the Germans were once again fighting a two-front war. About the same time the Allies were advancing from the West, the Soviet army was clearing the Wehrmacht out of its own territory. By the end of June, it had pushed the Germans out of Belarus, and by the end of July, they were kicked out of Ukraine and back into Poland. On August 25th, Paris was liberated after four years of occupation by the Nazis. September saw the arrival of Russian troops into the Balkans, and in October, the Soviets invaded Hungary. Many of these countries, like Romania and Finland, shifted from the Axis to the Allied side once German soldiers were expelled or they were relieved of Nazi influence. Also at the same time Operation Overlord was advancing, the Allies were capturing territory from Japan in the Pacific. In a strategy which became known as "island-hopping", they targeted strategically important islands that were lightly fortified and bypassed heavily defended ones. With the bulk of their forces in the European theatre, they had limited personnel with which to fight the Japanese. Nevertheless, in June, the Allies launched the Mariana Campaign. They captured Saipan in July and Guam in August. In a naval engagement, the United States defeated the Imperial Navy at the Battle of the Philippine Sea, in which three Japanese carriers were sunk. The loss severely weakened Japan's hold on its empire of islands.
From there, American forces invaded the Philippine island of Leyte, followed by the Battle of Leyte Gulf. It was here that General Douglas MacArthur proclaimed, "I have returned!" (in reference to his vow, "I shall return!", made when he first retreated from the Philippines). The United States had now captured territory putting it within bombing range of the Japanese home islands. Before the year ended, though, Germany launched one last massive assault at the Battle of the Bulge. The attack caught the Allies by surprise and inflicted heavy casualties, particularly on US troops; it was the costliest single battle for the Americans in the entire war. The counter-offensive ultimately failed, however, and was in essence a last-gasp effort on the part of Nazi Germany.
1945: Early in 1945, with victory at hand, the three leaders of the major Allied Powers met again, this time at Yalta. They discussed post-war Europe and the occupation of Germany. Meanwhile, in February, Allied troops reached the borders of Germany itself; the Soviets from the east, the United States and Great Britain from the west. The race was on to see who would make it to Berlin first. Most high-ranking German officers, sensing defeat was inevitable, wanted to surrender to American and British troops, fearing the retribution the Soviets were likely to exact for the invasion of their country. Of course, many Nazi officials were trying to flee Germany altogether, in light of the war crimes tribunals that were likely to follow. By early April, Western Allied troops had crossed the Rhine River and the Soviets had captured Vienna. President Roosevelt died on April 12th, and Harry Truman was sworn in as president. When the news of Roosevelt's death reached Europe, some in Germany took it as the sign of a miracle victory to come, but his death did nothing to slow the Allied advance. The Soviets reached Berlin first, on April 21st, while American and Soviet troops linked up at the Elbe River four days later. On April 28th, Benito Mussolini was executed in Italy. Two days later, Soviet troops captured the Reichstag building and Adolf Hitler committed suicide; his body was incinerated to prevent it from falling into enemy hands. Despite scattered fighting for several more days, the war was essentially over in Europe. Now Japan was the only Axis Power remaining. The United States began firebombing Tokyo in March, killing about 100,000 Japanese civilians, and eventually expanded the campaign to other cities over the next several months, killing close to another 500,000 people. While the bombing campaign was going on, Allied troops were also capturing islands. In March, US Marines won the famous Battle of Iwo Jima, one of the bloodiest conflicts in the Pacific.
It was followed by the Battle of Okinawa. The Allies were preparing for a massive invasion of the Japanese home islands. However, Japan's refusal to surrender in the face of such massive bombardment convinced the Allies that an invasion would be extremely costly. So the United States introduced a new weapon into the war: the atomic bomb. On August 6th, a B-29 Superfortress named the Enola Gay dropped one on Hiroshima. It is estimated that between 70,000 and 80,000 people died instantly, and about that many more eventually succumbed to injuries, primarily radiation burns, resulting from the blast. Japan still did not surrender, so on August 9th, America dropped a second bomb, on Nagasaki. The same day, the Soviet Union declared war on Japan. On August 15th, Japan announced its surrender. This brought World War II to a close. When it was all said and done, upwards of 60 million people had died as a direct result of the conflict, which remains the only war in which atomic weapons were used. | https://historiarex.com/e/en/462-world-war-ii-1939-1945 |
The world goes round with this compact coloring book and its circular images of cityscapes. Over 40 stress-relieving illustrations range from the ancient allure of Cairo's pyramids to modern skyscrapers of Hong Kong, Dubai, Madrid, Miami, and other exciting cities — and they all come in a perfect travel-sized package (5 x 7).
Product Details
ISBN-13: 9780486812762
Publisher: Dover Publications
Publication date: 01/18/2017
Pages: 96
Product dimensions: 5.00(w) x 6.80(h) x 0.60(d)
About the Author
Zen meets zany in the art of David and La Jeana Brodo, who balance tranquility and madness with one foot in reason and another in the outrageous. Lifelong local artists of central Florida, they actively contribute to the artistic needs of the community through their art and unique perspectives, immersing themselves in the sheer enjoyment of creativity. | https://www2.barnesandnoble.com/w/bliss-cities-coloring-book-david-bodo/1123885133?ean=9780486812762 |
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Patent Application No. 63/196,833 filed Jun. 4, 2021.
BACKGROUND OF THE INVENTION
In industries such as aerospace, automotive, construction, and military containers, there is increasing demand for ever-lighter polymer composites, and particularly thermoplastic composites. This trend is due to composites offering high-performance materials with minimal weight in comparison with metal materials, such as high-strength steels.
Moreover, thermoplastic materials are suitable for being re-melted and re-molded into new components, which is not possible when fibers such as glass or carbon are included, as such fibers cannot be melted down. Thermoplastic materials are also suitable to be shredded and reused for lower-performance reinforced polymer composites.
Self-reinforced polymer composites (e.g., self-reinforced plastics and single polymer composites) are fiber-reinforced composite materials in which the fiber reinforcement is a highly oriented version of the same polymer from which the matrix is made.
Self-reinforced polymer composites are manufactured from a variety of different thermoplastic polymers such as polyamide, polyethylene, polyethylene terephthalate, ultra-high density polyethylene, ultra-high density polypropylene, and polypropylene.
Stiffness is a property which is augmented as a result of turning a material into a self-reinforced polymer composite. Strength, heat deflection temperature, and impact performance are all increased while offering little increase in the density of the material. The increase in impact performance is due to interfacial failure between the polymer tapes/fibers and the matrix material around them. This is a failure mechanism which does not exist in virgin unreinforced polymers as obviously there are no tapes/fibers and no interfacial bonds, and thus the materials react as they traditionally would. As with all fiber reinforced composites, these materials gain their properties by transferring loads from the relatively low property matrix material into the high performance reinforcement fibers. Due to the very high level of molecular orientation within the reinforcements of self-reinforced polymer composites resulting from high draw ratios (up to 20 or more for polypropylene), the tape/fiber reinforcement within these materials has vastly higher properties than the unmodified material. Due to this, more traditional failure mechanisms such as tensile failure are delayed due to the transmission of load from the matrix to the tape/fiber reinforcement.
The foregoing and other objectives, features, and advantages of the invention may be more readily understood upon consideration of the following detailed description of the invention, taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
FIG. 1 illustrates a self-reinforced polymer composite.
FIG. 2 illustrates another self-reinforced polymer composite.
FIG. 3 illustrates a textile.
FIG. 4 illustrates a sheet of self-reinforced polymer.
FIG. 5 illustrates a sheet of self-reinforced polymer, an adhesive film, and a textile.
FIG. 6 illustrates an adhered stack of a sheet of self-reinforced polymer, an adhesive film, and a textile.
DETAILED DESCRIPTION
Referring to FIG. 1, one manner of manufacturing the Self-Reinforced Polymer ("SRP") composites includes hot compaction. Hot compaction is a method by which highly oriented polymer tapes are accurately heated. This heating allows approximately 10% of the polymer tapes to melt. With the application of pressure, this molten polymer flows throughout the lattice work of tapes to form a continuous matrix. The sheet is then cooled while still under pressure to solidify the matrix. This process results in a rigid sheet which can then be thermoformed.
Referring to FIG. 2, another manner of manufacturing the SRP composites includes co-extrusion. Highly oriented polymer tapes are extruded from a high melting point grade of the chosen polymer. During this process, a low melting point grade of the same family of polymers is extruded on the surface of the tape. These tapes can then be woven to form a fabric. During post-processing into shaped components, the outer layer of the tapes melts before the inner core of oriented polymer. Under pressure, this low melt grade flows throughout the fabric. On cooling, this low melt grade of polymer re-solidifies to form the composite matrix.
Other techniques may be used to form SRP fabrics.
One type of preferred SRP material includes a woven thermoplastic composite material, of a tape yarn construction, that provides impact resistance and stiffness while having a light weight. Some types of self-reinforced composites and/or polymers may use other types of construction, including for example, crystal extrusions, and traditional thread. The woven thermoplastic composite material preferably includes a multi-layer construction, with an outer layer preferably having a melt point at a lower temperature than a core material sandwiched therein. The multiple layers of the fabric are stacked together and heat and pressure are applied to form a substantially rigid, impact resistant, material. For example, a homogenous glue may be coated on a fiber or tape, and then the fiber or tape is woven together, and then the layers of the fabric are composited through heat and pressure. Some types of the material, for example, may be constructed from a tape with a tensile modulus of 10 GPa or more, a shrinkage at 130 degrees C. of 6% or less, a sealing temperature of 120 degrees C. or more, and/or a denier of 900 or more. A single layer of the fabric preferably has a thickness of less than 1.0 mm. In general, self-reinforced polymeric materials (e.g., self-reinforced composite fabric) may be used, which may include one or more components, with the spatial alignment of the reinforcing phase in the matrix in 1D, 2D, or 3D.
By way of example, the woven thermoplastic composite material may start out as a series of polypropylene (PP) films that form a tape yarn within a polymer matrix—for composite processing—before being woven into fabric. This is then pressed under heat and pressure to form a single piece approximately 0.005 inch (0.13 mm) thick that weighs just 0.02 lbs/sq.ft (0.11 kg/sq.m). Multiple layers are added depending on the desired thickness. The multiple layers are melted together. From there, the sheet can be formed into a variety of shapes using heat and pressure, depending on the mold. Unlike carbon fiber or various glass-type structures, the end result contains no fragment-producing glass; it has high impact resistance and retains strength from around +180 degrees F down to −40 degrees F.
By way of example, the self-reinforced composite materials may include a density (kg/m3) of greater than 800, and more preferably greater than 900. By way of example, the self-reinforced composite materials may include a tensile modulus (GPa) between 3 and 35, and more preferably between 3 and 30. By way of example, the self-reinforced composite materials may include a tensile strength (MPa) of greater than 100, and more preferably greater than 125, and less than 500, and more preferably less than 400. By way of example, the self-reinforced composite materials may include an edgewise notched Izod impact strength at 20 degrees C. (J/m) of greater than 100 and less than 6000, and more preferably greater than 1250 and less than 5000. Also, hybrid SRC composite materials together with carbon or ultra-high molecular weight polyethylene (e.g., 3 to 8 million amu) may be used. By way of example, the UHMWPE powder grade GUR 4120 (molecular weight of approximately 5.0×106 g/mol) may be used to produce an isotropic part of the multilayered sample. The powder may be heated up to 180° C. at a pressure of 25 MPa in a stainless-steel mold to produce 80×10×2 mm3 rectangular samples, with fibers having an average diameter of 15 μm (e.g., 10-20 μm) and a linear density of 220 Dtex (e.g., 150-300 Dtex).
By way of example, Tegris thermoplastic composites (i.e., SRP) provide impact resistance and stiffness using three polymer layers in an ABA construction. The outer, or "A" layer melts at a lower temperature than the core "B" layer. To consolidate, multiple layers of fabric are stacked together and heat and pressure are applied to form a rigid, impact resistant material. For example, for the tape the tensile modulus is typically 14.0 GPa or more, the shrinkage (130 degrees C.) is less than 5.5%, the sealing temperature is 130 degrees C., and the denier is 1020 or more. For example, the fabric typically has a tensile peak load of 720 N or more (160 lbf or more) and an elongation at break of 7.8 percent or less. The consolidated sheet typically has a bulk density of 0.78 or less, a thickness of 0.125 mm/layer, a tensile strength of 200 MPa or more, a modulus of 5-6 GPa, an elongation at break of 6% or more, and a flexural modulus of 5-6 GPa.
The ability to join different components together is fundamental to the assembly of systems from multiple components. Unfortunately, it is known to be problematic to use adhesives to securely join different SRP components together, especially due to the low surface free energy of the SRP component. This limitation is even more acute when attempting to securely adhere a fabric/textile to the SRP component.
Referring to FIG. 3, to overcome the limitations of adhesives, fabric 300 may be secured to the SRP material 400 by sewing through the material stacked together with a strong thread around the perimeter of the SRP material and typically in a checkerboard pattern across the face of the material to maintain the fabric generally close to the SRP material in areas proximate the thread. Unfortunately, even with a relatively dense checkerboard sewing pattern, the fabric material is not securely maintained in place across its face. When using one of hook and loop fabrics (e.g., Velcro®) secured to the SRP material (typically the loop), the other of the hook and loop is secured to a bag or other item (typically the hook), with the pair of fabrics being pressed together to form a connection. With the fabric only secured to the SRP material along the thread lines, the bag or other item will tend to sag and not be maintained in a secure location. Further, the thread tends to be heavy, further weighing down the SRP material and fabric combination.
While a coating may be adhered to the SRP material, it is more preferable to directly adhere a fabric to the SRP material in a manner that provides a sufficiently strong bond. In general, it is desirable to adhere textiles to the SRP material, such as fabrics including woven and non-woven (films) fabrics, knit fabrics, veils, and/or scrims. By way of example, such textiles may be made from polyamide, polyester, polypropylene, polyethylene, Ultra HMWPE, etc.
With the surface energy of the SRP material being sufficiently low making it difficult to suitably adhere textiles to its surface, the surface of the SRP material may be optionally treated to increase its surface energy. The treatment may include a chemical treatment, which in addition to removing contaminants, increases the surface energy of the SRP material. An alcohol based product or a methyl ethyl ketone (C4H8O or CH3COCH2CH3) may be applied, such as using a roller, sponge, or cloth. The chemical treatment is then allowed a sufficient time to dry prior to adhering a textile to its surface.
With the surface energy of the SRP material being sufficiently low making it difficult to suitably adhere textiles to its surface, the surface of the SRP material may be optionally treated to increase its surface energy. The treatment may further or alternatively include a corona treatment (e.g., air plasma) that receives a low temperature corona discharge plasma to impart changes in the properties of the surface. The corona treatment tends to increase the surface energy.
While the treatment of the surface of the SRP material tends to improve its ability to adhere to textiles, it is also desirable that the adhesive be in the form of a film, rather than a free flowing liquid, although a liquid may be used. The film tends to include an optimal matrix of adhesive that is flat, with predictable uniform characteristics, that may be trimmed to a suitable size. The film may include the same adhesive material on both sides, or have one type of adhesive on its first side and another type of adhesive on its second side. With different types of adhesives on each of the sides of the film, the film may be especially suitable for adhering to the SRP material on one side and especially suitable for adhering to the textile on its other side. By way of example, the film may be initially adhered to either the textile or the SRP material, then the combination of which is adhered to the other of the textile or the SRP material. Preferably, due to the temperature gradient between the SRP material (e.g., 230 degrees C.) and the fabric material (150 degrees C.), the film is adhered to the SRP material, and then the combination is adhered to the fabric. Alternatively, a sandwich structure may be formed and the stack of the SRP material, the film, and the textile may be adhered at the same time.
While the use of the surface treatment to the SRP material, if used, tends to improve the adherence characteristics of the SRP material, and the use of a film, if used, further tends to improve the adherence characteristics of the SRP material, the selection of the particular type of adhesive results in a sufficiently secure bond. Upon further reflection, it was determined that SRP materials are constructed from one of several different base materials, such as polyamide, polyethylene, UHMWPE, or polypropylene. To form a sufficiently strong adhesive bond to the SRP material, it was determined that the characteristics of the film should match that of the SRP material. For example, a polyamide based adhesive film should be used for SRP material having a polyamide base. For example, a polyethylene based adhesive film should be used for SRP material having a polyethylene base. For example, a polypropylene based adhesive film should be used for SRP material having a polypropylene base. Upon further reflection, it was determined that having similar chemical characteristics of the adhesive film and the SRP material results in a sufficiently strong bond.
Referring to FIG. 5 and FIG. 6, a SRP material 500, a film 510, and a textile 520 are trimmed to an appropriate size, then the film 510 is used to adhere the SRP material 500 to the textile 520, as illustrated in FIG. 6. As it may be observed, the textile is adhered to the SRP material across its entire surface, thereby maintaining a secure bond between the textile and the SRP material.
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow. | |
About this Research Topic
After nervous system injury, one major goal of neurological rehabilitation is to recover sensorimotor function. For intact sensorimotor function, proprioceptive information from the body's periphery is known to be essential. Yet, the processing of proprioceptive signals is often compromised after traumatic brain injury and stroke, or it becomes increasingly impaired in neurodegenerative diseases such as Parkinson's disease.
This constitutes a major roadblock for neurorehabilitation. Because these patients are unable to use proprioceptive information, their learning or relearning of such basic functions as balance or the fine motor control of their hands is impeded. Thus, to regain motor control it is essential to reestablish the neural loops involved in sensorimotor integration, and more specifically those devoted to proprioceptive-motor processing.
Within the framework of motor relearning and the restoration of motor function, the advent of robotic devices for neurorehabilitation affords new opportunities to aid and enhance the learning environment and to promote proprioception-based motor learning for patients affected by proprioceptive-motor dysfunction. Human-machine interfaces hold great potential to promote functional independence across a wide range of motor disabilities. This potential can be further enhanced by endowing such interfaces with the ability to deliver customized sensory stimuli that aid and enhance proprioceptive processing in humans. Although the next decade will see an increased use of robots in neurorehabilitation, numerous issues will require attention before such robots see widespread use in clinical rehabilitation settings, such as determining the optimal dosage and timing of such interventions for specific motor disease entities, and identifying the underlying mechanisms of neuroplasticity and their limits.
The purpose of this interdisciplinary research topic is to provide a venue to discuss the current state of knowledge on proprioceptive dysfunction and its impact on motor behavior, to determine the relevant knowledge gaps and technological challenges, to identify the necessary lines of future research and, finally, to develop a framework of how new robotic rehabilitation techniques can help to overcome current barriers in treating patients who experience sensorimotor dysfunctions associated with proprioceptive loss.
To comprehensively cover this emergent interdisciplinary area, this Research Topic seeks contributions from experts with diverse backgrounds in biomedical, mechanical and control engineering, haptics, human movement science, neurology, neuroscience, physical therapy, physiology and psychology. To establish a full state-of-the-art of the research in and around this topic we welcome articles covering Original Research, Methods, Hypothesis & Theory and Reviews.
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review. | https://www.frontiersin.org/research-topics/2353/proprioceptive-dysfunction-related-motor-disorders-and-their-neurological-robotic-rehabilitation |
Some patients initially have no symptoms of MDS, and abnormal results from routine blood tests may be the earliest signs of the disease. For patients with symptoms, it is common not to feel well because of the lack of normal, healthy blood cells.
Anemia is a decrease in the number of healthy red blood cells. Red blood cells carry oxygen throughout the body. Anemia may cause symptoms such as
- Fatigue
- Dizziness
- Weakness
- Shortness of breath during normal physical activity
- Headache
- Rapid or irregular heartbeat
- Pale skin
Neutropenia is a decrease in the number of healthy white blood cells. White blood cells help the body fight infection. Neutropenia can lead to patients having fevers and frequent or severe infections. | https://www.lls.org/myelodysplastic-syndromes/signs-and-symptoms |
Q:
Limits for the solution of the non-linear ODE
Consider the ODE
$$y''+y'+y^3=0$$
I need to prove that $$\lim_{x\rightarrow \infty} y(x) = 0$$
and $$\lim_{x\rightarrow \infty} y'(x) = 0.$$
Well, introducing the change of variables such as $x_1=y,x_2=y'$ I get the system of equations nonlinear in $x_1, x_2$. My question is, if I linearize this system around $(0,0)$ and analyze the behaviour of the linearized system there, would I be correct to infer that the behaviour is the same for a nonlinear (original) system? Say, for a solution to the linearized system the limits above hold true. Would they hold true for the original system as well then?
A:
Here is one possible way to derive the result: the related ODE $y'' + y^3 = 0$ can be thought of as describing the path of a particle that moves in a potential $V(y) = \frac{1}{4}y^4$. This ODE is the equation of motion of the Lagrangian $L = \frac{1}{2}y'^2 - \frac{1}{4}y^4$, for which the corresponding Hamiltonian (the energy of the system) is $H = \frac{1}{2}y'^2 + \frac{1}{4}y^4$. The energy is conserved under the evolution: $\frac{dH}{dx} = 0$. The addition of $y'$ in the ODE acts as a friction term which removes energy from the particle, so we would expect the particle to eventually end up at the bottom of the potential with zero energy. This is the physical reasoning behind the result. To prove it, we can multiply the ODE $y'' + y' + y^3 = 0$ by $y'$ and integrate to get an equation for the evolution of $H$:
$$y''y' + y'^2 + y^3y' = 0 \implies \frac{dH}{dx} = -y'^2 \implies H(x) = H(0) - \int_0^xy'(t)^2{\rm d}t$$
This shows that $H$ is decreasing in $x$, and since it is bounded below by $0$ it must converge as $x\to \infty$. Hence $\int_0^\infty y'(t)^2\,{\rm d}t$ is finite; moreover $y'$ is uniformly continuous (boundedness of $H$ bounds $y$ and $y'$, hence $y''$ through the ODE), so Barbalat's lemma gives $\lim_{x\to\infty}y'(x) = 0$. There is only one possible fixed point $(y,y') = (0,0)$ of the corresponding dynamical system $(y,y')' = (y',-y'-y^3)$ for it to converge to, so we must have $\lim_{x\to\infty}y(x) = 0$.
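For readers who want to see this behaviour concretely, here is a quick numerical check (a hand-rolled classical RK4 integrator in Python; the initial condition $y(0)=1$, $y'(0)=0$ and the step size are arbitrary choices, not part of the problem):

```python
# Numerically integrate y'' + y' + y^3 = 0 as the first-order system
# (y, v)' = (v, -v - y^3) with a classical RK4 step, and watch the
# energy H = v^2/2 + y^4/4 decay toward the fixed point (0, 0).
def f(y, v):
    return v, -v - y**3

def rk4_step(y, v, h):
    k1y, k1v = f(y, v)
    k2y, k2v = f(y + h/2*k1y, v + h/2*k1v)
    k3y, k3v = f(y + h/2*k2y, v + h/2*k2v)
    k4y, k4v = f(y + h*k3y, v + h*k3v)
    return (y + h/6*(k1y + 2*k2y + 2*k3y + k4y),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

H = lambda y, v: 0.5*v*v + 0.25*y**4

y, v, h = 1.0, 0.0, 0.01      # arbitrary start: y(0) = 1, y'(0) = 0
H0 = H(y, v)
for _ in range(100_000):      # integrate out to x = 1000
    y, v = rk4_step(y, v, h)

assert H(y, v) < H0                    # energy strictly decreased
assert abs(y) < 0.1 and abs(v) < 0.01  # trajectory approaching (0, 0)
```

Note the slow decay: near the origin the cubic restoring force is weak, so $y$ vanishes only algebraically (roughly like $1/\sqrt{2x}$ from the quasi-static balance $y' \approx -y^3$), not exponentially.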
| |
Perhaps one of the most commonly asked questions in digital photography is around which file type to use when shooting – JPEG or RAW file format. Don’t worry if you don’t know much about these two formats or whether your camera supports them. My goal here is to help you understand what these two types are and help you pick the one that is right for you.
At the very basic level, both JPEG and RAW are types of files that the camera produces as its output. Most of the newer cameras today have both these options along with a few others like M-RAW, S-RAW, Large format JPEG, Small format JPEG, etc. – all of which determines the size of the final output file.
The easiest way to see which file formats are supported by your camera is to review your camera user manual – look for a section on file formats. Or you can go through the menu options of your camera and select Quality (for Nikon) or Image Quality (Canon) to select the file format.
Each file format has its advantages and disadvantages so choose the right option that works best for you. JPEGs are, in reality, RAW files that are processed in camera and compressed into that format. Some of the decisions the camera makes in processing the image may be difficult to change later, but the JPEG file sizes tend to be much smaller.
Let’s look at the advantages and disadvantages of both these file formats in greater detail.
The image on the left (above) was completely blown out because I was in the car and did not have any of my settings correct. But because I photographed in RAW I was able to salvage so much detail in the image. This would not have been possible with a JPG file.
Another old-school way to think about these two file types is as slides and negatives. JPEGs are like slides or transparencies and RAW files are like negatives. With JPEGs, most of the decisions about how the image will look are made before the shutter is pressed and there are fewer options for changes later. But RAW files almost always require further processing and adjustments – just like negatives.
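Much of that editing latitude comes down to bit depth. A rough back-of-the-envelope comparison in Python (these are typical bit depths; actual values vary by camera model):

```python
# Rough arithmetic behind RAW's editing latitude (typical values;
# real bit depths vary by camera): a JPEG stores 8 bits per colour
# channel, while most RAW files record 12- or 14-bit sensor data.
jpeg_levels = 2 ** 8       # 256 tonal levels per channel
raw_12_levels = 2 ** 12    # 4096 levels
raw_14_levels = 2 ** 14    # 16384 levels

# A 14-bit RAW keeps 64x more tonal gradations per channel than a
# JPEG, which is why detail in "blown" highlights can often still
# be pulled back in post-processing.
assert raw_14_levels // jpeg_levels == 64
print(jpeg_levels, raw_12_levels, raw_14_levels)  # → 256 4096 16384
```

Exposure and white-balance adjustments in post essentially re-map these levels, so the extra gradations in a RAW file are what make highlight and shadow recovery possible.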
Now that you understand the difference between RAW and JPEG images, deciding which one to use is dependent on a few different factors.
Some photographers are intimidated by RAW images. I was as well when I had just gotten started in photography because I did not know the true power of a RAW image. However, once I started photographing in RAW there was no going back.
Traditionally, the two main issues with RAW files seem to be fading every day.
There is still the issue of write speed for your camera. If you focus on fast-moving subjects like wildlife or sports photography then perhaps write speed is a key factor in deciding whether to photograph in RAW versus JPEG. So for fast moving objects and/or wildlife and birding photos, JPEG may be a better choice.
Another thing to note is that most of the newer cameras have the ability to capture both JPEG and RAW images at the same time. But this takes up even more storage space and might not be the best use of memory. You are better off just picking one option and sticking with that.
I hope this was helpful in not only understanding the differences between RAW versus JPEG file formats but also in helping you decide which one to choose and why. So tell me, do you belong to the RAW or the JPEG camp?!
If you found this article helpful, be sure to Pin it for future reference.
You broke down these photography tips so well! For me, I would never consider shooting Jpeg again… but that’s because RAW just has so much flexibility and options!
This is such a great read for new and expert photographers alike! I remember when I got my very first canon digital rebel (like 15 years ago) and knowing to shoot in RAW, but not knowing why. Just that if the pros did it I was going to. But you really should understand why and you’ve written such a great answer to the question!
Yep! No brainer. I’m just surprised how many even pro-photographers are still using JPG
Awesome tips and photographer education! My favorite way to shoot is RAW. I thought it would be a scary transition but it was easier than I thought!
Really good article for photographers looking for more information on the benefits of RAW files! I'm RAW all the way too – but this article reminded me why I deal with those giant files, hahah!
Great article for photography education! Very informative and detailed!
I used to shoot JPEG a while ago but completely switched to raw the last few years and it has been the best decision. Gives me so much control in editing.
this is such an important photography tip! shooting raw is a game changer for sure
This is great education about the difference between the RAW and JPEG files! I work with JPEG files all the time for my online interior design projects. I rarely work with RAW files. If you're good with Photoshop, you can edit the photos via the RAW files too. Just like I did with a few professional photos for my office makeover, taken by a professional photographer, to fix the lighting and some spots. It takes a lot of skill to do it. | https://karthikagupta.com/tips-for-choosing-between-raw-versus-jpeg-file-format/ |
Two of Data Explorer's mechanisms to control execution flow through a visual program are the Switch and Route modules. Switch allows you to switch between one or more inputs to drive a single output; Route is the inverse of Switch, having a single input that can be routed to zero, one, or more than one output. Switch is typically used to choose between different paths in a visualization program; for example, to pass an imported data field through either the Glyph module or through Isosurface, depending on user choice or characteristics of the data field itself. Route is typically used to turn off portions of the visualization program; for example, to turn off WriteImage or Export, or to prevent rendering to an image window unless the user chooses to create an image. Switch can be thought of as turning off portions of the visual program logically above Switch; Route can be thought of as turning off portions of the visual program logically below Route. Note that while Route turns off modules that receive its unused outputs, the Collect module is an exception: it runs even if some of its inputs have been turned off by Route.
Figure 19 shows an example of a Switch module controlling whether an Isosurface or a MapToPlane is passed to Image. In a simple data-flow execution model, this Switch module will be executed when its inputs are available (i.e., the results of the Isosurface and MapToPlane modules, and the value of the selector). On execution, the Switch module chooses whether to pass the Isosurface or MapToPlane result to the output based on the selection input to Switch. In the case of a pure data-flow model both the Isosurface and MapToPlane modules execute before the decision as to which will actually be used is known. Since both operations can be computationally expensive, the execution of both of them is very inefficient.
Again, this problem is handled in Data Explorer within the simple data-flow execution model by an examination of the graph prior to execution. If the selection value comes from an external source (e.g., an interactor) and is known a priori, the selection may be performed by a simple transformation of the graph: excising the Switch module altogether and substituting arcs from the selected source (either Isosurface or MapToPlane) to each of the modules that, in the original network, received the result of the Switch module. This leaves the unselected module dangling. It and any of its ancestors that are thereby made unnecessary will not be executed.
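The graph transformation described here can be sketched as follows, assuming a toy encoding of the network as a mapping from each module name to its list of inputs. The encoding and function names are ours, not Data Explorer's, and the selector is handled outside the graph in this sketch.

```python
def excise_switch(graph, switch_node, selector):
    """Excise a Switch whose selector is known a priori.

    graph: {node: [input nodes]}. Consumers of the Switch are rewired to
    the selected source; the unselected branch is left dangling.
    Returns a transformed copy of the graph."""
    sources = graph[switch_node]           # candidate inputs to the Switch
    chosen = sources[selector - 1]         # selector is 1-based
    new_graph = {}
    for node, inputs in graph.items():
        if node == switch_node:
            continue                       # drop the Switch node itself
        new_graph[node] = [chosen if i == switch_node else i for i in inputs]
    return new_graph

def reachable(graph, sinks):
    """Nodes that still feed a sink; dangling branches fall outside this
    set and are therefore never executed."""
    seen, stack = set(), list(sinks)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen
```

Running this on the Figure 19 example with selector 1 rewires Image directly to Isosurface, and a reachability pass from Image then excludes MapToPlane and any ancestors only it needed.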
A different procedure is used if the controlling value is not static (e.g., if it is determined elsewhere in the network), as shown in Figure 20. Suppose either an isosurface or a set of vector glyphs is selected depending on whether the data are scalar or vector. The determination of the type of the data is made using the Inquire module (i.e. at run time). In this case, the selection value for the Switch module cannot be determined before the execution of the graph. Instead, the graph must be evaluated in stages: 1) determine the selection value, 2) determine the necessary input to the Switch module and 3) evaluate the remainder of the graph. Since dynamic inputs may themselves be descended from other non-static inputs (e.g., computed in the network), this process may have to be performed repeatedly. | http://northstar-www.dartmouth.edu/doc/dx/pages/usrgu028.htm |
In the most basic sense, every cruise ship in operation today makes use of a diesel engine, which burns fossil fuel to produce the electricity that powers the ship itself. This includes the lighting, the casino games, the propellers, and even environmental systems like desalination.
The engine systems of typical cruise ships are almost entirely reliant on diesel fuel, which contains less than 0.1 percent sulfur. This fuel is also used to power the ship’s generators and boilers, which together make up the ship’s on-board power system.
What are the types of fuel used on ships?
Defining the many kinds of fuel that are utilized on ships:
1. Options with a low sulphur content. The International Maritime Organization (IMO) has determined that the last reduction in the amount of allowable sulphur that may be found in fuels currently controlled under MARPOL Annex VI will take effect in the year 2020.
2. Distillate fuels.
3. Emulsified fuels.
What type of fuel oil is used to run a boat?
Heavy fuel oil is, without a doubt, the type of fuel oil used most frequently to power commercial vessels. Because of heavy fuel oil’s high viscosity, diesel is typically blended with it so that it can flow. Propulsion systems powered by heavy fuel oil are used on more than 60,000 vessels today.
How many miles per gallon does a cruise ship use?
The amount of fuel a cruise ship uses depends on the size of the ship. The majority of vessels typically manage only between 30 and 50 feet per gallon of fuel. Fuel efficiency also varies according to the kind and size of the ship, the number of people on board, and a number of other parameters.
Though COVID-19 has changed so much about life (telemedicine, anyone?), it hasn’t changed my need to get things done. The way work gets done might be a little different, but I still need to keep track of things, and focus on the projects that matter to me. However, tasks are much more likely to come to me via email, text, Teams, etc., and I am much more likely to be in front of a computer than I used to be. I thought an article about my Everyday Carry Setup (EDC) might be useful. Also, I added a new Apple watch to my EDC, which allows me to have reminders that are hard to ignore, and review my schedule and tasks right on my wrist! As a result, I’ve gone away from using a written planner, and moved to a digital calendar and task list, which helps me capture emailed tasks much easier.
Everyday carry:
- A Hobonichi techo planner and Pilot Acro Drive ballpoint pen, in the Superior Labor’s A6 Peacock Blue Notebook cover: I’m using this to make daily notes, write down quotes that strike me, check a calendar at a glance, and keep a habit tracker. The Acro writes smoothly on the ultra thin Tomoe River paper, and the minimal aesthetic of the Hobonichi and interesting quotes inspire me. I had this planner anyway, and figured when I switched to digital, I might as well use it. I sometimes add little ephemera to my techo- the tiny drawings and treasures that my kids give me throughout the day. I’ve ordered one of the Remarkable 2 devices, but it doesn’t arrive until October, so until then, I need to carry some paper to take quick notes.
- Apple watch, series 5: I had a first generation Apple watch, which had become essentially non-functional. It recently disintegrated (really!), and I opted to upgrade to the Series 5. I am really glad I did: the larger face and updated technology let me see my schedule and tasks, and really make better use of this tool in a way I never did before.
- iPhone (of course): this is where a lot of my data entry happens on the go. I’ll talk about my app set up later.
- iPad and portable Bluetooth keyboard: I use this for data entry, writing longer emails and journal entries, and doing my morning ritual (if my MacBook Air isn’t available).
On my Apple devices:
- Calendars 5 by Readdle: I’ve recently been trying to go with apps that allow you to buy them outright, rather than a subscription model. I was using Fantastical 2 to look at my calendar, the weather, and my tasks in one glance, but this required both a subscription for Fantastical and a subscription for Todoist. Todoist on its own did not have the weather, or the ability to see a calendar at a glance. I had already paid for the app Calendars 5, which also lets me enter dates in natural language, which is much faster for me than a dropdown menu.
- Things 3: Things 3 is a beautiful app that can be as complicated or easy as you need. It also shows events for the day, and you can divide tasks between morning and evening to make the visual processing much easier. I’ve subscribed to a Weather calendar so I can see the weather in the events. Each morning, I review my calendar for the week, and my tasks for the week, and then manage my tasks for the day. I review what tasks are critical for the day, and times I have meetings and clinic. I divide tasks into daily and evening tasks to simplify my daily view. Also, I’ve created two important repeating tasks, which I drag to the top of my list to keep them in my mind throughout the day:
- A repeating task of my monthly goal: this month, it happens to be logging food, intermittent fasting, and exercising 4 times per week.
- A repeating task with Today’s Affirmation and Focus: today, it happens to be “I am mindful of the present moment.” This is a quote I am pondering or something I am striving to emulate for the day.
- Instapaper: I’m wavering between Pocket and Instapaper, but for now, I save studies and articles I am reading to Instapaper. I’ve used an IFTTT formula to save articles that I click “like” on to Evernote to save in case I want to refer to them later.
- Day One app: I use this for journaling, but I have also set up some templates based on the Stoics, that allow me to have an AM and PM reflective process, and let me see what I’m grateful for every day.
- Zero: I am using this to help me remember to do intermittent fasting.
What are you using for your daily carry and apps? Let me know in the comments below! | https://siliconsutra.com/tag/readdle/ |
FIELD OF THE INVENTION
The present invention relates generally to generating curves and, in particular, to a method for generating curves that can be directly implemented by curve forming devices. An associated method for operating a curve forming device is also disclosed.
BACKGROUND OF THE INVENTION
Considerable effort has been dedicated to CAD/CAM technology and, more specifically, to curve generating technology for implementing and driving curve forming devices, including graphic devices such as plotters and cathode ray tube "CRT" devices and machine tools such as milling machines, cutting machines and other computer numerically controlled "CNC" machines. Much of this effort has been directed to providing curve generating tools which are easy for a designer to use and yet allow flexibility for designing a variety of smooth, aesthetically pleasing curves, including both two-dimensional and three- dimensional curves or contours. Additionally, it is desirable to minimize the numerical processing complexity and computer resources associated with curve generation, and to increase processing speed.
Among the more popular curve generating techniques are the Bezier curve and B-spline methodologies. Referring to FIG. 11, the Bezier curve method illustrates some of the attributes and difficulties associated with conventional computer aided curve generation. In one application of the Bezier curve technology, a designer can generate a curve 1 by positioning two pairs of points P.sub.0 and P.sub.3, and P.sub.1 and P.sub.2, on a computer screen. The first pair of points P.sub.0 and P.sub.3 defines the end points for the curve 1. The other pair of points P.sub.1 and P.sub.2 are control points for determining the shape of the curve 1 between the end points P.sub.0 and P.sub.3. Two line segments, P.sub.0, P.sub.1 and P.sub.2, P.sub.3, each connecting one of the end points P.sub.0 or P.sub.3 to one of the control points P.sub.1 or P.sub.2, are sometimes referred to as "handles". Additionally, these two pairs of points P.sub.0 and P.sub.3, and P.sub.1 and P.sub.2, can be considered as defining vertices of a polygon 2 commonly designated the "control polygon", which includes a base 3 defined by the end points P.sub.0 and P.sub.3 and sides 4, 5 and 6 defined relative to the control points P.sub.1 and P.sub.2.
According to this method, the coordinates of the vertices of the control polygon 2 are used to calculate the coefficients of a polynomial that explicitly describes the designed curve 1. The curve 1 is then derived point by point from the polynomial. As a result, the total number of points involved can be restrictively large and unevenly distributed, thereby complicating derivation.
It is a characteristic of the designed curve 1 that it is tangent to the handles P.sub.0, P.sub.1 and P.sub.2, P.sub.3 at the end points P.sub.0 and P.sub.3. An additional characteristic of the designed curve 1 is that the curve 1 lies within the convex hull of the control polygon 2, i.e., the hull-like shape defined by the sides 4, 5 and 6 of the control polygon 2. The shape of the designed curve 1 therefore mimics, to some extent, the shape of the control polygon 2, thereby aiding the designer. Furthermore, the designed curve 1 tends to be smooth and aesthetically pleasing.
Unfortunately, such a Bezier curve cannot be directly implemented by most curve forming devices. This is because the Bezier curve is generally a higher order polynomial and most curve forming devices are capable of forming only certain geometric shapes such as line segments, corners and circular arcs of known radius and length. In this regard, the existing controllers of most curve forming devices and software libraries of most graphical tools are line and arc oriented. Specifically, most controllers use numerical control languages, for example, DXF, TIFF, EIA, HP-GL and IGES, that are based on these simple outputs.
In order to drive conventional curve forming devices, the Bezier curve is normally translated into a series of lines or simple arcs which approximate the Bezier curve. This translation can be performed by first selecting a number of points along the curve to serve as segment end points and then defining line or arc segments to connect the end points. However, this translation is a time consuming process involving complex algorithms and substantial computing resources. In particular, providing the desired transitional smoothness at the points where individual segments join can be a computationally intensive process. Additionally, because the translated curve is an approximation of the Bezier curve it can vary significantly from the originally designed curve depending, for example, on the number of segments used to create the desired curve and the process for translating the higher order polynomial curve to segment instructions to drive the curve forming device. The resulting curve therefore may not match the curve originally selected by the designer. Consequently, the designer loses some degree of control over the final design which can result, for example, in improperly fitted parts of a final product.
SUMMARY OF THE INVENTION
The present invention provides a method for use in generating curves that can be directly implemented by conventional curve forming devices. The need to translate the designed curve into a segmented approximation thereof for implementation is eliminated and the invention allows for formation of curves, using conventional curve forming devices, that match the curve selected by the designer.
It is a particular advantage of the present invention, in a preferred form, that a designer can generate and implement a great variety of smooth, complex and aesthetically pleasing curves using simple input commands, e.g., by defining curve end points and one or more control points (stated differently, by defining a control polygon). Based on these inputs, a curve composed of arc segments, line segments or a combination of both ("segments") can be directly generated. Moreover, the resulting curve can be provided such that it is smooth at connections between contiguous segments. Expressed mathematically, the first derivative of the curve approaching such a connection or transition point from one side equals the first derivative of the curve approaching the connection from the other side. The curve can also be designed to avoid abrupt and displeasing changes in curvature (which is stepwise defined) or, mathematically, to minimize the maximum value of differences in the curve's second derivative. The curve as thus designed will also lie within the convex hull of the control polygon, be tangent to the originally defined control polygon at its end points and thereby mimic the control polygon to provide a simple and intuitive feel for the designer.
According to one aspect of the present invention, a curve is generated by first defining a polygon, which will be referred to in the following description as an "outbound polygon." The outbound polygon is defined relative to curve end points and at least one control point. Between the end points, the sides of the outbound polygon are defined such that at least two isosceles triangles can, in turn, be defined relative to the outbound polygon, wherein the sides of the isosceles triangles coincide with the sides of the outbound polygon. The isosceles triangles thus defined may be the same size or different sizes. The curve for interconnecting the end points is defined relative to these isosceles triangles. It will be appreciated that the outbound polygon differs from the control polygon discussed above which is described by the end points and control point(s).
By way of example, the case involving two end points and one intermediate control point may be considered (the "control triangle example"). The two end points and the control point form a control polygon, in this case a triangle, which includes a base connecting the end points and two sides, each formed by connecting an end point to the control point. A four sided outbound polygon can be defined relative to these points so as to include: a base which coincides with the control polygon's base; a first side which is collinear with one side of the control polygon; a second side which is collinear with the other side of the control polygon; and a third side connecting the ends of the first and second sides and having a length equal to the sum of the lengths of the first and second sides. In such an outbound polygon, the third side can be segmented at an intermediate point so as to define a first segment adjacent to the first side which is equal in length to the first side and a second segment adjacent to the second side which is equal in length to the second side. Two isosceles triangles are thus defined by the intermediate point and the vertices of the outbound polygon, i.e., a first isosceles triangle composed of the first side, the first segment and a triangle base interconnecting the ends of the two, and a second isosceles triangle composed of the second side, the second segment, and a triangle base interconnecting the ends of the two.
A variety of such outbound polygons can be formed relative to the curve end points and the control point(s). Preferably, the outbound polygon is defined so that the third side, as described in the example above, is parallel to the polygon base. As will be understood upon consideration of the description below, such a polygon yields a curve where abrupt changes in curvature are minimized or eliminated and, therefore, aesthetics are enhanced. Additionally, as set forth in detail below, a method analogous to that of the above described control triangle example can be employed for cases involving more than one control point. This is accomplished, for example, by first dividing the resulting control polygon to form more than one such control triangle, and then employing the described control triangle methodology with respect to each resulting control triangle. Other methods are possible, as discussed below, for addressing the case of more than one control point.
According to another aspect of the present invention, a curve connecting two curve end points is generated by defining a polygon, such as an outbound polygon as previously described, and forming the curve as arc segments fitted to the polygon wherein at least one end of the arc segments is located on a side of the polygon. The process involves selecting a location of at least one control point relative to two curve end points. The polygon is then defined relative to the curve end points and the control point(s). That is, the polygon depends in some manner on this set of points such that a variety of polygons can be achieved corresponding to various arrangements of these points, e.g., the polygon can be varied by moving the control point or points. The arc segments for forming the curve are defined relative to the polygon so that at least one end of the arc segments lies on a side of the polygon apart from the vertices thereof. Preferably, at this one end of the arc segments, the corresponding arc segment(s) is tangent to the side of the polygon.
Referring again to the control triangle example discussed above, a piecewise curve can be generated relative to the outbound polygon as two arc segments. The first arc segment extends from a first of the curve end points to the intermediate point and the second arc segment extends from the intermediate point to the second of the curve end points. Preferably, each arc segment is tangent to the third side of the outbound polygon at the intermediate point. In this manner, smooth conjunction of the arc segments at the connection therebetween is achieved. Additionally, the segments are preferably tangent to the polygon sides at the curve end points. The resulting curve thereby mimics the control polygon to some extent. Moreover, tangency of the curve to the control and outbound polygons at the curve end points facilitates smooth interconnection to further curves, i.e., such smoothness can be achieved by simply causing the control polygons of contiguous curves to have collinear sides at the connection point. Complex, smooth shapes can thereby be generated as a series of individual, smoothly interconnected curves.
In the control triangle example, an arc segment can be defined relative to one of the isosceles triangles as follows. A center of curvature and a radius of curvature are selected so that a circle thereby defined includes a curve end point and the intermediate point of the outbound polygon, i.e., the vertices defining the base of the isosceles triangle. The arc segment will then be that portion of the circle between the curve end point and the intermediate point. To ensure arc segment tangency to the outbound polygon, the center of curvature can be selected as the intersection of a first line perpendicular to the third side passing through the intermediate point, and a second line perpendicular to the outbound polygon side including the curve end point and passing through the curve end point. The circle drawn relative to this center of curvature will have a radius equal to the distance from the center of curvature to either of the intermediate point or the curve end point.
In one embodiment, the method of the present invention is used to operate a curve forming device to form a smooth, complex curve, i.e., a piecewise curve formed of arc segments of different curvature having a smooth connection between adjacent segments. The curve forming device is limited, e.g., due to the control system or control language employed, to forming shapes composed of segments selected from a set of geometric elements such as lines, corners and circular arcs. Examples of the types of curve forming devices which can be employed include plotters, milling machines, cutting machines, ink jet or robotic painting units for forming defined shapes, etc. According to the method, a control system having a visual interface is employed to obtain guidance information relative to end points and at least one control point for defining the curve. For example, a mouse, keyboard or other input device associated with a computer can be used to input coordinate information relative to the points, which coordinate information is then reflected on a computer monitor to provide a visual interface.
A smooth, complex curve composed of geometric elements selected from the element set of the curve forming device is then directly derived as a function of the end points and control point(s). This can be accomplished, for example, by using an outbound polygon or polygon side/arc fitting method as described above. The control system is interfaced with a curve forming device, e.g., via software for expressing information regarding the defined curve into a standard operating language format and/or appropriate linking hardware, so as to communicate guidance information to the curve forming device. The guidance information is employed by the curve forming device to form the desired smooth, complex curve. In this regard, the curve forming device may employ additional information, such as information relating to cutting tool offsets due to the swath of the tool, in conjunction with the guidance information for forming the curve.
It is an advantage of the present invention that curves are generated that can be directly implemented by conventional curve forming devices. It is a further advantage of the present invention that smooth, complex curves can be generated quickly and easily without requiring undue processing resources.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention and further advantages thereof, reference is now made to the following Detailed Description taken in conjunction with the drawings, in which:
FIG. 1 illustrates a desired design;
FIG. 2 illustrates a curve generated in accordance with the present invention in relation to a control polygon;
FIG. 3 illustrates a control polygon and an outbound polygon in accordance with the present invention;
FIG. 4 illustrates a method for generating a curve relative to the outbound polygon of FIG. 3;
FIG. 5 illustrates another method for generating a curve according to the present invention;
FIGS. 6-8 illustrate a further method for generating a curve according to the present invention;
FIG. 9 illustrates a still further method for generating a curve according to the present invention;
FIG. 10 is a flowchart of a method for operating a curve forming device according to the present invention;
FIG. 11 illustrates a prior art curve generating method; and
FIGS. 12a-12f show curves generated in accordance with a known method and in accordance with the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The present invention discloses a method for generating curves which is particularly apt for use in connection with interfacing and operating curve forming devices. The present invention is described herein with respect to one such application; namely, operating a CNC machine tool for cutting a design from a workpiece. However, it should be appreciated that the curve generating method and design tool of the present invention is not limited to any such particular application.
Referring to FIG. 1, a partially completed design 10 is shown. In this case, the design 10 is an outline for the letter "a" in a custom or standard font. As will be appreciated from the description below, the present invention allows for substantially real time computer aided design. Accordingly, the design 10 may be displayed on a computer monitor.
It may be desired to produce such a design in connection with a variety of applications including plotting, image coding, printing, driving a CNC machine or other CAD/CAM applications. For purposes of the present description, the design 10 can be considered as defining cut boundaries for driving a machine cutting tool. For example, the tool can be used to cut a base or face plate of a letter shaped housing as is commonly employed for housing neon tubes in connection with neon signs. In such an application, it is desirable for the plate shape to be closely controlled so as to substantially match the shape of a corresponding side panel.
As shown in FIG. 1, for purposes of illustration, the solid lines represent the partially completed design 10 and the phantom lines represent the desired final design. Accordingly, it is desired to define an appropriate curve connecting end points P.sub.0, P.sub.n so as to complete the design, and thereby allow for generation of appropriate machine tool guidance information. As should be appreciated, the present invention would be employed to generate all curves in the intended design 10.
DESIGN OBJECTIVES
In a preferred implementation of the curve generating method of the present invention, a number of design objectives are taken into consideration. First, it is desirable to construct the curve from segments composed of geometric elements which can be directly implemented by a curve forming device. As previously noted, many numerical control languages used in operating curve forming devices utilize only line and circular arc oriented commands. Accordingly, in the preferred implementation of the present invention, the circular arc is taken as the basic constructive unit. The designed curve will thus comprise a piecewise curve of contiguous arc segments. In vector form, the circular arc is given by the equation:
c(φ) = a + r·e^(jφ)    (1)

where a = (a_x, a_y) is the center of the arc, r is its radius and φ is the angular parameter of the arc: φ_0 ≤ φ ≤ φ_n.
As a second preferred design objective, it is desired, for aesthetic and practical reasons, that the piecewise curve be smooth at all points, including at the connections between contiguous segments. In this regard, a piecewise curve of n segments can be defined as a continuous function of a parameter φ (φ_0 ≤ φ ≤ φ_n) as follows:

c(φ) = Σ_{k=1..n} f_k(φ)·c_k(φ)

where f_k = 1 for φ_{k-1} < φ < φ_k and f_k = 0 otherwise (φ_0 < φ_1 < . . . < φ_n).
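As an illustration, the arc of equation (1) and the segment-selection rule for the piecewise curve can be evaluated with Python's complex arithmetic. The tuple encoding of a segment and the function names are assumptions made for this sketch, not part of the patent.

```python
import cmath

def arc_point(a, r, phi):
    """Equation (1): c(phi) = a + r*e^(j*phi), with the arc center a
    represented as a complex number a_x + j*a_y."""
    return a + r * cmath.exp(1j * phi)

def piecewise_curve(segments, phi):
    """Piecewise curve of n arcs. Each segment is (a, r, phi_lo, phi_hi);
    the selector f_k picks the single segment whose parameter interval
    contains phi, as in the piecewise definition above."""
    for a, r, lo, hi in segments:
        if lo <= phi <= hi:
            return arc_point(a, r, phi)
    raise ValueError("phi outside the curve's parameter range")
```

For instance, a single unit arc centered at (1, 0) evaluated at φ = π/2 yields the point (1, 1).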
Mathematically, the requirement of continuity at the connections between segments can then be expressed as:

c_k(φ_k) = c_{k+1}(φ_k) and (dc/dφ)|_{φ_k−} = (dc/dφ)|_{φ_k+}

for all k = 1, 2, . . . , n. The subscripts + and − indicate the derivative values approaching a given point from the right and left, respectively. Abrupt changes in curvature can be reduced or eliminated and enhanced aesthetics can be achieved by minimizing the maximum value of the second derivative deviation over the extent of the curve.
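A sketch of the first-derivative (tangent) continuity check at a connection point follows, using the fact that a circular arc's tangent direction is its radius vector rotated by 90 degrees. The complex-number encoding, orientation convention, and function names are ours, for illustration only.

```python
def unit_tangent(a, point, orientation=+1):
    """Unit tangent of a circular arc, center a, at 'point' on the arc:
    the unit radius vector rotated by 90 degrees. orientation is +1 for
    counter-clockwise traversal, -1 for clockwise."""
    radial = point - a                      # complex arithmetic, as in eq. (1)
    return 1j * orientation * radial / abs(radial)

def smooth_join(a1, a2, joint, o1=+1, o2=+1, tol=1e-9):
    """True when the first-derivative directions of two arcs agree at the
    joint, i.e., the continuity condition stated in the text holds."""
    return abs(unit_tangent(a1, joint, o1) - unit_tangent(a2, joint, o2)) < tol
```

Two unit circles centered at 0 and 2 touch at the point 1, and the join is smooth there exactly when the arcs are traversed with opposite orientations.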
A further preferred design objective is to allow the designer to quickly and conveniently select or modify a curve in a manner which provides an intuitive appeal. Conveniently, the design process can be initiated by inputting design information in the form of curve end points and one or more control points, or otherwise defining a control polygon, using a mouse or other input mechanism. Referring to FIG. 2, an example employing two control points b.sub.1 and b.sub.2 in conjunction with curve end points b.sub.0 and b.sub.3 is shown. To provide the desired intuitive appeal, the designed curve 12 preferably has the following characteristics relative to the input information. First, the curve 12 preferably passes through curve end points b.sub.0 and b.sub.3 to allow for positive definition of the curve limits. Second, the curve 12 is preferably tangent to the control handles, i.e., the line segments b.sub.0, b.sub.1 and b.sub.2, b.sub.3, at the curve end points b.sub.0 and b.sub.3. Third, the curve 12 preferably lies within the convex hull (the shape defined by segments b.sub.0, b.sub.1; b.sub.1, b.sub.2; and b.sub.2, b.sub.3) of the control polygon defined by points b.sub.0, b.sub.1, b.sub.2 and b.sub.3. The curve 12 thus mimics the control polygon to provide an intuitive design feel.
GENERATING THE CURVE
These design objectives can be addressed according to the present invention by generating a curve based on an outbound polygon. As will be understood upon consideration of the following description, the outbound polygon, which is derived from the curve end points and control point or points, lies outbound of the defined curve and, more specifically, circumscribes the defined curve. Thus, in the preferred implementation of the invention, the method for forming a curve generally involves defining an outbound polygon and generating the curve based on the outbound polygon. For illustration purposes, the method will first be described with respect to the case of a single control point and then generalized for the case of more than one control point.
Referring to FIG. 3, a control polygon, in this case a triangle, defined by end points p.sub.0, p.sub.3 and control point p.sub.12 is shown. An outbound polygon p.sub.0, p.sub.1, p.sub.2, p.sub.3 is defined relative to points p.sub.0, p.sub.12, p.sub.3 as follows. The base of the outbound polygon is defined by end points p.sub.0 and p.sub.3. The sides of the outbound polygon are then defined to simultaneously satisfy the following conditions. A first side p.sub.0, p.sub.1 of the outbound polygon is defined such that it is collinear with side p.sub.0, p.sub.12 of the control polygon. A second side p.sub.2, p.sub.3 of the outbound polygon is defined such that it is collinear with side p.sub.12, p.sub.3 of the control polygon. The remaining third side p.sub.1, p.sub.2 of the outbound polygon is defined such that it connects the polygon vertices at p.sub.1 and p.sub.2, wherein the length of side p.sub.1, p.sub.2 (designated ∥p.sub.1, p.sub.2∥) is equal to the sum of the lengths of sides p.sub.0, p.sub.1 and p.sub.2, p.sub.3, i.e., ∥p.sub.1, p.sub.2∥ = ∥p.sub.0, p.sub.1∥ + ∥p.sub.2, p.sub.3∥.
A point t1 can thus be defined on side p1,p2 such that the following conditions are satisfied:
∥p0,p1∥ = ∥p1,t1∥ and
∥t1,p2∥ = ∥p2,p3∥.
Stated differently, t1 in conjunction with the vertices of the outbound polygon defines two isosceles triangles, namely p0,p1,t1 and t1,p2,p3, where the equal-length sides of the isosceles triangles coincide with the sides of the outbound polygon.
It is an intrinsic property of the outbound polygon that a number of possible line segments p1,p2 exist which satisfy the conditions stated above. These possible line segments can be defined relative to the determination of x = ∥p2,p3∥, using l = ∥p1,p12∥ as a parameter as follows: ##EQU3## where L = ∥p0,p12∥ + ∥p3,p12∥. FIG. 3 represents a general case.
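The intrinsic property above can be checked numerically: for a chosen l, the matching x = ∥p2,p3∥ is the root of a one-dimensional equation, which a simple bisection finds. The following is an illustrative sketch, not the patent's closed-form expression; the point names follow the text, while the helper names and the concrete triangle are assumptions chosen for the example.

```python
import math

def dist(a, b): return math.hypot(a[0] - b[0], a[1] - b[1])
def lerp(a, b, t): return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def solve_outbound(p0, p12, p3, l, iters=80):
    """Given l = ||p1,p12||, find x = ||p2,p3|| such that
    ||p1,p2|| = ||p0,p1|| + ||p2,p3|| (the outbound-polygon condition)."""
    a, b = dist(p0, p12), dist(p12, p3)
    p1 = lerp(p0, p12, (a - l) / a)          # p1 on side p0,p12 with ||p1,p12|| = l
    f = lambda x: dist(p1, lerp(p3, p12, x / b)) - ((a - l) + x)
    lo, hi = 0.0, b                          # assumes f(lo) > 0 > f(hi) (sign change)
    for _ in range(iters):                   # plain bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    p2 = lerp(p3, p12, x / b)
    return p1, p2, x

p0, p12, p3 = (0.0, 0.0), (1.0, 1.5), (3.0, 0.0)
p1, p2, x = solve_outbound(p0, p12, p3, l=0.5)
# Verify the defining condition ||p1,p2|| = ||p0,p1|| + ||p2,p3||:
assert abs(dist(p1, p2) - (dist(p0, p1) + x)) < 1e-9
```

Varying l sweeps out the family of possible third sides p1,p2 that the text describes.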
Referring to FIG. 4, once the outbound polygon p0, p1, p2, p3 is thus defined, it is possible to construct a continuous curve connecting end points p0 and p3 which is tangent to side p0,p1 at p0, tangent to side p2,p3 at p3 and tangent to p1,p2 at t1. As illustrated, this curve is composed of a first circular arc segment 14 connecting p0 and t1 and a second circular arc segment 16 connecting t1 and p3. Defining these arc segments 14 and 16 involves determining a center of curvature and a radius for each segment. The centers of curvature 18 and 20 for the arc segments 14 and 16, respectively, can be defined relative to line 22, which is perpendicular to side p0,p1 and passes through p0; line 24, which is perpendicular to side p1,p2 and passes through point t1; and line 26, which is perpendicular to side p2,p3 and passes through point p3. Centers of curvature 18 and 20 are thus defined by the intersections of lines 22 and 24 and of lines 24 and 26, respectively. The radius of arc segment 14 can be calculated as the distance from center of curvature 18 to either p0 or t1. Similarly, the radius of arc segment 16 can be calculated as the distance from center of curvature 20 to either point t1 or p3.
From the foregoing, it will be appreciated that the resulting curve, composed of segments 14 and 16, is tangent to the handles p0,p12 and p3,p12 at the curve end points p0 and p3 in accordance with the stated design objective. In addition, cotangency of the arc segments 14 and 16 at the connection point t1, and, hence, smoothness of the curve, is ensured due to the tangency of each arc segment 14 and 16, individually, to side p1,p2 at t1. The resulting curve is thus circumscribed by the outbound polygon, contacts the outbound polygon at the three points p0, t1 and p3 and lies within the convex hulls of both the outbound polygon and the control polygon.
FIG. 5 illustrates a special case of the previously mentioned number of possible third sides, i.e., side p1',p2', and the resulting outbound polygon. In this case side p1',p2' is defined so as to be parallel to base p0,p3 in addition to satisfying the outbound polygon characteristics noted above. In this regard, the location S of side p1',p2' relative to control polygon p0, p12, p3 can be determined as S = ∥p0,p3∥ / (∥p0,p12∥ + ∥p12,p3∥ + ∥p0,p3∥). Using the ratio S, points p1' and p2' can thus be determined as p1' = p0 + S(p12 − p0) and p2' = p3 + S(p12 − p3).
Arc segments 14' and 16' can then be produced using the same method as described above. That is, center of curvature 18' is defined as the intersection of line 22', drawn perpendicularly to side p0,p1' at p0, and line 24', drawn perpendicularly to side p1',p2' at t1; and center of curvature 20' is defined as the intersection of line 24' and line 26', drawn perpendicularly to side p2',p3 at p3. This special case of side p1',p2' and the corresponding outbound polygon has the advantage of providing minimal curvature deviation as between the resulting arc segments 14' and 16', thereby satisfying another stated design objective.
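The special case admits a compact implementation. The sketch below is an illustration with assumed helper names, not production code; it uses the ratio S = c/(a+b+c), which follows from requiring side p1',p2' to be parallel to the base while satisfying the outbound-polygon length condition. The assertions check that each computed center is equidistant from the two points its arc must pass through (i.e., that a tangent circle exists).

```python
import math

def dist(a, b): return math.hypot(a[0] - b[0], a[1] - b[1])
def lerp(a, b, t): return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
def perp(v): return (-v[1], v[0])

def intersect(P, u, Q, v):
    """Intersection of lines P + s*u and Q + t*v (2x2 linear solve)."""
    det = u[0] * (-v[1]) - (-v[0]) * u[1]
    s = ((Q[0] - P[0]) * (-v[1]) + v[0] * (Q[1] - P[1])) / det
    return (P[0] + s * u[0], P[1] + s * u[1])

def biarc(p0, p12, p3):
    a, b, c = dist(p0, p12), dist(p12, p3), dist(p0, p3)
    S = c / (a + b + c)                               # parallel-side ratio
    p1 = lerp(p0, p12, S)                             # p1' = p0 + S(p12 - p0)
    p2 = lerp(p3, p12, S)                             # p2' = p3 + S(p12 - p3)
    t1 = lerp(p1, p2, dist(p0, p1) / dist(p1, p2))    # ||p1',t1|| = ||p0,p1'||
    side = (p2[0] - p1[0], p2[1] - p1[1])
    # Centers 18' and 20' as intersections of the perpendiculars in the text:
    c18 = intersect(p0, perp((p12[0] - p0[0], p12[1] - p0[1])), t1, perp(side))
    c20 = intersect(p3, perp((p12[0] - p3[0], p12[1] - p3[1])), t1, perp(side))
    return t1, (c18, dist(c18, p0)), (c20, dist(c20, p3))

t1, (c18, r1), (c20, r2) = biarc((0.0, 0.0), (1.0, 1.5), (3.0, 0.0))
assert abs(dist(c18, t1) - r1) < 1e-9    # arc 14': same radius to p0 and t1
assert abs(dist(c20, t1) - r2) < 1e-9    # arc 16': same radius to p3 and t1
```

Sampling points along the two arcs (from p0 to t1 around c18, then t1 to p3 around c20) yields the smooth curve of FIG. 5.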
Having thus described the method for generating curves of the present invention with respect to the basic case of a single control point, the method will now be generalized with regard to the use of more than one control point, e.g., two control points. Two such methods will be described as exemplary of the many possibilities. Each of these methods is generically applicable to cases involving more than two control points.
Referring to FIGS. 6-8, a first method for addressing the two control point case is described by reference to a control polygon defined by curve end points b0 and b3 and control points b1, b2. According to this method, a point t2 on side b1,b2 is first selected as shown in FIG. 6, e.g., in such a way that side b1,b2 is divided proportionally to the lengths of the adjacent sides of the control polygon. Lines can then be drawn from each of the end points b0 and b3 to point t2, thereby dividing control polygon b0, b1, b2, b3 into two triangles b0,b1,t2 and t2,b2,b3. The previously described methodology can then be employed to define an outbound polygon for each of the resulting triangles, i.e., outbound polygons p0,p1,p2,t2 and t2,p3,p4,p5, thereby defining the overall outbound polygon p0, p1, p2, p3, p4, p5. Points t1 and t3 can then be defined, as previously described, on sides p1,p2 and p3,p4, respectively, thereby defining a series of isosceles triangles: p0,p1,t1; t1,p2,t2; t2,p3,t3; and t3,p4,p5.
Referring to FIG. 7, centers of curvature and radii can be determined as described above for four arc segments which collectively form a curve connecting curve end points b0 and b3. Specifically, a first arc segment connecting points p0 and t1 is defined by center of curvature a1 and radius r1; a second arc segment connecting points t1 and t2 is defined by center of curvature a2 and radius r2; a third arc segment connecting points t2 and t3 is defined by center of curvature a3 and radius r3; and a fourth arc segment connecting points t3 and p5 is defined by center of curvature a4 and radius r4. As shown in FIG. 8, these arc segments thus form a continuous curve 28 connecting end points b0 and b3 which generally mimics the original control polygon b0, b1, b2, b3. It will be appreciated that for cases involving more than two control points, the control polygon can similarly be divided into a number of triangles which, in turn, can be used to define corresponding outbound polygons.
Referring to FIG. 9, a second method for addressing the case of more than one control point is illustrated. For a given control polygon, this method generates a family of curves of different orders, i.e., wherein the associated outbound polygon has a different number of sides and, consequently, the curve is formed from a different number of arc segments. As the order of the curve is increased (the number of segments used to form the curve is increased), the curve converges on a Bezier curve defined by the same control polygon. Additionally, the resulting curve automatically provides the minimal divergence of curvature as between the arc segments forming the curve.
The family of curves is parameterized with an integer parameter n (n = 2, 3, ...) which defines the curve order. For each interval i (wherein i = 1, 2, ..., n) the parameters s = (i−1)/n and q = i/n are defined and two lines are constructed. These two lines are used to define two segments of the outbound polygon sides. By repeating this process for i = 1, 2, ..., n, the complete outbound polygon is constructed.
FIG. 9 shows the geometric construction for the case of n = 4 and i = 2 for purposes of illustration. A first line and outbound polygon segment end point are determined by:
1) defining point d1 by dividing side b0,b1 by factor s such that d1 = b0 + s(b1 − b0);
2) defining point d2 by dividing side b1,b2 by factor s such that d2 = b1 + s(b2 − b1);
3) defining point d3 by dividing side b2,b3 by factor s such that d3 = b2 + s(b3 − b2);
4) defining point d4 by dividing d1,d2 by factor s such that d4 = d1 + s(d2 − d1);
5) defining point d5 by dividing d2,d3 by factor s such that d5 = d2 + s(d3 − d2);
6) drawing line 1, L1, by connecting points d4 and d5; and
7) defining segment end point t1 by dividing d4,d5 by factor s such that t1 = d4 + s(d5 − d4).
A second line and outbound polygon segment end point are determined by:
1) defining point g1 by dividing side b0,b1 by factor q such that g1 = b0 + q(b1 − b0);
2) defining point g2 by dividing side b1,b2 by factor q such that g2 = b1 + q(b2 − b1);
3) defining point g3 by dividing side b2,b3 by factor q such that g3 = b2 + q(b3 − b2);
4) defining point g4 by dividing g1,g2 by factor q such that g4 = g1 + q(g2 − g1);
5) defining point g5 by dividing g2,g3 by factor q such that g5 = g2 + q(g3 − g2);
6) connecting points g4 and g5 to form line 2, L2; and
7) defining segment end point t2 by dividing g4,g5 by factor q such that t2 = g4 + q(g5 − g4).
Lines L1 and L2 are then intersected at p2 so as to define the shaded triangle t1,p2,t2 and outbound polygon segments t1,p2 and p2,t2. By repeating this process for each of i = 1, 2, ..., n, a complete outbound polygon is constructed. Finally, the curve is generated relative to the completed outbound polygon as previously described. It will be appreciated that this technique can be employed for any number of control points.
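The seven-step point construction above, with the same factor applied at every level, is exactly de Casteljau's algorithm for evaluating a cubic Bezier curve at that parameter, which is consistent with the family of curves converging on the Bezier curve as n grows. A minimal sketch (helper names are assumptions; d1 through d5 and t follow the text) checks this numerically against a direct Bernstein-form evaluation:

```python
def lerp(a, b, t): return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def construction_point(b0, b1, b2, b3, s):
    """Steps 1-7 of the text: nested division of the control polygon by factor s."""
    d1, d2, d3 = lerp(b0, b1, s), lerp(b1, b2, s), lerp(b2, b3, s)   # steps 1-3
    d4, d5 = lerp(d1, d2, s), lerp(d2, d3, s)                        # steps 4-5
    return d4, d5, lerp(d4, d5, s)                                   # line d4,d5; step 7

def bezier(b0, b1, b2, b3, s):
    """Direct Bernstein-form evaluation of the cubic Bezier curve."""
    u = 1.0 - s
    return tuple(u**3 * p + 3*u**2*s * q + 3*u*s**2 * r + s**3 * w
                 for p, q, r, w in zip(b0, b1, b2, b3))

b = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
for i in range(1, 5):                  # n = 4; s = (i-1)/n as in the text
    s = (i - 1) / 4
    _, _, t = construction_point(*b, s)
    bz = bezier(*b, s)
    assert abs(t[0] - bz[0]) < 1e-12 and abs(t[1] - bz[1]) < 1e-12
```

Each segment end point t therefore lies on the Bezier curve itself, while the intersections of consecutive lines L1, L2, ... supply the outbound polygon vertices between them.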
OPERATING THE CURVE FORMING DEVICE
The curve generated as described above is composed of circular arc segments and is therefore amenable to implementation by curve forming devices employing conventional control languages which understand only arc and line oriented commands. Accordingly, it is unnecessary to translate the curve into a form amenable to such implementation. Rather, the curve data can be directly converted to a CAD/CAM standard format and downloaded to the controller of the curve forming device.
In the plate cutting embodiment of the present invention, the design data, which is ordinarily comprised of a series of interconnected curves, is provided to the controller of a CNC cutting machine. The machine then cuts the appropriate shape based on the design data in conventional fashion, with appropriate provisions made for offsets due to cutting swath and the like.
FIG. 10 shows a flow chart of a method for operating a curve forming device according to the present invention. The method is initiated in step 100 by selecting end points for the curve. Where the curve forms a portion of an overall design (see FIG. 1), the end points may be determined by adjacent design portions. In step 105, one or more control points are entered thereby defining a control polygon. Because the resulting curve mimics the control polygon to some extent, an experienced designer will acquire an intuitive feel for positioning the control points. A curve generating method as described above is employed in step 110 to generate a curve based on the end points and the control point(s).
The designer can then view the resulting curve on a computer monitor (step 115) and determine whether the curve is acceptable (step 120) based on predetermined design considerations. As indicated in FIG. 10, the control points can be repositioned if the curve is unacceptable. It will be appreciated that, assuming adequate processing speed, this feedback process can be accomplished substantially instantaneously, for convenient iterative and interactive design definition. Once an acceptable curve is achieved, in step 125 the curve is expressed in a standard numerical control language, such as DXF, TIFF, EIA, HP-GL or IGES, using known techniques. Corresponding design information is then provided to the controller of a curve forming device such as an NC cutting machine in step 130. Finally, in step 135, the curve forming device is operated to form the curve based on the design information.
FIGS. 12a-12f show the results of curve production using a known technique and using the present invention. All of the Figures were produced using MICROSOFT WORD 6.0.
FIGS. 12a and 12b illustrate a curve produced using the commercially available CORELDRAW product. This product allows for curve derivation using a Bezier curve technique and subsequent translation into a standard operating language format for CAD/CAM applications. The illustrated curve is a piecewise line segment approximation of a Bezier curve translated to DXF format as is common for operating CNC machines and graphic devices. In this illustrated example, the translated FIG. 12a curve is composed of 32 line segments and therefore requires a relatively large number of control commands for implementation. FIG. 12b shows a portion of the curve of FIG. 12a magnified by a factor of 6 and includes 8 line segments. As is apparent, despite the relatively large number of segments employed, the resulting curve is not smooth. This curve production technique can therefore be problematic for certain applications.
FIG. 12c shows a similar curve (of order 2) produced according to the present invention employing two control points. As shown, the resulting curve, which is composed of circular arc segments, is smooth. The appearance of the curve does not change when the curve is expressed in a standard CAD/CAM format. Moreover, implementation of the curve requires minimal control commands; four commands for arc production in this case.
FIGS. 12d-12f illustrate similar curves of order 4, 20 and 40, respectively, produced according to the present invention. Also shown are the control polygon and the radii connecting the centers of curvature for each arc with the arc ends. As shown, each of the resulting curves is smooth. Moreover, the locus of the curvature centers of the arcs defines a smooth line. It will be appreciated that the elegant appearance of the construction geometry is not merely fortuitous but is a reflection of the stated design objective of avoiding abrupt changes in curvature. In practice, low order curves are sufficient for many applications.
While various embodiments of the present invention have been described in detail, it is apparent that further modifications and adaptations of the invention will occur to those skilled in the art. However, it is to be expressly understood that such modifications and adaptations are within the spirit and scope of the present invention.
John Hare OBE FRGS, explorer, conservationist and author, was, in 1957, the very last recruit into Her Majesty's Overseas Administrative Service in Northern Nigeria.
Kenya
Later John Hare worked in Kenya for the United Nations Environment Programme (UNEP). During this time he undertook a number of expeditions into remote parts of northern Kenya, travelling all the time with camels and frequently alone. This kindled a life-long passion for camels.
The Wild Camel
In 1993, he took advantage of a chance offer from a Russian scientific team to research the status of the wild camel in Mongolia – the 8th most endangered large mammal in the world. The wild camel is a critically endangered species numbering no more than 1000, and only survives in the Gobi desert in China and Mongolia. Presenting his research findings in 1994 at an international conference in Ulaan Baator, John Hare received, in 1995, permission to enter the former nuclear test site of China, where the wild camel survives. No foreigner had been allowed to enter this vast salt water desert for 45 years. It is here that the wild camel, having survived 43 atmospheric nuclear tests, is also able to tolerate salt water with a higher salt content than sea water.
The Gobi Desert – China’s Nuclear Test Site
In 1995 and 1996 John Hare became the first foreigner to cross the Gashun Gobi Desert in China from north to south and to reach the ancient city of Lou Lan from the east. John Hare and his team discovered a hitherto unknown outpost of Lou Lan called Tu-ying on the Middle Silk Road. In 1999, on another expedition mounted on camels, John Hare's team discovered two unmapped valleys deep in the Gobi sand dunes, which contained wildlife that had never seen or experienced man.
The Wild Camel Protection Foundation and the Lop Nur Wild Camel National Nature Reserve
In 1997, John Hare and Kathryn Rae founded the Wild Camel Protection Foundation (WCPF), a UK registered charity of which Dr. Jane Goodall DBE is the Life Patron. Having raised funding, he put forward proposals with the WCPF co-managing trustee, Kathryn Rae, for the establishment of the Lop Nur Wild Camel National Nature Reserve in Xinjiang Province in the former nuclear test site, to which the Chinese government agreed. Measuring 155,000 square kilometres, almost the size of Bulgaria or Texas, the reserve is one of the largest in the world, and the WCPF became responsible for helping the Chinese to establish it, protecting not only the wild Bactrian camel but many other IUCN Red Book listed endangered fauna and flora. John Hare is the sole international consultant for the Reserve.
Crossing the Sahara
In 2001/2002 Hare crossed the Sahara Desert from Lake Chad to Tripoli, a journey of 1500 miles, which lasted three-and-a-half months, to raise awareness for the wild camel. This route had not been followed in its entirety by a foreigner since Sir Hans Vischer negotiated it 100 years earlier. This journey was undertaken to raise funding and awareness of the plight of the wild Bactrian camel.
Around Lake Turkana (Rudolph) with Camels
In 2006 John Hare made the first recorded complete circumambulation by camel of Lake Turkana (Rudolph) in Kenya. At the northern tip of the lake this involved swimming 22 camels across the fast-flowing River Omo in Ethiopia. In 2005 and 2006 he made two more expeditions into the Chinese and Mongolian Gobi desert on domestic Bactrian camels.
The Captive Wild Camel Breeding Centre in Mongolia
In 2004 the WCPF established the Hunter Hall Captive Wild Camel Breeding Centre at Zakhyn Us in Mongolia with twelve wild camels, which had been captured by Mongolian herdsmen. This is the only place where the wild camel is held in captivity apart from two zoos in China and in 2010 the population had increased to twenty-five. With advice from the Zoological Society of London (ZSL), there is a plan to undertake the first release of the captive wild camels back into the Gobi desert.
Skills:
Mandatory (Strong design, admin and development skills):
- 7+ years of experience installing, maintaining and developing mobile apps and SDKs.
- Strong knowledge of Android SDK, different versions of Android, and how to deal with different screen sizes
- Should have published mobile apps in the Google Play Store.
- Familiarity with REST & JSON to connect Android applications with remote back-end services
- Strong knowledge of Android UI design principles, patterns, and best practices
- Must have cross-platform mobile application development experience with Xamarin, React Native, etc.
- Experience with offline storage, threading, and performance tuning
- Good Knowledge in handling the BLE/NFC handshakes
- Ability to design applications around natural user interfaces and controls such as “touch”
- Familiarity with the use of additional sensors, such as gyroscopes and accelerometers
- Knowledge of the open-source Android ecosystem and the libraries available for common tasks
- Ability to understand business requirements and translate them into technical requirements
- Familiarity with cloud message APIs and push notifications
- A knack for benchmarking and optimization
- Understanding of Google’s Android design principles and interface guidelines
Other – Desired experience:
- Excellent exposure to Java, J2EE & Restful Web services
- Good grasp of Cloud Technology Stack, Preferably Amazon Web Services.
- Good knowledge on Agile Methodologies like Scrum, Kanban and XP Practices.
- Exposure to CI tools like Jenkins, Team City is highly desirable.
- Previous experience in TDD, BDD practices is preferred
- Comfortable learning new technology stacks as per business needs.
Bulimia nervosa following psychological and multiple child abuse: support for the self-medication hypothesis in a population-based cohort study.
To unravel the complex role of child abuse as a risk factor for bulimia nervosa (BN), from the perspective of the self-medication hypothesis which asserts that in abused BN cases binge eating is primarily a way of coping with the anxiety or mood disorders that stem from the abuse. In a population-based study (N = 1,987) DSM-III-R diagnoses were assessed with the CIDI. Differences in exposure rates to child abuse between BN cases versus healthy, psychiatric, substance use, and dual diagnosis controls were employed to test the self-medication hypothesis. A history of psychological or multiple abuse was found to be a specific risk factor for dual diagnosis disorder (cases with psychiatric and substance use disorders) and for BN. Nearly all BN cases that experienced multiple or psychological child abuse showed such comorbid anxiety or mood disorders. We found tentative support for the self-medication hypothesis.
New study shows THC Could Treat Deadly COVID Infections
Preliminary research out of Canada has already shown some promise for CBD in the treatment of severe coronavirus infection. Now, a new study out of the University of South Carolina shows THC could also be of benefit.
From The State:
The studies, co-published by Prakash Nagarkatti, found THC, the most potent mind-altering chemical in cannabis, can — in mice — prevent a harmful immune response that causes Acute Respiratory Distress Syndrome (ARDS) and cause a significant increase in healthy lung bacteria.
The studies, published in Frontiers in Pharmacology, the British Journal of Pharmacology and the International Journal of Molecular Sciences, were conducted by giving mice a toxin that triggered the harmful immune reaction that causes ARDS and then injecting mice with THC, according to the studies’ abstracts.
The research here involved three separate studies and dozens of experiments. 100% of the mice given THC survived. The success rate was so high that the study's lead researcher recommended trials begin in humans.
The controversial issue of transparency between brand marketers and their advertising agencies appears to be at, or very near, a tipping point — but the journey has been slow and torturous.
At the heart of the long-simmering dispute has been the extreme lack of transparency around media buying and the nature of the relationship between agencies and the brands they (are supposed to) serve. This was detailed at length in the ANA/K2 Intelligence report released two years ago. But transparency is just a symptom, says attorney Douglas Wood, a partner in Reed Smith LLP and the leader of the firm's advertising and marketing law practice. "The real issue is trust, because the law can only do so much," he says.
At a certain point, contracts designed to enforce transparent practices on agencies serving clients can become "overbearing," says Wood, who is also the ANA's general counsel. When that happens, he says the contract "doesn't help build a relationship, it's actually destructive to relationships. That's not an outcome anyone wants, but the less you trust someone, the more you need to rely on the contract to provide a substitute for trust."
UNDERGRADUATE PROGRAM OF LAW
- The assessment of each course includes 4 (four) components:
| Component | Weight |
|---|---|
| Presence | 10% |
| Task (Independent & Group) | 20% |
| UTS (Mid Semester Exam) | 20% |
| UAS (Final Semester Exam) | 45% |
| TOTAL | 100% |
- Learning uses a credit system; the learning periods are the odd semester, the even semester and the intensive semester. Learning activities comprise face-to-face sessions and structured assignments. The exams consist of the Mid-Semester Examination (UTS), the Final Semester Examination (UAS) and the Comprehensive Examination (thesis proposal presentation and thesis defense presentation). Grading is as follows:
| Grade | Range | Grade Point |
|---|---|---|
| A | 80 – 100 | 4 |
| A- | 78 – 79 | 3.7 |
| B+ | 74 – 77 | 3.3 |
| B | 70 – 73 | 3 |
| B- | 65 – 69 | 2.7 |
| C+ | 60 – 64 | 2.3 |
| C | 55 – 59 | 2 |
| C- | 50 – 54 | 1.7 |
| D+ | 45 – 49 | 1.3 |
| D | 40 – 44 | 1 |
| E | < 40 | 0.00 |
| TL | Tidak Lengkap / Incomplete (if one of the components is empty) | – |
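For illustration, the grade table can be expressed as a simple lookup. This is a hypothetical helper (the function and constant names are assumptions), using the integer score ranges exactly as listed above:

```python
# (grade, lower bound, grade point) in descending order, per the table above
GRADE_SCALE = [
    ("A", 80, 4.0), ("A-", 78, 3.7), ("B+", 74, 3.3), ("B", 70, 3.0),
    ("B-", 65, 2.7), ("C+", 60, 2.3), ("C", 55, 2.0), ("C-", 50, 1.7),
    ("D+", 45, 1.3), ("D", 40, 1.0), ("E", 0, 0.0),
]

def letter_grade(score):
    """Map a 0-100 score to (letter grade, grade point)."""
    for grade, lower, point in GRADE_SCALE:
        if score >= lower:
            return grade, point
    return "E", 0.0

assert letter_grade(80) == ("A", 4.0)
assert letter_grade(62) == ("C+", 2.3)
assert letter_grade(39) == ("E", 0.0)
```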
- Study monitoring is conducted every semester by the Head of the Study Program and faculty leaders, assisted by academic advisor (PA) lecturers. The minimum academic achievement per semester is a GPA of 2.00 and 12 credits. Students whose achievement falls below this minimum standard risk being dropped from study; such students are given probationary semesters until the third and/or fourth semester.
- Students are declared to have dropped out of study at the end of semester 4, 8 or 14 if they do not meet the minimum academic achievement. A certificate of having attended college is given to students who drop out of study or resign. Resignation can be submitted by students at any time within the study period.
- The Intensive Semester (SI) is held once a year, on the even semester line, for one full month. The Intensive Semester serves as a means of improving grades (repeating) for courses that have been taken, whether passed or not. The credit load in the Intensive Semester is limited to no more than 16 (sixteen) credits, and the maximum grade obtainable is B+.
- Study Load and Study Period
a. Terms of credit load and study period
| No. | Provision | Limit |
|---|---|---|
| 1 | Maximum number of credits per semester | 24 credits |
| 2 | Minimum credit load per course | 2 credits |
| 3 | Maximum credit load per course | 4 credits |
| 4 | Minimum number of semesters for the undergraduate program | 8 semesters |
| 5 | Maximum number of semesters for the Undergraduate Study Program (excluding academic leave) | 10 semesters |
| 6 | Maximum academic leave | 4 semesters |
| 7 | Total minimum number of credits for the Undergraduate Study Program of Law | 246 credits |
b. Provisions for the number of credits taken each semester are based on the IP (Achievement Index) of the latest KHS (Study Result Card).
Medina’s oldest coffee shop, serving award-winning coffee, pastries, salads, wraps and paninis.
Cups Cafe
Where Everything’s Free. Teenagers today are searching for a place to go and someone who will listen. Cups is a place where young people can go
Second Look Books & Brews
Quaint used bookstore and full service coffee bar located in historic downtown Lodi.
Intermittent fasting is an eating pattern where you cycle between periods of eating and fasting.
Numerous studies show that it can have powerful benefits for your body and brain.
Here are 10 evidence-based health benefits of intermittent fasting.
1. Intermittent Fasting Changes The Function of Cells, Genes and Hormones
When you don’t eat for a while, several things happen in your body.
For example, your body initiates important cellular repair processes and changes hormone levels to make stored body fat more accessible.
Here are some of the changes that occur in your body during fasting:
Why an evidence-driven approach is the best way for businesses to support workplace mental health
Mental health has never been higher on the agenda for businesses. It is easy to see why, as even prior to COVID-19, anxiety and depression were estimated to cost the global economy over $1 trillion every year in lost productivity. The exodus from offices in 2020 has presented further challenges and raised big questions about future ways of working.
With the global corporate wellness market forecast to reach $66 billion by 2022, many employees will be familiar with the range of workplace mental health initiatives that a growing number of businesses offer their staff – from yoga and mindfulness to flexible working. But despite the prevalence of different approaches, we’re yet to understand what works, for who, and why.
The absence of a deep and robust evidence base for approaches to supporting workplace mental health is a problem and can lead to well-intentioned businesses making critical and sensitive decisions in the dark. At best, such interventions are working and we just don’t know why or, at worst, they could be causing harm to workforces…
The 6 Dimensions of a Winning Resilience Strategy
CIOs increasingly realize they must prepare for the unexpected. In this view, resilience is no longer about risk mitigation, it’s about adaptability and effortlessly reacting to the next major disruption.
Every chief information officer takes resilience seriously. However, it’s become clear over the disruption of the past year that there are two ways of looking at resilience.
One is to think of it as preparing for the worst. Here, CIOs understand that black swan events can, in theory, happen and that they need to put in place fall-back plans and mitigations to help their businesses survive temporary upheaval.
The notion of an American Dream can be boiled down to a simple concept: a meritocracy in which place of origin and social status do not preclude success for hard workers.
Talk of that dream fading has been present since the Great Recession sucked 9 million jobs out of the economy and knocked down already-depressed wages for millions.
Now, a study published by the Federal Reserve Bank of St. Louis has found a way to measure that decay. It does so by coming up with a simple, mathematical definition of the American Dream as represented by social mobility defined as "the probability that a child born to parents in the bottom fifth of the income distribution makes the leap all the way to the top fifth of the income distribution."
Calculated in this manner, the chances of achieving the American Dream are nearly twice as high in Canada as they are in the US.
In the US, children born to parents in the bottom fifth of the income distribution have a 7.5% chance of reaching the top fifth, according to Stanford's Raj Chetty, the paper's author.
For the UK, that figure is 9%, while Danish children at the lower rung of the income ladder have an 11.7% chance of climbing to the top. In Canada the figure goes as high as 13.5%.
While those differences might seem fairly small, Chetty explains why they are actually pretty huge.
"When some people initially see these numbers, they sometimes react by saying, 'Even in Canada, which has the highest rates of upward mobility, the rate of success doesn't look all that high. You only have a 13.5% chance of reaching the top if you start out at the bottom,'" Chetty writes.
"It is important to remember that, unfortunately, no matter what you do, you can't have more than 20% of people in the top 20%. As such, these differences are actually quite large."
Upward mobility also varies a great deal within the US, Chetty adds, as the map makes clear.
Some weeks ago, I was studying some papers and talks by Jack Ng. I am going to share with you what I have learned from them, about the wonderful subject of the “quantum spacetime foam”.
Some topics and ideas I will cover: holographic cosmology, MONDian Dark Matter and Dark Energy, plus some other uncommon subjects.
Firstly, let me quote J.A.Wheeler:
“Probed at the smallest scales, spacetime appears to be very complicated”
Something akin in complexity to a turbulent froth, that Wheeler himself dubbed “spacetime foam”.
The big idea of quantum spacetime and quantum gravity is that spacetime itself could be "foamy" and "emergent" on very small scales, while reducing to the smooth classical picture at larger ones, so it fits both the classical and quantum theories somehow. Even spacetime topology could change at the quantum level! But how large are spacetime fluctuations? How foamy is the quantum spacetime?
A simple argument based on known theories provides interesting answers. On general grounds, we expect a certain uncertainty in the measurement of any distance $l$ (we could call this the principle of the microscopes: we alter spacetime when we observe it):

$\delta l \gtrsim l^{1-a} l_P^a$

On average, we expect the above bound, some kind of "generalized uncertainty principle" (GUP). There, $l_P$ is the Planck length, defined by

$l_P = \sqrt{\dfrac{\hbar G}{c^3}} \approx 1.6\times 10^{-35}\ \mbox{m}$

and $a \sim 1$ is some parameter related to the theory of quantum gravity we consider. Models of quantum gravity provide different values of $a$, typically $a = 1/2$ (random walk models) or $a = 2/3$ (holographic models), though other values could be possible as well! One can even imagine theories in which the effective quantum gravity length scale is much larger than $l_P$, so the scale where quantum gravity appears could, in principle, be closer to the electroweak/quark scale. At least from these simple arguments it could. Of course, there are many other arguments that make such a claim more unlikely, but I want to be open minded at this point.

Therefore, one of the most general GUPs (Generalized Uncertainty Principles) in Quantum Gravity provides the general bound

$\delta l \gtrsim l^{1-a} l_P^a$
Two accepted theories provide similar bounds.
1) Quantum Mechanics. The Margolus-Levitin theorem states an upper bound for any computational rate $\nu$ in terms of the available energy $E$ as follows:

$\nu \leq \dfrac{2E}{\pi\hbar}$
It is a very well understood bound in the world of Quantum Computing.
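To get a feel for the numbers, here is a quick Python sketch (my own illustration, not from Ng's papers) of the Margolus-Levitin bound evaluated for one kilogram of mass-energy:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

E = 1.0 * c**2           # energy of 1 kg of mass-energy, joules
rate = 2 * E / (math.pi * hbar)   # Margolus-Levitin bound, operations per second
print(f"max rate ~ {rate:.1e} operations per second")
```

The result is about $5\times 10^{50}$ operations per second, the famous rate of Lloyd's "ultimate laptop".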
2) General Relativity. In order to prevent black hole formation, we must impose $l \geq r_s$, i.e., the size of any system should be greater than the Schwarzschild radius or black holes (BH) arise! Since

$r_s = \dfrac{2GM}{c^2}$

from general relativity.
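For intuition about this GR side of the bound, a small sketch (my own, using standard values of $G$ and $c$) evaluating the Schwarzschild radius, e.g. for one solar mass:

```python
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

def schwarzschild_radius(M):
    """r_s = 2GM/c^2: pack a mass M tighter than this and a black hole forms."""
    return 2 * G * M / c**2

M_sun = 1.989e30         # solar mass, kg
r_s_sun = schwarzschild_radius(M_sun)
print(f"r_s(Sun) ~ {r_s_sun:.0f} m")   # about 3 km
```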
Therefore, we will see that quantum fluctuations of spacetime and spacetime topology are inevitable in almost any Quantum Gravity approach. In fact, there are some recent speculations about holographic foam cosmology and MONDian Dark Matter that I will explain as well.
In the rest of this post, I will generally use natural units $\hbar = c = k_B = 1$, and several ideas will emerge:
1) The critical role of the critical energy density of the Universe.
2) The seemingly inevitable existence of Dark Matter (DM) and Dark Energy (DE).
3) Non-locality and dark energy in the framework of “infinite statistics”.
4) MOND/Cold Dark Matter duality.
Suppose that $\delta l$ is the accuracy with which we can measure any length $l$.
1st Thought (Gedanken) experiment
Suppose there is some mass $M$ (a clock) with size/diameter equal to $\delta l$, and we use it to probe distances $l$. Therefore, from the previous arguments, we get:

i) Quantum Mechanics. The quantum spread of the clock's wave packet after a time $t = l/c$ obeys

$\delta l(t) \geq \delta l(0) + \dfrac{\hbar t}{2M\,\delta l(0)}$

and thus, neglecting the first term on the right-hand side, we get

$\delta l^2 \gtrsim \dfrac{\hbar l}{2Mc}$

ii) General Relativity.

From

$\delta l \gtrsim r_s = \dfrac{2GM}{c^2}$

we obtain (up to a factor 2)

$\delta l \gtrsim \dfrac{GM}{c^2}$

or

$M \lesssim \dfrac{c^2\,\delta l}{G}$

Now, we combine i) and ii), multiplying the two above results, and it reads

$\delta l^3 \gtrsim \dfrac{\hbar G}{c^3}\, l = l\, l_P^2$

Therefore, we get the nice result

$\delta l \gtrsim l^{1/3} l_P^{2/3}$

from the QM+GR combination!
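If you want to play with this bound numerically, here is a minimal Python sketch of $\delta l \sim l^{1-a} l_P^a$ (my own illustration; $a = 2/3$ is the holographic case):

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s

l_P = math.sqrt(hbar * G / c**3)  # Planck length, ~1.6e-35 m

def delta_l(l, a=2/3):
    """Minimum length uncertainty delta_l ~ l^(1-a) * l_P^a."""
    return l**(1 - a) * l_P**a

# Holographic model (a = 2/3): delta_l = (l * l_P^2)^(1/3)
for l in (1.0, 1.3e26):  # 1 metre, and roughly the Hubble radius
    print(f"l = {l:.1e} m -> delta_l = {delta_l(l):.1e} m")
```

Even over a Hubble-size distance the holographic uncertainty only reaches femtometre order, which is why detecting spacetime foam is so hard.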
2nd Thought (Gedanken) experiment
The Universe has 3 dimensions of space and one single time dimension. It fits observations. The holographic idea states that all the information in the spacetime can be coded into a 2d surface: the maximum information in any region of spacetime is bounded by the area of that region. That is the holographic principle in the formulation of G. 't Hooft, Susskind, Bousso, and others. The number of degrees of freedom (DOF) in a region of size $l$ is bounded according to the expression

$N \lesssim \dfrac{A}{4 l_P^2} \sim \dfrac{l^2}{l_P^2}$

in Planck units!
The origin of this holographic principle rests in the physics of BH. The BH entropy is proportional to the horizon area, not to the volume as we would expect from a naive Quantum Field Theory approach. That is, $S \propto A$. More precisely,

$S_{BH} = \dfrac{k_B c^3 A}{4 G \hbar} = \dfrac{k_B A}{4 l_P^2}$
Let me point out that I consider this last equation as one of the most beautiful equations of current theoretical physics.
Now, divide a given cube of volume $l^3$ into smaller cubes of infinitesimal size $\delta l$. Assume that there is 1 DOF per small cube. Then,

$N = \left(\dfrac{l}{\delta l}\right)^3$

Then, the holographic principle provides

$\left(\dfrac{l}{\delta l}\right)^3 \lesssim \dfrac{l^2}{l_P^2} \quad\Longrightarrow\quad \delta l \gtrsim l^{1/3} l_P^{2/3}$

Therefore, the holographic principle provides this holographic GUP!
3rd Thought (Gedanken) experiment
Take any GPS and a good clock. How accurate can these clocks be to map out a spacetime volume with radius $l$ over a given time $T$? Note that the clock has a mass $M$.

The answer is quite straightforward. A spacetime "volume" has dimensions $l^3 \times T$. Using the Margolus-Levitin theorem, the maximal number of ticks (operations) is

$\#\,\mbox{ops} \leq \dfrac{2ET}{\pi\hbar} = \dfrac{2Mc^2 T}{\pi\hbar}$

Thus, we have a bound on the number of cells into which the volume can be resolved. To prevent BH formation, $M \lesssim \dfrac{l c^2}{G}$, and then (taking $T \sim l/c$)

$\#\,\mbox{ops} \lesssim \dfrac{l^2}{l_P^2}$

The requirement of maximal spatial resolution implies that the clocks tick only ONCE, and every cell occupies a volume

$\dfrac{l^3}{l^2/l_P^2} = l\, l_P^2$

And using the geometric mean to average the above spatial separation, taking the cubic root, we get

$\delta l \sim \left(l\, l_P^2\right)^{1/3} = l^{1/3} l_P^{2/3}$

That is, we recover the holographic GUP!
Remark: Maximal spatial resolution requires maximal density or packing
Maximal spatial resolution yields the holographic principle for bits, i.e.

$N \lesssim \left(\dfrac{l}{l_P}\right)^2$

However, IF we spread the cells over both space and time, the temporal resolution should be expected to be similar to the spatial resolution, and it should imply averaging the spatial separation over the whole spacetime (we should not only average over space!). Then, the result would be

$\delta l \sim \left(l\, l_P\right)^{1/2} = l^{1/2} l_P^{1/2}$

and it also gives the time separation of two successive ticks, that is, $\delta t \sim \delta l/c$. The interpretation of this alternative scenario is similar to that of a random walk model! I mean, $\delta l \sim l^{1/2} l_P^{1/2}$ is similar to the expected fluctuation accumulated in a random walk. That is, the time to communicate with the neighbours (closest cells) is larger than the bound provided by holographic arguments.
From experiments, we could compare these two scenarios by studying the coherence of photons received from distant galaxies. The idea is as follows. The spacetime fluctuations could cause a loss of coherence in photon propagation! The size of these fluctuations translates into a phase uncertainty

$\Delta\phi \sim 2\pi\,\dfrac{\delta l}{\lambda}$

and a coherent image (e.g. Airy rings) requires $\Delta\phi \lesssim 2\pi$. In fact, the observed Airy rings from the distant source PKS 1413+135 provide the bound

$\Delta\phi < 2\pi$

It shows that the holographic idea is compatible with observation and, in fact, that the above random walk model is ruled out, since the random walk prediction for $\Delta\phi$ exceeds this bound.

Indeed, it is expected that the Very Large Telescope Interferometer will reach a resolution able to probe values of $a$ beyond the random walk case, and, in principle, we could be able to confirm or rule out the holographic idea! Therefore, quantum gravity and the spacetime foam are testable!
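As a rough numerical illustration of why interferometry discriminates between the models (my own sketch; the distance and wavelength are only ballpark figures for PKS 1413+135 observed in the near infrared):

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8
l_P = math.sqrt(hbar * G / c**3)   # Planck length, metres

Gpc = 3.0857e25              # metres
L = 1.2 * Gpc                # rough distance to the quasar PKS 1413+135
lam = 1.6e-6                 # observation wavelength, metres (near infrared)

def cycles(a):
    """Accumulated phase uncertainty, in units of 2*pi, for exponent a."""
    return (L**(1 - a) * l_P**a) / lam

rw_cycles = cycles(0.5)      # random walk model
holo_cycles = cycles(2/3)    # holographic model
print(f"random walk: {rw_cycles:.1e} x 2*pi, holographic: {holo_cycles:.1e} x 2*pi")
```

The random walk model accumulates a phase error of many cycles, which would wash out the Airy rings that are actually observed, while the holographic model predicts an utterly negligible blurring at this wavelength.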
Spacetime foam and Cosmology
The Universe can be thought of as some kind of computer. It performs operations, and the software is the known physical laws. In fact, from basic Statistical Mechanics, the number of bits of information is related to the entropy by

$I = \dfrac{S}{k_B \ln 2}$

Therefore, the number of bits should be proportional to the horizon area in Planck units, $I \sim (R_H/l_P)^2$.

If we know the amount of information in a given volume and the Hubble radius, we can calculate the distance between the closest cells/bits to be $\delta l \sim R_H^{1/3} l_P^{2/3}$. It implies that there is a time $t \sim \delta l/c$ to communicate between those regions.

If we know the energy density and the Hubble radius, we can use the ML theorem to estimate the computational rate; it gives $\nu \sim E/\hbar$, with $E$ the total energy inside the Hubble volume.

If we know the information in a given volume (or surface) and the operation rate, it implies that a single bit flips once every $t \sim I/\nu$ according to the above arguments!
Remarks:
(1) Ordinary matter maps out spacetime with accuracy corresponding to a random walk model.
(2) A random walk model is ruled out by current experiments, and then, spacetime can be mapped out finer with holographic ideas! This idea parallels what we know from Astronomy, Astrophysics and Cosmology: there exists unconventional (“dark”) matter/energy with better microscopy to map out the spacetime geometry and topology!
(3) The observed critical cosmic density could support the holographic idea. This is an attractive idea that we must test with further experiments and observations.
The spacetime foam idea implies some kind of holographic foamy Cosmology. There are two main quantities, the Hubble constant $H$ and the Hubble radius $R_H = c/H$. Moreover, we have the following interesting quantities:

a) Critical energy density:

$\rho_c = \dfrac{3H^2 c^2}{8\pi G}$

b) Bits of information in the whole Universe:

$I \sim \left(\dfrac{R_H}{l_P}\right)^2$

c) Average energy per bit:

$E_{bit} \sim \dfrac{\rho_c R_H^3}{I} \sim \dfrac{\hbar c}{R_H}$

d) Dark energy. It acts like some kind of (dynamical?) cosmological constant over very large scales:

$\rho_\Lambda \sim \rho_c$

Dark energy carries some class of long-wavelength quanta (bits?) and a really tiny amount of energy per quantum, since

$E \sim \hbar H \sim 10^{-33}\ \mbox{eV}$

Moreover, the critical density behaves as

$\rho_c \sim \dfrac{H^2}{G} \sim \left(R_H\, l_P\right)^{-2}$

in natural units. If we define the scale factor to be $a(t)$, we know that matter dilutes as $\rho_M \propto a^{-3}$, and for radiation-like fluids $\rho_R \propto a^{-4}$; neither is enough to account for the recent/present acceleration of the Universe in the cosmic expansion. Therefore, dark energy/the cosmological constant/vacuum energy seems to exist!!!! It has an equation of state like $P = w\rho$ with $w \approx -1$.
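As a sanity check on the critical density, here is a one-liner sketch (assuming $H_0 \approx 70$ km/s/Mpc) of $\rho_c = 3H_0^2/(8\pi G)$ expressed as a mass density:

```python
import math

G = 6.67430e-11                  # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22                  # metres per megaparsec
H0 = 70e3 / Mpc                  # Hubble constant in s^-1 (assuming ~70 km/s/Mpc)

rho_c = 3 * H0**2 / (8 * math.pi * G)   # critical mass density of the Universe
print(f"rho_c ~ {rho_c:.2e} kg/m^3")
```

The answer is about $9\times 10^{-27}$ kg/m³, i.e. the mass of a few hydrogen atoms per cubic metre.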
The point with Dark Energy is the following. Suppose that there are some "dark" interactions between dark energy and dark matter (DE/DM). They could give transitions between accelerated and decelerated phases of the cosmic expansion! In fact, from Cosmology, the Universe has gone through three stages after the inflationary phase:
1st. Radiation dominated stage: $a(t) \propto t^{1/2}$, with $\rho_R \propto a^{-4}$.
2nd. Matter dominated stage: $a(t) \propto t^{2/3}$, with $\rho_M \propto a^{-3}$.
3rd. (Current era) Lambda/Dark energy dominated stage: $a(t) \propto e^{Ht}$, with $\rho_\Lambda \approx \mbox{const}$.
See you in my next spacetime foam post!
| http://www.thespectrumofriemannium.com/2013/09/16/log131-spacetime-foami/?shared=email&msg=fail
Neil Rhodes is an occasional lecturer in the Computer Science and Engineering department at UC San Diego and formerly a staff software engineer at Google. Most recently, he was one of the lecturers at the UCSD Summer Program for Incoming Students (spis.ucsd.edu), as well as at the UCSD Summer Academy for transfer students (academy.eng.ucsd.edu). He has taught Algorithms at the undergraduate and graduate level, as well as classes in Machine Learning, Operating Systems, Discrete Math, Automata and Computability Theory, and Software Engineering. Besides UC San Diego, he has also taught at Harvey Mudd College. Mr. Rhodes holds a B.A. and M.S. in Computer Science from UCSD. He left the Ph.D. program at UC San Diego to found a company, Palomar Software, and spent fifteen years writing software and books on software development, and designing and teaching programming courses for Apple and Palm.
Courses
Algorithmic Toolbox
Advanced Algorithms and Complexity
Algorithmic Toolbox (Arabic-language version)
Algorithms on Graphs
Data Structures
Algorithms on Strings
Genome Assembly Programming Challenge
| https://www.coursera.org/instructor/~46748?authMode=login
Acquiring an infection during a hospital stay is a hazard for patients throughout the world. Over 1.4 million people worldwide are suffering from infections acquired in hospital. Five to ten per cent of patients admitted to modern hospitals in developed countries acquire one or more infections, whereas patients in developing countries are at higher risk, around two to twenty times this figure. Paediatric patients, especially neonates and infants, are at additional risk of infection because of their compromised immune systems. The purpose of this study was to explore the factors which contribute to the spread of infection among children in paediatric wards in a developed and a developing country: England and Thailand.

Method: An ethnographic approach was utilised to identify practices which promote or prevent the spread of infection in each country. Purposive sampling was employed to recruit ten nurses in England and ten nurses in Thailand. Ethical approval was obtained from De Montfort University (DMU), the National Research Ethics Service and the ethical approval committee in Thailand. Non-participant observations and semi-structured interviews were the main methods of obtaining data in clinical settings. Data from the observations and interviews were transcribed and coded using thematic content analysis.

Results: Hospitals in Thailand and England faced the same problems regarding attitudes, values and beliefs which contribute to infection control difficulties in children, particularly poor hand hygiene. Good attitudes and beliefs promote good practice. Moreover, education and training can raise awareness and promote good practice. However, in terms of different cultures and circumstances, the key factors explaining the different implementations between the two countries are resources, lifestyle, and religion.

Conclusion: Even within the same hospital, different backgrounds including education, cultures, policies and support result in different factors which impact on paediatric patients. Individuality and personal responsibility for infection control practice are the most significant factors influencing compliance with best practice. | https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.545417
One central idea of the maker movement is that we shouldn’t learn technology only for the sake of technology but to make something with it. Whatever you make, it can be described as a project and that’s why Project Based Learning (PBL) is so closely connected to maker culture. In this post we are going to talk about the meaning, criteria and benefits of successful project based programming education.
Project Based Learning isn’t a unified learning paradigm but rather a variety of different ideas and practices framed around the concept of making projects in education. It’s often contrasted to a view where learning is very theory-oriented and knowledge is presented as abstract rules without a hands-on context.
Some studies have compared a project-based learning group with a more traditional one and found that students in the PBL group remember the content longer and have a deeper understanding of it (1). On the other hand, it has been emphasized that project-based learning requires new skills from both the students and the teacher. It is a kind of risky business: if the project is successful, the benefits are great, but a lack of project management skills, for example, can lead to poorer outcomes. (2)
But remembering and understanding things better is not the only benefit of project-based learning. If you build a weather station, it's not just about learning physics and programming through the project. You also learn to search for information, collaborate, and deal flexibly with surprises and setbacks. Learning these kinds of meta-skills is often missing from more traditional schoolwork.
The Buck Institute for Education, a non-profit organization based in California, has developed a freely available, research-based framework for High Quality Project Based Learning (HQPBL). Below are the six criteria they suggest. I give examples of how our learning service takes these points into account.
Intellectual Challenge and Accomplishment: “Do students investigate challenging problems, questions, and issues over an extended period of time? Do they focus on concepts, knowledge, and skills central to subject areas and intellectual disciplines?”
Coding is something new to most of our students. Students practice the programming concepts in tasks and creative exercises. Guided by our online-materials, students work individually or in pairs. This way students can proceed at their own pace and spend more time with the parts they find most challenging.
Authenticity: “Do the students engage in work that makes an impact on or otherwise connects to the world beyond school, and to their personal interests and concerns? Do they use the tools, techniques, and/or digital technologies employed in the world beyond school?”
If the aim is to connect projects with students’ personal interests, it’s tempting to simply let the students “do what they want”. Nevertheless, if anything is possible, it’s hard to start making – especially if the programming concepts still feel a bit challenging. Creativity needs some kind of boundaries like examples of possible end-results and a recommended time-limit for completing the project. This kind of scaffolding supports project-work in Mehackit Atelier.
Public product: “Do the students share their work-in-progress with peers, teachers, and others for feedback? Do they exhibit their work and describe their learning to peers and people beyond the classroom?”
For example on our Music Programming track, we have a tradition of organizing an ending concert where students get to present their songs and give feedback to each other. Students return their results to our learning platform where it’s easy to give feedback and make peer assessment as well.
Collaboration: “Do the students work in teams to complete complex tasks? Do the students learn to become effective team members and leaders?”
For example on our Electronics and Programming track, we encourage working in pairs or groups of three. This teaches students group working skills and makes it possible to divide work based on student’s strengths and interests.
Project Management: “Do the students manage themselves and their teams efficiently and effectively throughout a multistep project? Do the students learn to use project management processes, tools, and strategies?”
Handling the creative freedom requires some project management skills, but the good thing is that these skills can be learned! In Mehackit Atelier, the first projects are short and guided. As the projects get more advanced, there’s more creative freedom included.
Reflection: “Do the students learn to assess and suggest improvements in their own and other students’ work? Do they reflect on, write about, and discuss the academic content, concepts, and success skills they are learning?”
In the teacher materials of Mehackit Atelier we encourage the teacher to have this kind of facilitating discussions with the students during the projects. At first it can mean just asking a couple of questions at the right situations. Why do you want to do this project? If the thing you planned doesn’t work, can you make a simpler version of this for the deadline?
In programming education, project based learning can help to connect the abstract concepts to meaningful and personal results. Besides learning about the programming concepts, there’s potential for learning to manage projects, collaborate with other students and reflect critically on one’s own learning process.
Penuel, W. R., & Means, B. (2000). Designing a performance assessment to measure students’ communication skills in multi-media-supported, project-based learning. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans.
Thomas, J. W. 2000. Review on research of project based learning. San Rafael, CA: Autodesk Foundation. | https://mehackit.org/en/blog/why-to-connect-programming-education-with-something-meaningful-part-2/ |
The Sagrada Familia is a prestigious tourist attraction and landmark in Barcelona. It is a stunning work by the great architect Antoni Gaudí, but it was never completed in his lifetime. Donations keep construction going, and it is due to be completed by 2026. The Nativity facade and crypt have been inscribed by UNESCO as a World Heritage Site. Climb up one of the tall spires of the church to enjoy a bird's eye view of Barcelona.
Casa Batlló
Rating: 4.6/5 (1,297 reviews)
UNESCO World Heritage-Cultural Site
Historical Architecture
Casa Batlló is a uniquely designed building in the “Illa de la Discordia” block of Barcelona. It is the visceral, representative work of the genius architect Antoni Gaudí in the most mature stage of his work. The walls of the entire building are covered with colored mosaics, with a roof that looks like fish scales, pillars like human bones and mask-shaped balconies, all full of magical colors, shapes and patterns. Each evening the lights illuminate the building and it becomes even more stunning as it reflects and shimmers in the light.
Casa Mila
Rating: 4.6/5 (1,331 reviews)
Featured Neighbourhood
Casa Mila is located in the famous “Illa de la Discordia” in the heart of Barcelona. It is one of the masterpieces of the architect Antoni Gaudí. Its wave-shaped walls and chimney-topped rooftops are all emblematic of the building and the city it calls home. Casa Mila can be visited day or night. Visits in the day typically start from the famous roof terrace. If you prefer to take a trip at night, you can take the “Secrets of Casa Mila” night tour to learn obscure and interesting facts about the structure.
Park Guell
Rating: 4.5/5 (1,229 reviews)
UNESCO World Heritage-Cultural Site
City Park
Park Güell is an artistic Garden of Eden outside paradise designed by the Catalan architect Antoni Gaudí. It is located on Carmel Hill in Barcelona. In addition to a variety of different forms of architecture, the park also has brightly colored mosaics in the shapes of different animals, such as colorful lizards guarding the gate. A visit to the park offers the opportunity to tour the former residence of Gaudí, located on the winding mountain road of Park Güell, where the architectural visionary lived a quiet life on the mountainside.
| https://uk.trip.com/travel-guide/barcelona-province-9865/
"Turn off your mind, relax and float down stream."
-Tomorrow Never Knows by The Beatles.
Published on Mar 11, 2020 by Studio Zimoun
YouTube: Zimoun : Compilation Video 3.9 (2020) : Selected Works : HD 1920x1080px (22:15)
Using simple and functional components, Zimoun builds architecturally-minded platforms of sound. Exploring mechanical rhythm and flow in prepared systems, his installations incorporate commonplace industrial objects. In an obsessive display of simple and functional materials, these works articulate a tension between the orderly patterns of Modernism and the chaotic forces of life. Carrying an emotional depth, the acoustic hum of natural phenomena in Zimoun's minimalist constructions effortlessly reverberates.
Wikipedia: Zimoun
Zimoun (born 1977) is a Swiss artist who lives and works in Bern, Switzerland. As self-taught artist, he is most known for his sound sculptures, sound architectures and installation art that combine raw, industrial materials such as cardboard boxes, plastic bags, or old furniture, with mechanical elements such as dc-motors, wires, microphones, speakers and ventilators.
artist web site: Zimoun
YouTube channel: Zimoun
Wikipedia: Sound Sculpture
Sound sculpture (related to sound art and sound installation) is an intermedia and time based art form in which sculpture or any kind of art object produces sound, or the reverse (in the sense that sound is manipulated in such a way as to create a sculptural as opposed to temporal form or mass). Most often sound sculpture artists were primarily either visual artists or composers, not having started out directly making sound sculpture. | https://www.williamquincybelle.com/2020/05/zimoun-sound-sculpture.html |
The Principality has now inoculated 10,829 people with a first jab of anti-coronavirus vaccine, amounting to 28.5 percent of the total population. The number who have received the second jab rose to 8,538, or 78.84 percent of those who received the first dose.

The number of Monaco residents who had received a first dose on Thursday, March 18, was 9,750, so the figure is up by 1,079 in the week to Thursday, March 25.
The Grimaldi Forum has a daily capacity of 600 jabs. | https://news.mc/2021/03/26/good-news-on-jabs-but-new-cases-jump/ |
The East Is Red
"The East Is Red", premiered in 1964, was on a much larger scale and higher artistic standards. Therefore, it had a greater influence on the people.
The dance epic was created on the initiative of Premier Zhou Enlai. Employing the art forms of song, dance and poetry, it depicted the arduous struggle of the Chinese people to achieve victory from past hardships. The team of directors and choreographers was led by Chen Yading and Zhou Weizhi. Both of them were artists and administrators in art and literature circles. There were 29 choreographers led by Zha Lie involved in this work and rehearsals for the dance epic lasted for months. The premiere of the work and the following performances were all held at the Great Hall of the People.
"The East Is Red" featured a close-knit structure, powerful presentation, exquisite designs and superb artists. In addition to large amounts of new work, the dance epic also included many excellent songs and dances, which were created after the founding of the People's Republic of China and were popular among the people. These selections were arranged into the whole work appropriately. Nearly all of China's best-known singers, dancers and musicians at that time participated in the show --a total of 3,000 performers, including some art troupes from outside Beijing.
"The East Is Red" described the Chinese people's revolutionary history. The performance also served as a review of China's development in song and dance, as well as a review of the achievements of artists after the founding of the PRC. It became the most significant art performance since 1949. In 1965, "The East Is Red" was adapted into an art film and received accolades from both home and abroad. Even today, three decades after the premiere of the film "The East Is Red", whenever it is staged, it receives a warm welcome from the Chinese people.
| |
Greece, The Roman Republic, and The Roman Empire
This course will explore the rise and decline of Greek and Roman civilizations between the first millennium BCE and the first millennium CE. Specifically, it will focus on the political, economic, and social factors that shaped the development and maturation of these two Mediterranean civilizations during the period of classical antiquity and examine how they influenced the social and cultural development of later generations of Europeans. By the end of the course, the student will understand how these ancient Mediterranean civilizations developed and recognize their lasting influences on European culture. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (History 301)
Historical Methodology: The Art and Craft of the Historian
Historical Methodology will introduce the student to historical research methods and familiarize the student with the tools and techniques that historians use to study the past. The student will learn about the process of modern historical inquiry and gain a better understanding of the diverse resources that historians use to conduct research. The first four units will focus on research methodology and examine how and why historians conduct research on the past. Later units will examine how different historical resources can be used for historical research. By the end of the course, the student will have become familiar with a variety of physical and electronic resources available for historical research. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (History 104)
History of Economic Ideas
The history of economic thought represents a wide diversity of theories within the discipline, but all economists address these three basic questions: what to produce, how to produce it, and for whom. The student will learn that without a clear sense of the discussions and debates that took place among economists of the past, the modern economist lacks a complete perspective. By examining the history of economic thought, the student will be able to categorize and classify thoughts and ideas and will begin to understand how to think like an economist. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (Economics 301)
History of Europe, 1000 to 1800
This course will introduce the student to the history of Europe from the medieval period to the Age of Revolutions in the eighteenth century. The student will learn about the major political, economic, and social changes that took place in Europe during this 800-year period, among them the Renaissance, the Protestant Reformation, European expansion overseas, and the French Revolution. By the end of the course, the student will understand how Europe had transformed from a fragmented and volatile network of medieval polities into a series of independent nation-states by 1800. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (History 201)
History of Europe, 1800 to the Present
This course will introduce the student to the history of Europe from 1800 to present day. The student will learn about the major political, economic, and social changes that took place in Europe during this period, including the Industrial Revolution, the First and Second World Wars, imperialism, and the Cold War. By the end of this course, the student will understand how nationalism, industrialization, and imperialism fueled the rise of European nation-states in the nineteenth century, as well as how world war and oppressive regimes devastated Europe during the 1900s. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (History 202)
Introduction to American Politics
This course serves as an introduction to American government and politics, covering theoretical underpinnings, interactions between public and government (elections, public opinion, public policy, etc.), the structure of government, and so forth. This course also serves as good preparation for further study in political science. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (Political Science 231)
Introduction to Cultural and Literary Expression
This course introduces the history and practice of English as a scholarly discipline. After outlining basic approaches to the text, the course embarks upon a genre-study, devoting each of the four remaining units to a different genre of writing: poetry, the novel, drama, and rhetoric and the critical essay. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (English Literature 101)
Introduction to International Relations
This course seeks to provide a basic understanding of foreign affairs and the fundamental principles of international relations within a political science framework. It will examine the theories of realism and liberalism, which will serve as the foundation for more advanced study in international relations and help students develop the critical thinking skills needed in order to analyze conflicts between states. Additionally, the course will explore issues that relate to the politics of global welfare, such as war, world poverty, disease, trade policy, environmental concerns, human rights, terrorism, the global distribution of wealth, the concept of the balance of power, and what happens in the international system when the balance of power collapses. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (Political Science 211)
Introduction to Philosophy
This course introduces students to the major topics, problems, and methods of philosophy and surveys the writings of a number of major historical figures in the field. Several core areas of philosophy are explored, including metaphysics, epistemology, political philosophy, ethics, and the philosophy of religion. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (Philosophy 101)
Introduction to United States History: Colonial Period to the Civil War
This course will introduce the student to United States history from the colonial period to the Civil War. The student will learn about the major political, economic, and social changes that took place in America during this 250-year period. This free course may be completed online at any time. See course site for detailed overview and learning outcomes. (History 211)
| |
David believes that patient education is paramount: giving patients a clearer understanding of their own condition bolsters engagement and enables them to make more informed decisions about their treatment. This encourages a more positive state of physical and mental wellness, resulting in better overall treatment outcomes. Enthusiastic about the prevention, diagnosis and treatment of foot disorders resulting from injury or disease, David applies the necessary skills to treat conditions that affect people's everyday quality of life, while developing effective patient care pathways.
David is originally from Liverpool, North West England. He obtained his Bachelor of Science degree in Podiatry at the University of Salford in Manchester. Before deciding to study podiatry, and throughout his studies, David worked as a frontline Emergency Response Ambulance Technician. He also served in the British Army as a Combat Engineer and physical training instructor.
David enjoys a broad spectrum of podiatry work, including routine nail and foot care, biomechanics, plantar wart treatments and nail surgery. David has a keen interest in muscle, ligament, tendon and soft tissue injury rehabilitation.
David is happy to be working alongside a vascular surgeon and a dermatologist. This collaboration helps to reduce NSHA wait times for some patients by freeing up doctors' schedules, whilst ensuring that patients have access to continuing foot care through their private health insurance policies should they wish.
In his spare time, David competes in various sporting disciplines but is most at home in the water. He can often be found surfing, swimming or playing underwater hockey. David is a member of the Nova Scotia Podiatry Association. | http://www.vmedical.ca/reboot/ |
Most financial aid will be disbursed to a student's UAF account 10 days before the start of classes, contingent upon certification by Financial Aid that the student is maintaining satisfactory academic progress. For more information, visit the UAF Financial Aid website.
All aid received will be applied to any debt owed to the university first. Once all debt has been paid, the Office of the Bursar will refund any remaining balance. During the initial aid release, a high volume of refunds are processed in the order in which the aid has been applied. Remember: Any debt owed to the university will be deducted from your refund.
How can I receive my refund?
Refunds are processed in one of three ways: direct deposit, check or credit card refund, depending upon the method of original payment (if applicable). No cash refunds are available on amounts over $10, although they may be applied to your PolarExpress card; please note that once applied, the funds are nonrefundable. For refunds of less than $10, the Office of the Bursar does not issue checks. It is the student's responsibility to check UAOnline and contact the Office of the Bursar to receive such a refund as cash, or it may be applied to the student's PolarExpress card as a nonrefundable payment. Due to increased volume during fee payment, this may take additional time. Direct deposit refunds for less than $10 are processed as normal.
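As an illustration of the rules above (aid pays down debt first; no checks under $10; direct deposit handles any amount), here is a minimal sketch in Python. The function names and the exact encoding of the rules are ours, not UAF's.

```python
def apply_aid(aid, debt):
    """Aid pays down any university debt first; the remainder is refundable."""
    applied = min(aid, debt)
    return debt - applied, aid - applied  # (remaining debt, refundable balance)

def refund_method(amount, direct_deposit):
    """Choose a disbursement route for a refundable balance."""
    if amount <= 0:
        return "none"
    if direct_deposit:
        return "direct deposit"  # processed as normal, even under $10
    if amount < 10:
        # No check is issued; the student claims cash or PolarExpress credit.
        return "claim as cash or PolarExpress credit"
    return "check"

# Example: $5,000 of aid against a $1,200 balance leaves $3,800 refundable.
debt_left, refundable = apply_aid(5000, 1200)
print(debt_left, refundable, refund_method(refundable, direct_deposit=False))
```

This is only a reading aid for the policy text; the actual processing order and timing are as described by the Office of the Bursar above.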
When can I expect to receive my refund?
To determine if financial aid has been posted and a refund is due, students should check UAOnline for the most up-to-date information. If the "Current Amount Due" shows a negative balance, some or all of the aid has been posted and a refund will be processed. Once the "Current Amount Due" shows a zero balance ($0), the refund has been processed.
We make every effort to process refunds as quickly as possible, striving for 2-3 business days, but during peak times it can take up to two weeks depending on volume. Sign up for direct deposit to receive your refund as quickly as possible. Once processed, direct deposit refunds can take up to five business days to reach a student’s bank account; printed checks can take up to two weeks.
What if I add or drop a class after my refund has been processed?
If you make any changes to your account once a refund has been processed, you are responsible for paying the remaining balance by the fee payment deadline or holds and late fees may be assessed. See the UAF Registration Guide for more deadline dates.
Policies for specific refund types
Direct deposit
After the Office of the Bursar processes the refund, it takes up to five business days for the funds to be direct deposited into a student's designated bank account. If the account information is inaccurate and the refund cannot be processed, a check will be sent to the student's mailing address as indicated in UAOnline. It is the student's responsibility to ensure their mailing address is current. To see the account information for direct deposit, log onto UAOnline. The Office of the Bursar does not have access to this information.
**UAF employees: You must enroll in direct deposit for your student account even if you have enrolled in direct deposit for payroll or travel.**
Personal checks
If tuition and fees were paid with a personal check, the refund will be processed once your check has cleared the bank. This includes electronic check payments made via UAOnline. The waiting period may be reduced by bringing proof to the Office of the Bursar that your check has cleared.
Credit card
If tuition and fees are paid for the semester by credit card, the credit card will be refunded (up to the amount paid). Any remaining credit balances will be refunded by check or direct deposit. If your tuition was paid through external sources such as financial aid, federal loans, scholarships or grants, you will receive your refund as a check sent to your mailing address of record or direct deposited in your bank account.
Important information
All refunds are subject to federal regulations. A refund received due to dropped classes or a total withdrawal may render a student ineligible for scholarships or financial aid. As a result, funds may be returned to the lender or grantor pursuant to all applicable rules and regulations, resulting in a balance due to the university. Students are encouraged to contact Financial Aid before dropping or withdrawing from classes to determine the impact of these actions. Any aid received through external funding (e.g., Department of Labor, Department of Vocational Rehabilitation) will generally be returned to the original funding source. Students receiving this form of aid should contact their campus representative for assistance. | https://www.uaf.edu/bursar/for-students/refund-processing.php |
“The era of danger (to the republic) ends; the era of difficulties begins,” the ideologues of the French Republic optimistically proclaimed. Nonetheless, the recent Paris terrorist attacks and numerous other failed attempts have unearthed major ideological fault lines, underpinned by considerable social alienation among some of the republic's very own children.
Undoubtedly, numerous ideological underpinnings play important roles in such acts of violence. Moreover, several international contexts feature unconcealed ethno-religious line-ups whose domestic impact is hard to underestimate. In particular, the Middle East appears to provide an inexorable source of ideological activism, with ongoing conflicts and pervasive injustice inflicted on ethnic and religious grounds. Notwithstanding this, I wish here to highlight other internal political factors, within the borders of Europe, that could provide powerful grounds for the legitimation of violence.
Civic rights and duties
In a recent speech at the Arab World Institute, President Francois Hollande emphatically underlined the “republican rights and duties” of all French citizens, including the Muslim minority. Prime Minister Manuel Valls, denouncing social discrimination at the spatial level, went as far as likening certain impoverished parts of France to instances of social apartheid. Such affirmations, while clearly highlighting some of the socio-spatial factors of segregation, and even institutionalized top-down or bottom-up processes of ghettoization, do not go far enough to convey the republican normative significance of these destructive political realities, trapped as they are in the prevailing imperative of moral condemnation of violence.
Furthermore, such discourses of civil reconciliation, based on a consensus on the principles of rights and punishment, appear to provide a mere fig leaf and a modicum of consolation in the face of a fundamental deficiency in the republican provisions meant to protect and promote the central principle of liberty at both the private and public levels, the principle which characterises and indeed qualifies a republican political system.
Republican political theory
As far as republican ideology is concerned, it is important to set out its main tenets in order to assess their socio-political pertinence. The first pillar of the republican doctrine of the state, as envisaged by its main ideologues, is a provision against what has been labelled imperium, i.e. the state becoming a dominating force in the life of free citizens. At this vertical level, much has been done in major western democracies to regulate and balance public power and to make the political system more representative, transparent and accountable, though much remains to be done on these grounds to attain the republican ideals.
It is the second pillar of republican political doctrine, however, that does not seem to have found adequate space in systems claiming to uphold republican ideals. This principle explicitly concerns guidelines against private domination: dominium, to use the classic republican terminology. It relates to the horizontal management of power relations, in the sense that republican citizens should not only be free from economic and material domination but should also be empowered by means of various provisions of distributive fairness and social justice. Unlike its liberal counterpart, the republican humanist state does not stop at mere non-interference in private spaces, leaving a great amount of autonomy to the self-regulating balance of horizontal powers. Rather, a republican state is authorized to positively regulate, and even aggressively intervene in, social spaces to guarantee the absence of cultural, normative and economic domination.
A republican failure
It is not hard to observe that this is the very area where the respublica seems to have blatantly failed. This fundamental failure to provide adequate republican guarantees of social justice has turned various discriminatory spaces into fertile breeding grounds for the social malaise of most western democracies. At a wider global level, a new study has alarmingly highlighted that we are approaching the point where 1% of the world population possesses as much as the remaining 99%. Thomas Piketty's prominent work in this field is a good indicator that global economic development has not translated into a reduction of social inequality, resulting in considerable fragmentation and alienation due to inequitable access to social resources.
As far as France is concerned, although much has been accomplished on these counts in comparison to most other western democracies, it is evident that a significant portion of the republican ideals has been systematically overlooked by a system that has shown itself more inclined to accommodate the economic imperatives of liberal politics. A considerable portion of the republican directives for social justice, in the form of equality and class mobility, appears to have been side-lined by the overriding rules of the free market and the paralysing principles of state neutrality.
In this light, if we consider each act of terrorism and its perpetrators, it is not difficult to identify pronounced traces of the social injustice that should have been at the heart of every political programme of a republican polity. An accentuated deficiency in promoting civic republican values through the so-called “politics of belonging” has also played a prominent role. The attackers at the Charlie Hebdo magazine were examples par excellence of those portions of the republic that have fallen outside the purview of inclusive republican provisions.
Hence, although ideology continues to loom large in the immediate background, the failure of the state itself to fulfil basic republican political promises and to guarantee adequate levels of distributive fairness in the socio-economic resources of the commonwealth should not be undervalued either. The absence, or even inadequacy, of such republican provisions of comprehensive social justice not only confronts the republican model with a veritable “era of difficulty” but also exposes the entire republican political edifice to an existential “era of danger”, fatally threatening the republican social contract as a whole. | https://www.polity.eu/?p=65 |
I’m Acting Lead Environment Artist at CD PROJEKT RED. Currently, my work is focused on Cyberpunk 2077, a project that has become my passion. As a huge fan of the genre, I couldn’t ask for a more compelling challenge. Earlier, I worked on The Witcher 3: Wild Hunt and the final expansion, Blood and Wine. I also collaborated with The Farm 51 on such titles as Painkiller HD, Deadfall Adventures and Get Even. I’ve been a passionate gamer since I was a child. Now, I’ve been living the dream in the game development industry for more than 7 years, mostly working on AAA titles.
Building Night City
This talk will share the process of creating Night City, from the early design stage up to the final quality pass, explaining our vision and the key art direction pillars for the dystopian world of Cyberpunk 2077. I will explain how we handcrafted one of the most unique and detailed open-world cities in games, showcase some of the key pipelines and techniques we learned for creating big environments quickly while maintaining consistent quality, and describe how environment art and the other teams in our company go hand in hand to achieve stunning, huge open-world locations that tell deep and compelling stories.
Evolution of art production – discussion panel
The game development world is more demanding than ever: we are creating bigger and more complex worlds. On top of that, we want to find creative ways and solutions for producing art faster and more productively.
Let’s talk with our guests and see how they predict the future of game art development and what we can expect in the years to come. | https://digitaldragons.pl/programme/speakers/kacper-niepokolczycki/ |
Determining the role of dense gas in star formation
We are observing a large, mass-selected sample of dust-continuum traced, star-forming molecular clouds in HCN J=3-2 and HCO+ J=3-2 with ‘Ū’ū. This sample includes clouds in the Central Molecular Zone (CMZ), the Inner Galaxy, and the Outer Galaxy. Dense gas is vital to the star-formation process, and high-resolution observations of this dense gas in a large sample of resolved star-forming sources is crucial to understanding its exact role in regulating star-formation efficiency.
Predictive, empirical relationships of star formation, such as the Kennicutt-Schmidt law, link the scaling of the star-formation rate surface density with the surface density of the gas. This relationship only holds for normal and dwarf galaxies, however: it becomes super-linear in starburst systems and breaks down on the smallest scales of individual giant molecular clouds. When dense-gas observations are used, by contrast, these relationships survive, once again indicating the apparent importance of dense gas in the star-formation process.
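For reference, the Kennicutt-Schmidt relation mentioned above is conventionally written as a power law; the normalisation $A$ and the commonly quoted index $N \simeq 1.4$ are standard literature values, not figures taken from this project description:

```latex
\Sigma_{\mathrm{SFR}} \;=\; A\,\Sigma_{\mathrm{gas}}^{N}, \qquad N \simeq 1.4
```

Dense-gas versions of the relation, $L_{\mathrm{IR}} \propto L_{\mathrm{HCN}}$, are commonly reported to stay close to linear from Galactic clumps up to entire galaxies, which is the sense in which the relationship "survives" when dense-gas tracers are used.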
The key science outcomes and goals of this project are:
- Understand the impact of Galactic environment on the physics of dense gas, allowing for an understanding of how dense gas is produced and intrinsically linked to star formation.
- Distinguish between star-formation theories, and whether the star-formation rate is controlled by the free-fall time within bound structures or the amount of dense gas available for star formation.
- Produce LIR – Lgas relationships linking resolved Galactic clumps, Galactic molecular clouds, extragalactic systems and ULIRGs to study the universality of the star-formation process.
- Determine the cause of variations of the HCN/HCO+ ratio, and how it is linked to the physical conditions caused by Galactic environment.
- Find a sample of extreme star-forming sources using maps of dense-gas mass fraction and a sample of Galactic mini-starbursts using a LIR – Lgas relationship produced using CO maps.
- Link the clump-mass fraction to the star-formation efficiency and clump-formation efficiency.
- Identify outflows and active regions of star formation and determine the infall rates of the gas into individual clumps.
- Provide a legacy sample matching those of extragalactic studies for future studies. | https://www.eaobservatory.org/jcmt/science/large-programs/majors-massive-active-jcmt-observed-regions-of-star-formation/ |
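As a rough sketch of how an LIR – Lgas relationship like those listed above is built, a power law can be fitted as a straight line in log-log space. The luminosity values below are invented for illustration only; they are not survey results.

```python
import math

# Invented luminosities (solar units) for a handful of clumps; real values
# would come from the survey catalogues and dense-gas maps described above.
L_IR  = [1e4, 3e5, 2e6, 8e7, 5e9]   # infrared luminosity
L_gas = [2e2, 9e3, 4e4, 1e6, 9e7]   # dense-gas (e.g. HCN) line luminosity

# A power law L_IR = A * L_gas**alpha is a straight line in log-log space,
# so fit log L_IR = alpha * log L_gas + log A by least squares.
xs = [math.log10(v) for v in L_gas]
ys = [math.log10(v) for v in L_IR]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
alpha = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"best-fit slope alpha = {alpha:.2f}")
```

A near-linear slope in such a fit is what a "universal" dense-gas star-formation relation would look like; deviations with Galactic environment are among the things the survey aims to measure.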
The European e-Leadership Initiative (www.eskills-guide.eu) aims to support communication about e-leadership educational programmes, that is, programmes delivering learning outcomes that contribute to e-leadership skills. The support provided enables expected future e-leadership skills requirements to be defined with the input of employers as well as researchers. A key instrument of the approach is the e-leadership curriculum profile: key stakeholders agree on appropriate sets of learning outcomes, clustered in one or more validated curriculum profile specification documents made available to all stakeholders. Business schools and universities in Europe are then invited to use the self-assessment instruments to map relevant programmes they offer to one of the curriculum profiles developed, which allows them to present their higher (executive) education offers to the interested public in a comparable and transparent way. Under the preliminary governance rules for the initiative, all higher (executive) education offers are to provide a thorough self-assessment, using a standard template, and educational institutions can apply for external quality assessment.
A set of e-leadership curriculum profiles, the template and tool for documenting the key e-leadership features of a business school's or university's programmes, and the catalogue of quality criteria for e-leadership education have been developed and applied by different higher and executive education institutions throughout Europe, following a number of steps. Typically, submissions originate directly from the person responsible for the educational offer. Provided that the origin of the template can be verified, a match with a curriculum profile is presented, and the person responsible for the programme has been duly informed, the educational offer is intended to be included in the published list of e-leadership programmes.
Successful use of the self-assessment tool and mapping of an existing course onto one of the e-leadership curriculum profiles will result in the educational programme being included in the register of e-leadership programmes in Europe, to be published soon on a dedicated web site for the e-Leadership initiative.
So far 18 business schools and universities from 11 European countries have used the self-assessment tool and carried out a mapping of their programmes to a curriculum profile. In several cases this has helped responsible actors identify areas where further development may be required to reach a fully-fledged e-leadership programme as defined by the existing e-leadership curriculum profiles. Besides guiding future programme development, in one case it has resulted in a completely new e-leadership programme (Information Security Management, launched by TIAS and Antwerp School of Management; see above for details), which will start with a first cohort of students on 22 January 2015. In other cases programme revisions have already been carried out, resulting in the development and launch of adapted and further developed courses, for instance at IE Business School in Madrid, to be taught from November 2014 onwards, or the revised Business Enterprise Architecture e-leadership programme currently (October 2014) being taught to a further cohort of students by Henley Business School (UK), TIAS (NL), and Technical University Munich (DE).
At a later stage in the lifetime of the European e-Leadership Initiative, online feedback options will be offered through which the public, including interested students, employers and other stakeholders, will be able to read, review and comment on the standard programme descriptions in the submitted format/template.
In addition, interested business schools and universities may apply for cost-effective, peer-based quality assessment of programmes against the quality criteria, including in particular assessment of conformance to the selected e-leadership curriculum profile. The costs for quality assessment can be reduced where relevant existing national or international accreditation and certification can be taken into account.
On today's blog, discover the works of printmakers Andy Lovell and Luella Martin, whose atmospheric landscapes feature in our current Watts Contemporary exhibition, In Print: Capturing Light.
Luella Martin
Luella Martin grew up and was educated in London. She studied at Hornsey & Byam Shaw Colleges of Art with post-graduate studies at Goldsmiths College, London. She travelled extensively in Europe and lived in Australia for twelve years. She returned to England in 1997 and is now working full-time in her studio on the south coast where she paints and prints her solar etchings.
Solar plate etching is an eco-friendly, modern way of working which uses sunlight (or UV lamp) and tap water to process steel light-sensitive plates without harmful chemicals. The artwork is put onto acetate film and placed over the light sensitive plate. It is exposed to sunlight or UV lamp and then washed out using tap water. After processing and drying the steel plates are inked and wiped by hand and the image is printed on to dampened etching paper 'intaglio' using a traditional etching press. Each of Luella's solar etchings can go through up to 14 different steps between the original idea and the finished framed piece.
'I believe that looking at our surroundings is examining the effect of light striking different materials. In my camera, light is translated into a digital signal and when I make a solar plate etching I use light to transfer information on to the steel plate. Atmosphere and mood come from looking at different times of day, or in different weathers or seasons and my challenge is to find that and to use colour to emphasise my feelings.
Although I reference the real world, my pieces are often about 'nowhere in particular' - a quiet corner of the South Downs, or some reflections in a pond. My etchings are charged with atmosphere - a distillation of visual information mixed with memories and colourful interventions, they are a very personal response.'
Andy Lovell
Andy Lovell studied illustration and printmaking at Liverpool School of Art and Design and now works as a fine art printmaker. He lives and works in Stroud, Gloucestershire and exhibits regularly in galleries across the UK, America and Australia.
'My subject matter divides between landscape, seascape and cityscape. Painting on location is, by its nature, at the whim of the weather which is in constant flux. Capturing the play of light through shadow and highlight, light and dark is key to evoking the essence of the scene in front of you.
Once back in my studio I develop and evolve a silkscreen print or monotypes from the raw paintings that I have produced, often using the white of my paper as my purest white to break the edges of the image. By doing this the sense of light is heightened and plays a more dynamic role in the scene.
The silkscreen process lends a natural simplification to a picture, imbuing it with a pared-back graphic quality, whilst the nuances of the marks and depth of blacks that the monotype process offers are particularly effective at evoking very dramatic lighting in the landscape.'
Interested in seeing Andy Lovell and Luella Martin's works in person? Click here to find out more about In Print: Capturing Light. All works for sale.
Banner image: Andy Lovell, | https://www.wattsgallery.org.uk/about-us/news/meet-andy-lovell-and-luella-martin/ |
Site 1 was at Forman, where SOM content was 4.5% under long-term strip-till, and the previous crop was soybeans. The 15 fertilizer treatments comprised all combinations of N rates of 0, 60, 120, 180, and 240 lbs with sulfur rates of 0, 10, and 20 lbs. The soil had 18 lbs N and 50 lbs S before fertilizer treatments were applied. Each treatment was replicated 5 times. Because S was applied as ammonium sulfate (which contains both S and N), 18 lbs N was contributed to every treatment that received 20 lbs S. Therefore, an equivalent 18 lbs N was added to all N rates. Ear leaf sulfur was analyzed to verify whether it can be used to diagnose the sulfur status of corn and relate it to final yields.
At Carrington, SOM was 3.1% and the previous crop was field pea; N rates were 0, 73, 128, and 155 lbs, and S rates, applied as ammonium sulfate, were 0, 10, and 20 lbs.
Meanwhile at Forman, application of sulfur at 10 lbs significantly improved yields by 12 bushels (a 6% increase). Yields were not different between 10 and 20 lbs S. Average yield at 10 lbs S was consistently greater than at 0 lbs S, for all levels of N (Fig 1).
Yield at 120 lbs N was not statistically different from 180 and 240 lbs N, but was greater than yields at 0 and 60 lbs N. An estimate (prediction) of 193 lbs N was calculated as a rate to produce maximum yield of 227 bushels for this site in 2017. This 193 lbs almost equaled the actual total N applied at 120 lbs N + N credit (18 lbs from soil, 40 lbs from soybeans previous crop, and 18 lbs from ammonium sulfate) for a total of 196 lbs N. This result supports the use of N credits when making N fertilizer recommendations.
Nitrogen use efficiency was calculated as the agronomic efficiency of N use (AEN), which is the unit yield (bushel) produced per unit of N fertilizer applied. Figure 2 shows that AEN declines as N fertilizer rates increased. When N rate is close to 120 lbs N or below, AEN is greater at 10 lbs S than at 0 lbs S.
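The AEN definition above translates directly into a simple calculation: the extra yield relative to the unfertilized control, divided by the N rate. The yield figures in the example below are invented for illustration only and are not values from the trial.

```python
def agronomic_efficiency_n(yield_fertilized, yield_control, n_applied):
    """AEN: additional bushels produced per lb of N fertilizer applied."""
    if n_applied <= 0:
        raise ValueError("N rate must be positive to compute AEN")
    return (yield_fertilized - yield_control) / n_applied

# Illustrative (made-up) yields in bu/ac for a control and two N rates.
# Note how efficiency declines as the N rate increases, as in Figure 2.
low_rate_aen = agronomic_efficiency_n(200, 150, 60)    # 50 extra bu over 60 lb N
high_rate_aen = agronomic_efficiency_n(220, 150, 120)  # 70 extra bu over 120 lb N
print(low_rate_aen > high_rate_aen)
```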
Yields had a weak relationship to ear leaf S content and the N:S ratio, which implies that neither ear leaf S nor the N:S ratio at the reproductive growth stage was a reliable indicator of whether S application would enhance yield.
Nitrogen use efficiency was improved by sulfur but only up to the level where additional N did not improve yields significantly.
Even though the soil test levels were reported, and would be considered high, it must be emphasized that the soil test for S is unreliable and cannot be used to decide whether or not to apply S.
What factors have been used to develop a civilization?
What are the major defining characteristics of a civilization? How have the defining characteristics of past civilizations evolved to better people's lives today? Many factors have been used to develop a civilization. Some have been more effective than others. Throughout this paper, I plan on analyzing the factors that perennial civilizations before our time used to become effective and prosperous. I will also describe what factors they had and how some of those factors became very important to today's society. Most of the perennial civilizations have been revolutionized throughout the years to enhance the way of life even today.
Many people have debated whether civilization was evolutionary or passed down progressively, with people learning from their mistakes. Some people claim certain reasons for civilization to be far greater than the others. Other people claim that all of the reasons in the coming about of civilization played a balanced part. I really don't know which reasons were more influential than the others, but I do know that all these "theories" are hard to prove flawless. Civilization represents the highest level of human organization. But how did civilization get to be the impressive and astonishing way it is today?
"Camera coverage must enable recording of the customer(s) and employee(s) facial features with sufficient clarity to determine identity"
"All camera views of all Limited Access Areas must be continuously recorded 24 hours a day. The use of motion detection is authorized when a Licensee can demonstrate that monitored activities are adequately recorded."
Basis and Purpose – R 306. The statutory authority for this rule is found at subsections 12-43.4-202(2)(b), 12-43.4-202(2)(d), and 12-43.4-202(3)(a)(V), and section 12-43.4-701, C.R.S. Authority also exists in the Colorado Constitution at Article XVIII, Subsection 16(5)(a)(VI). The purpose of this rule is to ensure adequate control of the Licensed Premises and Retail Marijuana and Retail Marijuana Product contained therein. This rule also establishes the minimum guidelines for video surveillance systems for maintaining adequate security.
A. Minimum Requirements. The following video surveillance requirements shall apply to all Retail Marijuana Establishments.
1. Prior to exercising the privileges of a Retail Marijuana Establishment, an Applicant must install a fully operational video surveillance and camera recording system. The recording system must record in digital format and meet the requirements outlined in this rule.
2. All video surveillance records and recordings must be stored in a secure area that is only accessible to a Licensee’s management staff.
3. Video surveillance records and recordings must be made available upon request to the Division, the relevant local jurisdiction, or any other state or local law enforcement agency for a purpose authorized by the Retail Code or for any other state or local law enforcement purpose.
4. Video surveillance records and recordings of point-of-sale areas shall be held in confidence by all employees and representatives of the Division, except that the Division may provide such records and recordings to the relevant local jurisdiction, or any other state or local law enforcement agency for a purpose authorized by the Retail Code or for any other state or local law enforcement purpose.
1. Video surveillance equipment shall, at a minimum, consist of digital or network video recorders, cameras capable of meeting the recording requirements described in this rule, video monitors, digital archiving devices, and a color printer capable of delivering still photos.
2. All video surveillance systems must be equipped with a failure notification system that provides prompt notification to the Licensee of any prolonged surveillance interruption and/or the complete failure of the surveillance system.
3. Licensees are responsible for ensuring that all surveillance equipment is properly functioning and maintained, so that the playback quality is suitable for viewing and the surveillance equipment is capturing the identity of all individuals and activities in the monitored areas.
4. All video surveillance equipment shall have sufficient battery backup to support a minimum of four hours of recording in the event of a power outage. Licensee must notify the Division of any loss of video surveillance capabilities that extend beyond four hours.
1. Camera coverage is required for all Limited Access Areas, point-of-sale areas, security rooms, all points of ingress and egress to Limited Access Areas, all areas where Retail Marijuana or Retail Marijuana Product is displayed for sale, and all points of ingress and egress to the exterior of the Licensed Premises.
2. Camera placement shall be capable of identifying activity occurring within 20 feet of all points of ingress and egress and shall allow for the clear and certain identification of any individual and activities on the Licensed Premises.
3. At each point-of-sale location, camera coverage must enable recording of the customer(s) and employee(s) facial features with sufficient clarity to determine identity.
4. All entrances and exits to the facility shall be recorded from both indoor and outdoor vantage points.
5. The system shall be capable of recording all pre-determined surveillance areas in any lighting conditions. If the Licensed Premises has a Retail Marijuana cultivation area, a rotating schedule of lighted conditions and zero-illumination can occur as long as ingress and egress points to Flowering areas remain constantly illuminated for recording purposes.
6. Areas where Retail Marijuana is grown, tested, cured, manufactured, or stored shall have camera placement in the room facing the primary entry door at a height which will provide a clear unobstructed view of activity without sight blockage from lighting hoods, fixtures, or other equipment.
7. Cameras shall also be placed at each location where weighing, packaging, transport preparation, processing, or tagging activities occur.
8. At least one camera must be dedicated to record the access points to the secured surveillance recording area.
9. All outdoor cultivation areas must meet the same video surveillance requirements applicable to any other indoor Limited Access Areas.
1. The surveillance room or surveillance area shall be a Limited Access Area.
2. Surveillance recording equipment must be housed in a designated, locked, and secured room or other enclosure with access limited to authorized employees, agents of the Division and relevant local jurisdiction, state or local law enforcement agencies for a purpose authorized by the Retail Code or for any other state or local law enforcement purpose, and service personnel or contractors.
3. Licensees must keep a current list of all authorized employees and service personnel who have access to the surveillance system and/or room on the Licensed Premises. Licensees must keep a surveillance equipment maintenance activity log on the Licensed Premises to record all service activity including the identity of the individual(s) performing the service, the service date and time and the reason for service to the surveillance system.
4. Off-site Monitoring and video recording storage of the Licensed Premises by the Licensee or an independent third-party is authorized as long as standards exercised at the remote location meet or exceed all standards for on-site Monitoring.
5. Each Retail Marijuana Licensed Premises located in a common or shared building, or commonly owned Retail Marijuana Establishments located in the same local jurisdiction, must have a separate surveillance room/area that is dedicated to that specific Licensed Premises. Commonly owned Retail Marijuana Establishments located in the same local jurisdiction may have one central surveillance room located at one of the commonly owned Licensed Premises which simultaneously serves all of the commonly-owned retail facilities. The facility that does not house the central surveillance room is required to have a review station, printer, and map of camera placement on the premises. All minimum requirements for equipment and security standards as set forth in this section apply to the review station.
6. Licensed Premises that combine both a Medical Marijuana Business and a Retail Marijuana Establishment may have one central surveillance room located at the shared Licensed Premises. See Rule R 304 – Medical Marijuana Business and Retail Marijuana Establishment: Shared Licensed Premises and Operational Separation.
2. All surveillance recordings must be kept for a minimum of 40 days and be in a format that can be easily accessed for viewing. Video recordings must be archived in a format that ensures authentication of the recording as legitimately-captured video and guarantees that no alteration of the recorded image has taken place.
3. The Licensee’s surveillance system or equipment must have the capabilities to produce a color still photograph from any camera image, live or recorded, of the Licensed Premises.
4. The date and time must be embedded on all surveillance recordings without significantly obscuring the picture.
6. After the 40 day surveillance video retention schedule has lapsed, surveillance video recordings must be erased or destroyed prior to: sale or transfer of the facility or business to another Licensee; or being discarded or disposed of for any other purpose. Surveillance video recordings may not be destroyed if the Licensee knows or should have known of a pending criminal, civil or administrative investigation, or any other proceeding for which the recording may contain relevant information.
1. All records applicable to the surveillance system shall be maintained on the Licensed Premises. At a minimum, Licensees shall maintain a map of the camera locations, direction of coverage, camera numbers, surveillance equipment maintenance activity log, user authorization list, and operating instructions for the surveillance equipment.
2. A chronological point-of-sale transaction log must be made available to be used in conjunction with recorded video of those transactions. | https://www.arcdyn.com/solutions/cannabis-industry-security-systems/colorado-med-cannabis-security-compliance.html |
500 Women Scientists, standing up for inclusivity and integrity in the scientific enterprise.
Dear President Trump,
You said you wanted to heal the country, bridge our ideological divides, and be a president for everyone. Now that you have been sworn in, we want to encourage you to do just that.
Science touches the lives of every person, and as President, you have the opportunity to set the priorities of the vast American scientific enterprise. Scientific progress is built on diversity and innovation and only works when we encourage openness and contribution from everyone - scientists of different genders, races, classes, creeds, cultures, and perspectives. Encouraging such inclusivity ensures that scientific research is critically evaluated from every angle. Just like a business, science fails when it is done in a vacuum with a small number of like-minded voices. Investing in science and education also translates to higher wages and supports our country’s innovation agenda.
We are women scientists and we are members of diverse racial, ethnic, and religious groups. We are immigrants. We are people with disabilities. We are LGBTQIA. To represent us, you must embrace our inherent diversity. We will continue to contribute to the American scientific enterprise, and we invite you to do the same.
Here are eight concrete ways you can promote women and our contributions to society and to science:
These ideas and policies not only support women scientists, but they also help the US continue to lead in scientific innovation.
Science is nonpartisan. We either thrive together or we fail together. American innovation and advancement over the next four years depends on your support. Our ability to use scientific and technological knowledge will increasingly determine our well-being and quality of life. We hope you will choose to engage positively with the scientific enterprise, advance policies that are based on evidence, and work collaboratively with the experts that provide this tremendous and necessary service for our nation. Many of these experts are women and scientific discovery and problem-solving is impossible without our contributions. Only when all our rights are protected will we be free to discover, create, innovate, and build for our great nation.
Sincerely, | https://www.forbes.com/sites/thelabbench/2017/01/22/an-open-letter-to-president-trump-from-500-women-scientists/ |
Species with extensive ranges experience highly variable environments with respect to temperature, light and soil moisture. Synchronizing the transition from vegetative to floral growth is important to exploit favorable conditions for reproduction. Optimal timing of this transition might differ between semelparous annual plants and iteroparous perennial plants. We studied variation in the critical photoperiod necessary for floral induction and the requirement for a period of cold-chilling (vernalization) in 46 populations of annuals and perennials in the Mimulus guttatus species complex. We then examined critical photoperiod and vernalization QTLs in growth chambers using F2 progeny from annual and perennial parents that differed in their requirements for flowering. We identify extensive variation in critical photoperiod, with most annual populations requiring substantially shorter day lengths to initiate flowering than perennial populations. We discover a novel type of vernalization requirement in perennial populations that is contingent on plants experiencing short days first. QTL analyses identify two large-effect QTLs that influence critical photoperiod. In two separate vernalization experiments we discover that each set of crosses contains different large-effect QTLs for vernalization. Mimulus guttatus harbors extensive variation in critical photoperiod and vernalization that may be a consequence of local adaptation.
Chuang Tzu took Lao Tzu's mystical teachings and illuminated their value in everyday life. His teaching encouraged the foundation of modern Taoism and stimulated the development of Zen Buddhism. Chuang Tzu's teachings, however, are still challenging to read and to understand. Teachings of Chuang Tzu therefore presents Chuang Tzu's treasures and makes their understanding complete. An invaluable guide for spiritual practitioners and lovers of eternal truth.
Discourse around the climate crisis tends to focus on the weather-related effects, such as rising sea-levels and intense hydrological incidents such as flooding and droughts, as well as the direct impact on human lives, like famines, forced migration and geopolitical shifts. Less has been said about the impact the climate crisis could have on human conflict and the implications it could present for the future.
Cornell University professor Gary Evans explored this proposition and found a link between the climate crisis and large-scale social behaviour. He identified rising temperatures, increased frequency and severity of droughts, flooding and storms, and air pollution as the main drivers of climate change-related societal disruption.
Evans categorised these impacts into three groups, namely heat, weather disasters and air pollution. This is how the fate of climate and society has intertwined:
- Temperature, Mental Health And Quality of Life
An 11-year analysis of all deaths in the United Kingdom, beginning in 1993, concluded that when temperatures exceeded 18°C, there was a 3.8% increase in the relative risk of suicide for each 1°C increase. Indeed, when ambient temperatures rise well above mean levels, mental health admissions to hospitals increase. However, while temperature is associated with mental health and quality of life, the direct association of rising temperatures with mental health is stymied by the complexity of suicide cases.
A panel study examining 67 countries concluded that the warmer the coldest month of the year, the happier the country and the warmer the hottest month of the year, the less happy the country. The study included variables such as economic indicators, sociocultural factors and life expectancy to rule out alternative explanations for the differences in happiness.
Furthermore, the study used projected changes in temperature to predict happiness levels over 30 and 60 years and found that as temperatures increase, countries at higher latitudes may become happier, while tropical and subtropical countries may become unhappier.
- Social Interaction, Crime and Conflict
As the climate crisis intensifies, an increase in crime could be seen, particularly at lower latitudes. A study found that, given existing US data on assaults, murders and annual temperatures in a set of 50 US cities over a 48-year period, an average annual temperature increase of 2°F in the US would result in a staggering 24,000 additional murders or assaults each year.
Studies looking at fluctuations in temperature in the same populations over time show increased intergroup conflict, especially in low-income, agriculturally dependent regions. For example, increased temperatures result in reduced rainfall, damaging crop yields and leading to economic distress and resource scarcity. Additionally, economic pressure caused by insufficient infrastructure and unemployment may exacerbate climate-related migration.
The climate crisis may strengthen authoritarian trends globally, as discussed in a study published in the Journal of Environmental Psychology. Increased authoritarianism is directly linked to an increased perceived threat level; that is, situations that are troubling or distressing to an individual, which may result in populations becoming more polarised and discriminatory towards minorities and those at the margin of society.
Eritrea is one such example. 5,000 refugees flee from its borders every month. Not incidentally, it is one of Africa's most food-insecure nations and a one-party state with one of the worst human-rights records in the world.
- Armed Conflicts
Armed conflicts over a 30-year period were coincident nearly 10% of the time with major heat waves or droughts and in countries with a high degree of ethnic fragmentation, the incidence was 23%. According to an article published in PNAS, this has far-reaching implications as countries vulnerable to climate change are set to suffer disproportionately from rising temperatures. The most fragile states often couple an economy of basic subsistence with deep ethnic divides. Middle Eastern countries with quarreling ethnic groups, for example Syria and Afghanistan, both experienced prolonged droughts that ravaged agricultural output at crucial moments in their recent history. The Pentagon also found a causal link between the climate crisis and human conflicts (for example, the ongoing Syrian conflict), but only when other conditions and factors such as drought severity and the pre-existing likelihood of conflict were present at a high enough level to ignite armed conflict.
Overall, Evans’s review indicates that behavioural changes stemming from rising temperatures will have mostly negative consequences and that without effective intervention, humans will become more violent and mental health will suffer. | https://earth.org/tag/heat/ |
In the SCAPE Project, the memory institutions are working on practical application scenarios for the tools and solutions developed within the project. One of these application scenarios is the migration of a large image collection from one format to another.
There are many reasons why such a scenario may be of relevance in a digital library. On the one hand, conversion from an uncompressed to a compressed file format can significantly decrease storage costs. On the other hand, particularly from a long-term perspective, file formats may be in danger of becoming obsolete, which means that institutions must be able to undo the conversion and return to the original file format. In this case a quality assured process is essential to allow for reconstruction of the original file instances and especially to determine when deletion of original uncompressed files is needed – this is the only way to realize the advantage of reducing storage costs. Based on these assumptions we have developed the following use case: Uncompressed TIFF image files are converted into compressed JPEG2000 files; the quality of the converted file is assured by applying a pixel for pixel comparison between the original and the converted image.
For this, a sequential Taverna concept workflow was first developed, which was then modelled into a scalable procedure using different tools developed in the SCAPE Project.
The Taverna Concept Workflow
The workflow input is a text file containing paths to the TIFF files to be converted. This text file is then transformed into a list that allows the sequential conversion of each file, hence simulating a non-scalable process. Before the actual migration commences, the validity of the TIFF file is checked. This step is realized using FITS – a wrapper that applies different tools to extract the identification information of a file. Since the output of FITS is an XML-based validation report, an XPath service extracts and checks the validity information. If the file is valid, migration from TIFF to JPEG2000 can begin. The tool used in this step is OpenJPEG 2.0. In order to verify the output, Jpylyzer – a validator as well as feature extractor for JPEG2000 images created within the SCAPE Project – is employed. Again, an XPath service is used to extract the validity information. This step concludes the file format conversion itself, but in order to ensure that the migrated file is indeed a valid surrogate, the file is reconverted into a TIFF file, again using OpenJPEG 2.0. Finally, in a last step, the reconverted and the original TIFF files are compared pixel for pixel using Linux-based ImageMagick. Only through the successful execution of this final step can validity, as well as the possibility of a complete reconversion, be assured.
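Outside Taverna, the same sequential QA chain can be sketched as a plain script. The tool invocations below (FITS, OpenJPEG's `opj_compress`/`opj_decompress`, jpylyzer, ImageMagick's `compare`) mirror the steps just described, but the exact command-line flags and the report structure parsed here are illustrative assumptions rather than verified interfaces of specific tool versions.

```python
import subprocess
import xml.etree.ElementTree as ET

def report_says_valid(xml_text):
    """Extract the validity flag from a FITS/jpylyzer-style XML report.

    Looks for any element whose tag starts with 'isValid' (namespace-agnostic)
    and interprets its text as a boolean. This stands in for the XPath
    extraction step of the workflow; real reports may nest the flag differently.
    """
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        tag = elem.tag.split('}')[-1]          # strip XML namespace, if any
        if tag.lower().startswith('isvalid'):
            return (elem.text or '').strip().lower() == 'true'
    return False

def migrate_one(tiff_path, jp2_path, roundtrip_path):
    """Run the five QA steps for a single image, sequentially (illustrative flags)."""
    fits_xml = subprocess.run(['fits.sh', '-i', tiff_path],
                              capture_output=True, text=True).stdout
    if not report_says_valid(fits_xml):
        return False                           # invalid TIFF: skip migration
    subprocess.run(['opj_compress', '-i', tiff_path, '-o', jp2_path], check=True)
    jpylyzer_xml = subprocess.run(['jpylyzer', jp2_path],
                                  capture_output=True, text=True).stdout
    if not report_says_valid(jpylyzer_xml):
        return False                           # migration produced a bad JP2
    subprocess.run(['opj_decompress', '-i', jp2_path, '-o', roundtrip_path],
                   check=True)
    # Pixel-for-pixel comparison; a zero exit code means identical images.
    cmp = subprocess.run(['compare', '-metric', 'AE',
                          tiff_path, roundtrip_path, 'null:'])
    return cmp.returncode == 0
```

Only when `migrate_one` returns True for a file can its uncompressed original safely be considered for deletion, which is the point of the quality-assured process described above.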
Figure 1 (above): Taverna concept workflow
In order to identify how much time was consumed by each element of this workflow, we ran a test consisting of the migration of 1,000 files. Executing the described workflow on the 1,000 image files took about 13 hours and five minutes. Rather unsurprisingly, conversion and reconversion of the files took the longest: the conversion to JPEG2000 took 313 minutes and the reconversion 322 minutes. FITS validation needed 70 minutes and the pixel-wise comparison was finished in 62 minutes. The SCAPE-developed tool Jpylyzer required only 18 minutes and was thus much faster than the above-mentioned steps.
Figure 2 (above): execution times of each of the concept workflows' steps
Making the Workflow Scale
The foundation for the scalability of the described use case is a Hadoop cluster containing five Data Nodes and one Name Node (specification: see below). Besides having economic advantages – Hadoop runs on commodity hardware – it also bears the advantage of being designed for failure, hence reducing the problems associated with hardware crashes.
The distribution of tasks for each core is implemented via MapReduce jobs. A Map job splits the handling of a file. For example, if a large text file is to be processed, a Map job divides the file into several parts. Each part is then processed on a different node. Hadoop Reduce jobs then aggregate the outputs of the processing nodes back into a single result.
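The split-process-aggregate idea can be illustrated in miniature with plain Python (no Hadoop involved): each "mapper" handles one chunk of the input independently, and a "reducer" merges the partial results, just as Hadoop merges node outputs into a single result. The word-count task here is only a stand-in example, not part of the SCAPE workflow.

```python
from collections import Counter
from functools import reduce

def map_chunk(lines):
    """Mapper: count words in one chunk, independently of all other chunks."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(a, b):
    """Reducer: merge two partial counts into one."""
    return a + b

text = ["tiff jp2 tiff", "jp2 jp2"]
chunks = [text[:1], text[1:]]                # Hadoop would split the file for us
partials = [map_chunk(c) for c in chunks]    # each chunk could run on its own node
total = reduce(reduce_counts, partials)      # the Reduce phase merges the parts
print(total["jp2"])   # 3
```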
But writing MapReduce jobs is a complex matter. For this reason, Apache Pig is used. Pig was built for Hadoop and translates a set of commands written in a language called “Pig Latin” into MapReduce jobs, thus making the handling of MapReduce jobs much easier – or, as Professor Jimmy Lin described the powerful tool during the ‘Hadoop-driven digital preservation Hackathon’ in Vienna, easy enough “… for lazy pigs aiming for hassle-free MapReduce.”
Hadoop HDFS, Hadoop MapReduce and Apache Pig make up the foundation of the scalability on which the SCAPE tools ToMaR and XPath Service are based. ToMaR wraps command line tasks for parallel execution as Hadoop MapReduce jobs. These are in our case the execution of FITS, OpenJPEG 2.0, Jpylyzer and ImageMagick. As a result, the simultaneous execution of these tools on several nodes is possible. This has a great impact on execution times as Figure 3 (below) shows.
The blue line represents the non-scalable Taverna workflow. It is clearly observable how the time needed for file migration increases in proportion to the number of files that are converted. The scalable workflow, represented by the red line, shows a much smaller increase in time needed, thus suggesting that scalability has been achieved. This means that, by choosing the appropriate size for the cluster, it is possible to migrate a certain number of image files within a given time frame.
Figure 3 (above): Wallclock times of concept workflow and scalable workflow
Below is the specification of the Hadoop cluster, where the master node runs the jobtracker and namenode/secondary namenode daemons, and the worker nodes each run a tasktracker and a datanode daemon.
Master node: Dell Poweredge R510
- CPU: 2 x Xeon [email protected]
- Quadcore CPU (16 HyperThreading cores)
- RAM: 24GB
- NIC: 2 x GBit Ethernet (1 used)
- DISK: 3 x 1TB DISKs; configured as RAID5 (redundancy); 2TB effective disk space
Worker nodes: Dell Poweredge R310
- CPU: 1 x Xeon [email protected]
- Quadcore CPU (8 HyperThreading cores)
- RAM: 16GB
- NIC: 2 x GBit Ethernet (1 used)
- DISK: 2 x 1TB DISKs; configured as RAID0 (performance); 2TB effective disk space
However, the throughput we can reach using this cluster and Pig/Hadoop job configuration is limited: as Figure 4 shows, the throughput (measured in Gigabytes per hour – GB/h) grows rapidly as the number of files being processed increases, and then stabilises at slightly more than 90 GB/h when processing more than 750 image files.
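Once the throughput plateau is known, sizing a job against a deadline is simple arithmetic. The collection size below is a hypothetical figure, not one from the experiment:

```python
# Observed throughput plateau of the 5-node cluster (see Figure 4)
throughput_gb_per_h = 90
# Hypothetical collection size -- not a figure from the experiment
collection_size_gb = 4500
hours_needed = collection_size_gb / throughput_gb_per_h
print(hours_needed)   # 50.0 hours on this cluster
```

If throughput scales roughly with the number of worker nodes – a plausible but unverified assumption – then enlarging the cluster shortens this proportionally, which is the "tailor the cluster to the time frame" argument made below.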
Figure 4 (above): Throughput of the distributed execution measured in Gigabytes per hour (GB/h) against the number of files processed
As our use case shows, by using a variety of tools developed in the SCAPE Project together with the Hadoop framework, it is possible to distribute the processing across various machines, thus enabling the scalability of large-scale image migration and significantly reducing the time needed for data processing. In addition, the size of the cluster can be tailored to fit the size of the job so that it can be completed within a given time frame.
Apart from the authors of this blog post, the following SCAPE Project partners contributed to this experiment:
- Alan Akbik, Technical University of Berlin
- Matthias Rella, Austrian Institute of Technology
- Rainer Schmidt, Austrian Institute of Technology
By shsdev, posted in shsdev's Blog
Why study mental health?
Wellcome conducted the first Global Monitor – the largest-ever study of public attitudes to science and health – in 2018. The first wave covered topics such as whether people trust science, scientists and information about health, and attitudes towards the safety and efficacy of vaccines – a focus which has since proved to be incredibly forward-thinking.
In 2020, a central focus of the Global Monitor was the role of science in mental health.
Mental health problems are holding back people of all ages in all parts of the world. The two most common mental health problems, anxiety and depression, affect over 400 million people worldwide. And by 2030, mental health issues are predicted to be the leading cause of global mortality and morbidity (1). Yet progress towards improving mental health around the world is lagging behind other areas of health.
In 2020, Wellcome launched its commitment to prioritise funding science that would help address mental health problems, with an initial focus on anxiety and depression in youth, to advance its vision of a world in which no one is held back by mental health problems.
By focusing on mental health – specifically, anxiety and depression – as part of the 2020 Wellcome Global Monitor report, Wellcome is seeking to help illuminate how the world views mental health science and to share insights into what scientists need to prioritise globally if new solutions are to be found.
Importantly, world views on health, mental health and science were in flux when the data were collected due to the pandemic. It is impossible to say how much or in what ways COVID-19 may have impacted the results, given that 2020 was the first time mental health-related questions were asked as part of the Global Monitor. Some findings, such as people’s likelihood of reporting spending time outdoors in response to anxiety or depression, may be particularly sensitive to the restrictions imposed during lockdowns in many places.
However, as questions about specific experiences were framed historically, we believe that the results reflect people’s long-term attitudes and experiences. For example, respondents were asked whether they had ever experienced anxiety or depression and what approaches to feeling better they had used at that time. Nonetheless, it is possible that the pandemic increased people’s likelihood of saying they have experienced anxiety or depression.
Finally, the mental health questions on which this report is based were not the only questions included in the 2020 Global Monitor. Additional question sets in the survey update results from the 2018 Monitor on public views of science and health, including opinions about trust in the scientific and healthcare communities.
The 2020 Monitor also included several questions on public perceptions of climate change and the COVID-19 pandemic, which will be explored in a future report.
We hope the 2020 Wellcome Global Monitor provides some interesting insights and sparks new conversations. The data are freely available, and we encourage people to explore them and hypothesise as they see fit.
To access the datasets and tables that contain the mental health results by country and demographic group, visit: https://wellcome.org/reports/wellcome-global-monitor-mental-health/2020. | https://wellcome.org/reports/wellcome-global-monitor-mental-health/2020/introduction |
The Ancient Theatre of Dionysus in Athens, Attica: The Theatre of Dionysus is regarded as the first sample of Greek theatres and the birthplace of Greek drama. It was built into a natural hollow on the southern slopes of the Acropolis and is considered the first theatre in the world. This ancient theatre was dedicated to Dionysus, the god of wine-making and ecstasy, whose festivals were the driving force behind the development of Greek theatre. Probably established in the late 6th century BC, the theatre has been reconstructed many times since then.
During the 5th century BC, the theatre was first used as a regular site for theatrical performances of plays written by the great tragic poets, such as Aeschylus, Euripides and Sophocles. The theatre was a place to honor the god Dionysus, and the plays were performed as part of these celebrations. In fact, a large statue of the god was placed in the front row so that the god himself could watch the plays and the sacrifices in his name.
Many disputes have arisen concerning the original structure of the theatre. Apparently the biggest part of the theatre was originally made of wood, but it was later rebuilt in stone. In 330 B.C. stone seats were added that could host up to 17,000 people. The stage was reconstructed over the centuries, and most of the ruins that we see today date from Roman times. In its final form, the lower part had 13 sections separated by steps, with 32 rows of seats covering the perimeter of the orchestra. The upper part had another 32 rows of seats covering only the centre. Later on, a third part was added.
Today, only 20 of these sections have been preserved. The inscriptions on some of the thrones reveal that they belonged to elected rulers, while the other seats were intended for citizens. The most impressive seat, however, bore the inscription "Priest of Dionysus Eleftherius" and was carved with bunches of grapes. In recent years, significant efforts have begun to renovate the ancient theatre of Dionysus so that it can host theatre performances again.
Leucine is a dietary amino acid with the capability to directly stimulate myofibrillar muscle protein synthesis. This effect of leucine arises from its role as an activator of the mechanistic target of rapamycin (mTOR), a serine-threonine protein kinase that regulates protein biosynthesis and cell growth. The activation of mTOR by leucine is mediated through Rag GTPases, leucine binding to leucyl-tRNA synthetase, leucine binding to sestrin 2, and possibly other mechanisms.
Metabolism in humans
Leucine metabolism takes place in numerous tissues in the human body; however, most dietary leucine is metabolized within the liver, adipose tissue, and muscle tissue. Adipose and muscle tissue use leucine in the formation of sterols and other compounds. Combined leucine usage in these two tissues is seven times greater than in the liver.
In healthy people, around 60% of dietary l-leucine is metabolized after several hours, with approximately 5% (2–10% range) of dietary l-leucine being converted to β-hydroxy β-methylbutyric acid (HMB). Around 40% of dietary l-leucine is converted to acetyl-CoA, which is subsequently used in the synthesis of other compounds.
The large majority of l-leucine metabolism is initially catalyzed by the branched-chain amino acid aminotransferase enzyme, producing α-ketoisocaproate (α-KIC). α-KIC is mostly metabolized by the mitochondrial enzyme branched-chain α-ketoacid dehydrogenase, which converts it to isovaleryl-CoA. Isovaleryl-CoA is subsequently metabolized by isovaleryl-CoA dehydrogenase and converted to MC-CoA, which is used in the synthesis of acetyl-CoA and other compounds. During biotin deficiency, HMB can be synthesized from MC-CoA via enoyl-CoA hydratase and an unknown thioesterase enzyme, which convert MC-CoA into HMB-CoA and HMB-CoA into HMB, respectively. A relatively small amount of α-KIC is metabolized in the liver by the cytosolic enzyme 4-hydroxyphenylpyruvate dioxygenase (KIC dioxygenase), which converts α-KIC to HMB. In healthy people, this minor pathway – which involves the conversion of l-leucine to α-KIC and then HMB – is the predominant route of HMB synthesis.
A small fraction of l-leucine metabolism – less than 5% in all tissues except the testes, where it accounts for about 33% – is initially catalyzed by leucine aminomutase, producing β-leucine, which is subsequently metabolized into β-ketoisocaproate (β-KIC), β-ketoisocaproyl-CoA, and then acetyl-CoA by a series of uncharacterized enzymes.
The metabolism of HMB is catalyzed by an uncharacterized enzyme which converts it to β-hydroxy β-methylbutyryl-CoA (HMB-CoA). HMB-CoA is metabolized by either enoyl-CoA hydratase or another uncharacterized enzyme, producing β-methylcrotonyl-CoA (MC-CoA) or hydroxymethylglutaryl-CoA (HMG-CoA), respectively. MC-CoA is then converted by the enzyme methylcrotonyl-CoA carboxylase to methylglutaconyl-CoA (MG-CoA), which is subsequently converted to HMG-CoA by methylglutaconyl-CoA hydratase. HMG-CoA is then cleaved into acetyl-CoA and acetoacetate by HMG-CoA lyase or used in the production of cholesterol via the mevalonate pathway.
Synthesis in non-human organisms
Leucine is an essential amino acid in the diet of animals because they lack the complete enzymatic pathway to synthesize it de novo from precursor compounds. Consequently, they must ingest it, typically as a component of proteins. Plants and microorganisms synthesize leucine from pyruvic acid with a series of enzymes:
- Acetolactate synthase
- Acetohydroxy acid isomeroreductase
- Dihydroxyacid dehydratase
- α-Isopropylmalate synthase
- α-Isopropylmalate isomerase
- Leucine aminotransferase
Synthesis of valine, a small hydrophobic amino acid, also involves the initial part of this pathway.
Mechanism of action
These essential amino acids – leucine, isoleucine and valine – are known as the branched-chain amino acids (BCAAs). Because this arrangement of carbon atoms cannot be made by humans, these amino acids are an essential element of the diet. The catabolism of all three compounds begins in muscle and yields NADH and FADH2, which can be used for ATP generation. The catabolism of all three amino acids uses the same enzymes in the first two steps. The first step in each case is a transamination using a single BCAA aminotransferase, with α-ketoglutarate as amine acceptor. As a result, three different α-keto acids are produced and are oxidized using a common branched-chain α-keto acid dehydrogenase, yielding three different CoA derivatives. Subsequently the metabolic pathways diverge, producing many intermediates. The principal product from valine is propionyl-CoA, the glucogenic precursor of succinyl-CoA. Isoleucine catabolism terminates with production of acetyl-CoA and propionyl-CoA; thus isoleucine is both glucogenic and ketogenic. Leucine gives rise to acetyl-CoA and acetoacetyl-CoA, and is therefore classified as strictly ketogenic.

There are a number of genetic diseases related to defective catabolism of the BCAAs. The most common defect is in the branched-chain α-keto acid dehydrogenase. Since there is only one dehydrogenase enzyme for all three amino acids, all three α-keto acids accumulate and are excreted in the urine. The condition is called maple syrup urine disease because of the characteristic odor of the urine in afflicted people. Mental retardation in these cases is extensive. Unfortunately, since these are essential amino acids, they cannot be heavily restricted in the diet; ultimately, the life of afflicted individuals is short and development is abnormal. The main neurological problems are due to poor formation of myelin in the CNS.
Foods with leucine
Getting your leucine and other BCAAs from food is best for most people. The Food and Drug Administration does not regulate supplements, so they may not contain exactly what they claim to. They can have side effects or interact with other medications. Dietary sources are mostly safe, inexpensive, and good-tasting.
Nutrition labels for food do not list the individual amino acids, so most people should simply make sure they are getting enough protein. Adults need about 7 grams (g) of protein per 20 pounds of body weight, so a person weighing 140 pounds would need 49 g.
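The 7 g per 20 lb rule of thumb is a one-line calculation:

```python
def protein_needed_g(body_weight_lb):
    """Approximate daily protein need: about 7 g per 20 lb of body weight."""
    return body_weight_lb / 20 * 7

print(protein_needed_g(140))  # 49.0 g, matching the example above
```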
Both plant and animal foods can fulfill your protein needs. Animal foods were once considered superior for protein because they contain all the essential amino acids.
Dietitians now say that it is not necessary to consume all the essential amino acids at one time. Instead, they can be spread over the course of a day, making it much easier for people who are vegan or vegetarian to meet the recommendations for protein.
There are many dietary sources of leucine and other BCAAs. Consider these healthy sources of amino acids:
Salmon
Get your amino acids from salmon, and you’ll also get omega-3 fats. There are some health concerns about farmed salmon, so choose wild-caught or limit your portions each month.
Chickpeas
These nutritional superstars contain 7 g of protein and 6 g of fiber in just half a cup, and they are high in iron, too. Enjoy them as hummus or add them to soups, stews, curries, and salads.
Brown rice
Try brown rice instead of white. You’ll get a nutty taste and a slightly chewy texture that many people enjoy.
Eggs
Even the American Heart Association says that an egg a day is fine. You’ll get 6 g of protein in that egg.
Soybeans
This versatile legume is available in a variety of forms, including tofu, tempeh, edamame, and roasted soybeans. Today, texturized soy protein is readily available in grocery stores. It can substitute for meat in many dishes.
Nuts
Almonds, Brazil nuts, and cashews are excellent sources of essential amino acids. So are peanuts, although they are technically legumes rather than nuts.
Beef
Beef is one of the best sources of amino acids. To lower your intake of fats and cholesterol, choose a lean cut or try grass-fed beef.
Advantages
- Builds Muscle
- Prevents Muscle Loss
- Enhances Performance
- Aids in Fat Loss
- Promotes Muscle Recovery
- Supports Blood Sugar
Builds muscle
L-leucine is a popular supplement among bodybuilders and athletes due to its powerful effects on muscle gain. As one of the essential amino acids involved in muscle synthesis, it may help trigger muscle building and so enhance the results of your workouts.
However, research has turned up mixed results on the potential effects of this amino acid. One long-term study out of France, for example, found that leucine was much more effective at promoting muscle growth and boosting performance when it was combined with other amino acids rather than consumed alone. Including a good variety of protein foods in your diet can help maximize the effects of leucine by providing a wide array of amino acids and essential nutrients to sustain muscle growth.
Prevents muscle loss
As you age, many changes take place in your body. Sarcopenia, the gradual degeneration of skeletal muscle, is one of the most notable effects of advanced age. This condition can cause weakness and decreased endurance, leading to a decline in physical activity.
Leucine is thought to help slow muscle degeneration and thereby reduce the effects of aging. One study carried out at the University of Texas Medical Branch’s Department of Internal Medicine and published in Clinical Nutrition showed that it helped improve muscle synthesis in older adults consuming the recommended amount of protein per meal. Another human study, conducted in France and referenced above, had similar findings, reporting that leucine supplementation was also able to limit the weight loss caused by malnutrition in elderly individuals.
Enhances performance
In addition to using leucine for bodybuilding, professional and amateur athletes alike often turn to this essential amino acid in hopes of bumping their physical performance up to the next level.
One study conducted at the Institute of Sport and Exercise Science at James Cook University in Australia and published in the European Journal of Applied Physiology reported that taking leucine supplementation for six weeks significantly improved both endurance and upper-body power in competitive canoeists. Similarly, another study, published in the European Journal of Clinical Nutrition in 2016, showed that leucine supplementation enhanced lean tissue mass and improved functional performance in older adults.
Aids in fat loss
If you’re looking to build muscle and at the same time shed some extra body fat, leucine may be just what you need. In fact, numerous studies have found that it can have some powerful effects when it comes to fat loss.
An animal model out of the University of São Paulo’s Department of Food Science and Experimental Nutrition in Brazil showed that supplementing rats with a low dose of leucine for a six-week period led to increased weight loss compared to a control group. According to a 2015 review in Nutrients, this amino acid has also been shown to decrease fat accumulation during aging and prevent the development of diet-related obesity.
Promotes muscle recovery
Cramps and aching muscles are bothersome problems that many people face after hitting the gym. Following an especially intense workout, these muscle aches can sometimes even be enough to keep you out of the gym for a few days, completely throwing off your schedule and delaying your fitness goals.
Studies have found some promising results on the potential role of leucine in muscle recovery. A review from the Department of Food Science and Human Nutrition at the University of Illinois reported that consuming leucine right after working out can help promote muscle recovery and muscle protein synthesis. Another study, carried out at the School of Sport and Exercise and the Institute of Food, Nutrition and Human Health at Massey University in New Zealand, showed that supplementation with this amino acid improved recovery and boosted high-intensity endurance performance in male cyclists training on successive days.
Supports blood glucose
Hyperglycemia, or high blood sugar, can wreak havoc on your health. In the short term, high blood sugar can cause symptoms like fatigue, unintentional weight loss and increased thirst. Left uncontrolled for longer, high blood glucose can have even more serious consequences, including nerve damage, kidney problems and a higher risk of skin infections.
Some research suggests that leucine may be able to help maintain normal blood sugar levels. A human study out of the VA Medical Center’s Endocrine, Metabolism and Nutrition Section in Minneapolis, published in Metabolism, showed that leucine taken together with glucose helped stimulate insulin secretion and reduce blood glucose levels in participants. A 2014 in vitro study out of China also showed that leucine was able to facilitate insulin signaling and glucose uptake to help keep blood sugar levels in check.
Leucine side effects and risks
You may experience leucine side effects from a supplement, which is one reason it’s usually best to get your nutrients from whole foods.
According to the University of Rochester Medical Center, taking leucine supplements can have a variety of unwanted effects.
- Negative nitrogen balance: A single amino acid supplement may cause a negative nitrogen balance, which can reduce how efficiently your metabolism works and force your kidneys to work harder.
- Hypoglycemia: Very high doses of leucine may cause low blood sugar.
- Pellagra: Very high doses of leucine can also cause pellagra, symptoms of which include hair loss, gastrointestinal problems and skin lesions.
In general, supplements should not replace healthy, complete meals, and it is important to eat a variety of foods, per the U.S. Food & Drug Administration. Combining supplements, using supplements with medicines, or taking too many supplements can cause harmful effects. Your healthcare professional can help you decide whether you need leucine supplements and guide you in striking a healthy balance between the foods and nutrients you need.
Leucine deficiency
Leucine deficiency leads to impaired functioning of the muscles and the liver, and the body experiences severe fatigue. Leucine deficiency may cause particular signs, including:
- Fatigue
- Poor muscle gain
- Poor wound recovery
- Weight gain
Leucine deficiency is common in people who suffer from eating disorders like bulimia and anorexia. An unbalanced diet can also lead to leucine deficiency – for example, one based on too much fast food and not enough protein. In addition, people who are under pressure and emotional stress due to long working hours may need more leucine; such lifestyle problems can also lead to a deficiency.
Studies suggest that intensive aerobic activity and strength training may increase the daily requirement for leucine. There are recommendations to increase the currently advised intake of leucine from 14 mg/kg of body weight per day to 45 mg/kg of body weight in sedentary adults, and higher still for people who engage in intensive exercise and strength training, for better protein synthesis. Otherwise, it affects their muscle strength and performance. In addition, people with liver conditions are prone to leucine deficiency. Therefore, people in these categories need higher levels of leucine. To sum it up, leucine helps repair tissues, heal wounds, build and repair muscle, and prevent muscle loss.
Illness caused by defective leucine metabolism
Maple syrup urine disease (MSUD) is a rare genetic disorder characterized by deficiency of an enzyme complex (branched-chain alpha-keto acid dehydrogenase) that is required to break down (metabolize) the three branched-chain amino acids (BCAAs) – leucine, isoleucine and valine – in the body. The result of this metabolic failure is that all three BCAAs, together with a number of their toxic by-products (specifically, their respective organic acids), accumulate abnormally. In the classic, severe form of MSUD, plasma concentrations of the BCAAs begin to rise within a few hours of birth. If untreated, symptoms begin to emerge, typically within the first 24-48 hours of life.
The presentation begins with non-specific symptoms of increasing neurological dysfunction, including lethargy, irritability and poor feeding, quickly followed by focal neurological signs such as abnormal movements and increasing spasticity, and shortly afterwards by seizures and deepening coma. If untreated, progressive mental retardation is inevitable and death usually occurs within weeks or months. The only specific finding that is unique to MSUD is the development of a characteristic odor, reminiscent of maple syrup, that can most easily be detected in the urine and earwax and may be smelled within a day or two of birth. The toxicity is the result of harmful effects of leucine on the brain, accompanied by severe ketoacidosis caused by the accumulation of the three branched-chain ketoacids (BCKAs).
The disorder can be effectively managed through a specialized diet in which the three BCAAs are carefully controlled. Nevertheless, even with treatment, patients of any age with MSUD remain at high risk of developing acute metabolic decompensation (metabolic crises), typically triggered by infection, injury, failure to eat (fasting) or even psychological stress. During these episodes there is a rapid, sudden rise in amino acid levels requiring immediate medical intervention.
There are three, or possibly four, types of MSUD: the classic type, the intermediate type, the intermittent type, and possibly a thiamine-responsive type. Each subtype of MSUD has a different level of residual enzyme activity, which accounts for the variable severity and age of onset. All types are inherited in an autosomal recessive pattern.
Leucine dose
Leucine dosage is a debated topic. Intake of 2.5 grams of leucine has been shown to increase muscle protein synthesis (MPS). Some researchers advise a total intake of 10 grams of leucine daily, divided across meals.
One common method is to consume leucine as an intra-workout supplement in the form of BCAAs: 5 grams can be taken during the workout and the rest within 30 minutes of exercise, for a total of 10 grams. Note, however, that if whey is your post-workout shake, it already has high levels of leucine (100 grams of whey contains about 10 grams of leucine). Leucine should be part of every meal, and ideally every meal should include a minimum of 2.5 grams of leucine.
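The figures above can be combined into a rough intake estimate. This is a sketch only: the 45 mg/kg figure is the proposed intake for active adults mentioned earlier, and whey is assumed to be roughly 10% leucine per the 100 g → 10 g ratio quoted above.

```python
def daily_leucine_g(body_weight_kg, mg_per_kg=45):
    """Daily leucine at the proposed 45 mg/kg level, in grams."""
    return body_weight_kg * mg_per_kg / 1000

def leucine_from_whey_g(whey_g):
    """Approximate leucine in whey, assuming ~10 g per 100 g of whey."""
    return whey_g * 10 / 100

print(daily_leucine_g(70))      # 3.15 g for a 70 kg adult
print(leucine_from_whey_g(30))  # 3.0 g in a 30 g whey serving
```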
Herb-drug interactions
- Insulin and other antidiabetic medications: Leucine can stimulate insulin secretion and may have additive hypoglycemic effects.
- Vitamins B3 and B6: Leucine can inhibit the synthesis of these vitamins.
- PDE5 inhibitors (sildenafil): Animal models suggest leucine may have synergistic effects. Clinical relevance is not known.
Special precautions and warnings
- Pregnancy and breast-feeding: There is not enough reliable information about the safety of taking branched-chain amino acids if you are pregnant or breast-feeding. Stay on the safe side and avoid use.
- Children: Branched-chain amino acids are possibly safe for children when taken by mouth, short-term. Branched-chain amino acids have been used safely in children for up to 6 months.
- Amyotrophic lateral sclerosis (ALS, Lou Gehrig’s disease): The use of branched-chain amino acids has been associated with lung failure and higher death rates in patients with ALS. If you have ALS, do not use branched-chain amino acids until more is known.
- Branched-chain ketoaciduria: Seizures and severe mental and physical retardation can result if intake of branched-chain amino acids is increased. Do not use branched-chain amino acids if you have this condition.
- Chronic alcoholism: Dietary use of branched-chain amino acids in alcoholics has been associated with liver disease resulting in brain damage (hepatic encephalopathy).
- Low blood sugar in infants: Intake of one of the branched-chain amino acids, leucine, has been reported to lower blood sugar in infants with a condition called idiopathic hypoglycemia. This term means they have low blood sugar, but the cause is unknown. Some research suggests leucine causes the pancreas to release insulin, and this lowers blood glucose.
- Surgery: Branched-chain amino acids may affect blood sugar levels, and this might interfere with blood sugar control during and after surgery. Stop using branched-chain amino acids at least 2 weeks before a scheduled surgery.
Conclusion
Very high concentrations of leucine can stimulate protein synthesis and inhibit protein degradation in skeletal muscle of intact rats. This effect on protein synthesis may be enhanced by the transient but small increase in serum insulin that is induced by the leucine dose. However, within the normal physiological concentration range of leucine and insulin in food-deprived and fed rats, the sensitivity of muscle protein synthesis to insulin is enhanced by infusion of leucine, so that protein synthesis is stimulated by the moderately raised concentrations of insulin and leucine that are typical of the fed rat. The physiological role of leucine is therefore to work with insulin to trigger the switch that promotes muscle protein synthesis when amino acids and energy from food become available. The benefit of this mode of regulation is that the switch requires both amino acids (leucine) and energy (insulin) to be present simultaneously, and so is only activated when conditions are right.
Amplitude and phase dynamics in oscillators with distributed-delay coupling.
This paper studies the effects of distributed-delay coupling on the dynamics in a system of non-identical coupled Stuart-Landau oscillators. For uniform and gamma delay distribution kernels, the conditions for amplitude death are obtained in terms of average frequency, frequency detuning and the parameters of the coupling, including coupling strength and phase, as well as the mean time delay and the width of the delay distribution. To gain further insights into the dynamics inside amplitude death regions, the eigenvalues of the corresponding characteristic equations are computed numerically. Oscillatory dynamics of the system is also investigated, using amplitude and phase representation. Various branches of phase-locked solutions are identified, and their stability is analysed for different types of delay distributions.
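For concreteness, systems of this kind are typically written in the standard Stuart-Landau form with distributed-delay coupling; the paper's exact notation may differ, so the following is only the generic model:

```latex
\dot{z}_{1,2}(t) = \left( \lambda + i\omega_{1,2} - |z_{1,2}(t)|^2 \right) z_{1,2}(t)
  + K e^{i\theta} \left[ \int_0^\infty g(\tau)\, z_{2,1}(t-\tau)\, d\tau - z_{1,2}(t) \right],
```

where $z_{1,2}$ are the complex oscillator amplitudes, $\omega_{1,2}$ the individual frequencies (with detuning $\Delta = \omega_1 - \omega_2$ and average frequency $(\omega_1+\omega_2)/2$), $K$ and $\theta$ the coupling strength and phase, and $g(\tau)$ the delay distribution kernel (e.g. uniform or gamma), characterized by its mean delay and width.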
Through the development, implementation and evaluation of two pioneering collaborative courses for Faculty of Arts and Science Basic Medical Sciences students, we will introduce biomedical research through an innovation lens. JPM300H – Research Readiness and Advancing Biomedical Discoveries and JPM400Y – Biomedical Incubator Capstone Project will be developed by the Departments of Pharmacology & Toxicology, Biochemistry, and Physiology for Major and Specialist program students.
In JPM300H, via online modules and in-class active learning sessions, students will explore how scientists work to develop, manage and commercialize biomedical discoveries. Content will focus on the development of key research and industry skills required for future success: project and budget oversight, data integrity, and the ability to communicate to diverse audiences. Developed online modules will be shared broadly with members of the UofT community, allowing for hundreds of students beyond the course to benefit.
Students interested in honing their skills will be eligible for JPM400Y, where they will work as a collaborative interdisciplinary biomedical innovation team on a research proposal in a simulated workplace/start-up setting. Under mentor supervision, students will apply project and budget management principles and practice research, business and communication workplace skills necessary for their future careers. | https://www.leaf.provost.utoronto.ca/advancing-biomedical-discoveries-through-experiential-and-integrated-learning/ |
This highly interactive onsite course is designed to enable students to assess, develop, and apply Emotional Intelligence skills in their executive management roles, in order to become better, more effective leaders. More than a management style or a quick-fix recipe, Emotional Intelligence is the foundation of success for anyone who aspires to effective leadership and reduced interpersonal friction. Participants will learn techniques to control behavior and unproductive responses in themselves and others, to align effort with desired goals to create positive outcomes, and, when faced with adverse circumstances, to choose a response that will bring them closer to their desired goals. They will be able to exercise Emotional Intelligence to create positive outcomes in spite of negative emotions.
Offered: March 9th-12th, 2008 (St. Louis, MO.)
- Teacher: JM Support (retired) - Ken Gibson, M.S. | http://www.justicedegree.org/course/info.php?id=167 |
The invention belongs to the field of tools for tattooing eyebrows and eye shadow, and particularly discloses a tattooing needle with a defined piercing depth. The needle comprises a needle a, a needle b and a needle c arranged in sequence, with a height difference in the vertical distance along the needle-tip piercing direction: needle b is 0.1-1 mm lower than needle a, and needle c is 0.1-1 mm lower than needle b. This allows products and material substances to be delivered to a precise target layer of the skin, making the operation simpler and safer. It also effectively compensates for excess applied force within a certain range, so that the operator's working-force range is enlarged, the piercing depth remains nearly consistent, and the technique is easier for an operator to master.
Most of us in New England work in buildings with mechanical heating, cooling, and ventilation (HVAC) systems.
These systems are designed to provide comfort and quality air, free from harmful pollutants. Ventilation is more than just moving air around; it's quite complicated, involving introducing outside air into the heating and cooling stream and distributing it throughout your building along with the air already inside.
Our job at M and R Mechanical Services is to keep you comfortable at all times. We understand this complicated process and will work closely with you to ensure that your HVAC systems are functioning properly and efficiently to promote the highest levels of indoor air quality. | https://mandrmechanical.com/service/air-quality-ventilation/ |
Best Soft Sugar Cookie Recipe with Cream Cheese. Remove the top layer of parchment paper, then cut the cookies as desired. Transfer to the baking sheet, and repeat with the remaining dough. Bake the cream cheese sugar cookies for 9-11 minutes at 350 degrees F, until pale golden at the edges. In a large mixing bowl, cream together the butter, cream cheese, sugar, and vanilla until light and fluffy, about a full 3-5 minutes. Add the egg and mix until evenly combined. Alternatively, in a large bowl, combine the sugar, butter, cream cheese, salt, almond and vanilla extracts, and egg yolk; beat until smooth, then stir in flour until well blended.
Looking for recipes for cream cheese sugar cookies? Taste of Home has the best cream cheese sugar cookie recipes from real cooks, featuring reviews, ratings, how-to videos and tips. Cream cheese sugar cookies are a perfectly soft, melt-in-your-mouth cutout cookie! Cut them into any shape you please, but make sure to top them off with a generous layer of cream cheese frosting. My mom's sugar cookies are famous - for real. They're soft and cakey with the very best cream cheese frosting for cookies; it's way better than the tasteless "royal" icing that's often used to decorate sugar cookies, or the pasty pink frosting on the store-bought ones. She makes them for every holiday, and I don't know how many people have requested the recipe over the years. Although this sugar cookie recipe calls for both vanilla and almond extract, you can use all vanilla if you'd like. Many times when I substitute almond extract for vanilla extract (or add a touch of both), people will ask for the cookie recipe.
In a medium bowl, cream together the butter, shortening and sugar. Stir in the eggs and vanilla. Combine the flour, baking powder and salt; stir into the creamed mixture until the dough comes together. Roll the dough into walnut-sized balls and roll the balls in sugar. Place 1/4 cup sugar in a small bowl. Shape dough into balls of 1.5 tablespoons each (I use an ice cream scoop for that), then roll them in the sugar and place on the cookie sheets, spacing them 2 inches apart. Bake for 11-12 minutes, until the cookies just begin to brown at the edges but the center is still soft (you don't want to overbake them).
Perfectly Soft Sugar Cookie Recipe. This sugar cookie recipe is absolute perfection - a perfectly soft sugar cookie every single time. It's the perfect easy sugar cookie recipe for every occasion! | http://paketwisata.online/best-soft-sugar-cookie-recipe-with-cream-cheese |
Looking at New Ways to Support Patients and Caregivers Living with Dementia in Rural and Remote Saskatchewan
Those who are living with dementia in Saskatchewan’s rural and remote areas face many challenges, beginning with receiving a proper diagnosis, and continuing with disease management, care and support. Dementia is a progressive disease with no cure. It affects a person’s memory and other thinking abilities to an extent that it can interfere with daily life and one’s ability to participate in social activities. The impact on caregivers’ health and wellness is also a concern.
This past year the Public Health Agency of Canada released our country’s first national strategy on dementia. The second phase of the Canadian Consortium on Neurodegeneration in Aging (CCNA) was also announced, and with this national work came a reinvestment in fostering collaborations for research in the areas of prevention, treatment and quality of life.
“The Strategy draws attention to the barriers to dementia care that are faced by people living in rural and remote communities, and our team continues to focus our research efforts on the specific needs of this population,” says Dr. Debra Morgan.
Saskatchewan Health Research Foundation (SHRF) has long been a supporter of the Rural Dementia Action Research (RaDAR) team and has reinvested in Phase II of the CCNA. The RaDAR team made great strides in Phase I of the CCNA to build capacity for rural and remote dementia care with rural memory clinics that bring together a team of health care professionals and supports for dementia diagnosis and management.
Now in Phase II, as they continue to work with these rural teams to operationalize best practices in primary health care for dementia in ways that are feasible and sustainable in rural settings, the team is also focusing on how to deliver a suite of interventions tailored to individual families’ needs – supports for not only the patients, but also the caregivers.
Dr. Megan O’Connell is co-lead of the ‘Issues in Dementia Care for Rural Populations’ team in Phase II. She is an active member of the rural teams as a clinician and has seen firsthand what needs are not being met.
“Interventions are a missing puzzle piece that will help support not only patients and caregivers, but rural primary health care providers in providing support to rural patients,” says O’Connell.
Many of these interventions will be made possible because of the novel approach of using Saskatchewan’s Telehealth network for delivery.
“I think we are uniquely situated to do this work because of our province’s investment in our Telehealth network,” comments O’Connell. “It doesn’t matter where you live, you see the specialist you need to see and who knows how to help you if you have dementia, and I really see that as exciting. Saskatchewan can be at the forefront of this way of delivering support.”
About the Suite of Interventions
Cognitive Rehabilitation
Cognitive rehabilitation is an individualized, person-centred therapy that helps people achieve personal goals that will improve everyday functions and activities. It’s about helping people with issues around learning, memory, perception and problem solving in our day-to-day life. This personalized approach will be delivered via Telehealth and the research will look at the impact on mood, quality of life and satisfaction with achieving these personal goals for those living with dementia and their caregivers.
Cognitive Behavioural Therapy for Insomnia Adapted to Dementia
If you’ve ever suffered from sleep disturbances, you know that this can impact your quality of life in profound ways. Chronic sleep disturbance is common for those living with dementia and their caregivers. Cognitive behavioural therapy (CBT) is an effective way to change patterns of thinking and behaving that are negatively impacting our lives. Although this treatment for insomnia has been adapted for Telehealth and for persons with cognitive impairment, it has not been adapted for use in persons with dementia. Delivered via Telehealth, the research team will look at introducing behaviour changes to positively impact sleep, mood and quality of life. Another benefit of this research will be the improved access to this effective treatment for rural dementia patients and caregivers.
Driving Cessation
“When we talk about a diagnosis of dementia, it can be upsetting. When we talk about the implications of that diagnosis on driving, that gets really upsetting, and I think this has a lot more meaning for those living in a rural setting,” says O’Connell.
For those who have been asked to stop driving, it can affect their psychological health and sometimes even lead to depression. It can cause stress for all involved, including families and health care providers.
“One of the things that the driving team we are working with [Team 16 from the CCNA] have said is that we have all these education pieces for people and families, but they’re still not dealing with the psychological factors,” explains O’Connell. “So, we are going to work with the team and use problem solving therapy and adapt it for Telehealth delivery in a rural setting.”
Social Inclusion
Social support for those living with dementia is very important to both the patient and the caregiver, but it is also something that often falls away fairly quickly after facing a diagnosis. Caregivers can feel increasingly isolated and don’t feel like they can ask for help from those around them.
Formal social support interventions, like a support group, are only one piece of the puzzle. People don’t always understand their social networks and biases – how it works for them, how it doesn’t work and how to make changes to make it work better. This research will look at training people to understand their social support network and how to engage with it differently or have different views about it to better provide the support they need. This intervention will likely involve how to deal with the stigma of dementia.
RuralCARE app
Using an app that was co-designed by urban caregivers of persons with dementia, the team will adapt this app for caregivers living in rural Saskatchewan. Working with an existing Telehealth support group developed by O’Connell, this app will provide more opportunities for contact and support among the group, creating a virtual community of support between regular Telehealth meetings.
“As baby boomers age, I see this as the future,” comments O’Connell. “I also see this as a careful step, as we want to ensure this is a positive impact on mental health.”
Indigenous Caregiver Support
The only way to provide support that is meaningful and welcoming for Indigenous caregivers of those living with dementia is to co-design and create it with them.
“What will an Indigenous support group look like? How will we deliver it? Hopefully our collective goal will be something that is created by and for Indigenous caregivers that can be delivered province-wide and be something caregivers can access that feels safe and meaningful and created in a way that works for them,” says O’Connell.
The team will be working with the CCNA ‘Issues in Dementia Care for Indigenous Populations’ team and with the community and the File Hills Qu’Appelle Tribal Council in southern Saskatchewan.
“Adding these supports is really exciting and means our memory clinics will deliver a more complete package, bringing together a triad of approaches for diagnosis, management and interventions,” comments Morgan about the potential these new interventions will offer rural and remote patients and caregivers.
O’Connell reflects, “Nothing can change the course of dementia. However, you can change people’s quality of life, which can make some profound differences in people’s lives.”
Learn more about the Rural and Remote Memory Clinic – Interventions here. | https://www.shrf.ca/post/beyond-a-diagnosis |
Developing literacy presents certain challenges in remote and/or developing contexts for a range of reasons; however, while these challenges present significant obstacles, they are not insurmountable.
If I need to start somewhere, I will cite the lack of a literate tradition as one factor to consider. In non-remote contexts, children are exposed to literate behaviour in a range of forms from a very early age. A literate sensibility is reinforced in literate environments. And a literate environment is one which is stacked with literate artefacts (e.g. books, magazines, lists on refrigerators) and populated by readers and writers. However, children in remote communities are growing up in environments with few age-appropriate books and fewer role models who exhibit the diverse habits of a literate individual.
Furthermore, in remote contexts it is often the case that learners are brought into literacy in a language that is not their mother tongue. If early literacy experiences are about rendering in print that which is spoken, then English language learners face additional barriers in seeing the relationship between speech, writing and reading. In addition to the language barrier, there is also a cultural barrier. It is understood that readers are better able to engage with and understand what they read when they have the prior knowledge/experience/schema to find the reading meaningful, not to mention access to an experienced reader to aid their reading. However, it is often the case that learners are exposed to texts which (a) use an unnatural (or unfamiliar) flow of language and (b) do not connect with the learner’s experiences (or desire to attach meaning). Whilst these texts may provide “exercises in reading”, there is some doubt as to the meaning being extracted from such texts. A child might “read” the text, but does the child understand what he or she is reading?
We must also be mindful that any nation would like the populace to read certain texts (e.g. government notices) in certain ways (e.g. with a clear idea of their intention), while certain readers may not have access to the education, experiences, technologies and relationships which would enable them to engage with the content and assumptions of such discourse. Such learners may learn to read; however, what they can read, comprehend and act upon is not consistent with the intentions of authorities or of the curriculum. And since reading and writing should be purposeful and meaningful, we must acknowledge that such literacy must be linked to the lived experience of the community, which may be very different from the lived experiences of policy makers, curriculum developers, and even classroom teachers.
There are many other factors to consider. We cannot ignore the cognitive impacts that poverty, malnutrition, and trauma can have on the time it takes for one to process information and to perform tasks. We must acknowledge how socioeconomic status "has been identified as a causal factor in poor social, cognitive, and physical health outcomes, and as influencing specialisation of the brain's left hemisphere for language." (Zhang, et al., 2013, p. 665) We cannot ignore the fact that some children grow up in overcrowded conditions without the spaces to learn, and in environments where they are exposed to harm, disruption and/or deprivation. In fact, ensuring equity in one’s opportunity to learn - in this case, literacy - requires planning bodies to take measures that recognise the impacts of disadvantage and to implement measures that enhance opportunities for learning, whether this involves modifications to the curriculum or enhancements to the experiences offered to learners.
At the end of the day, a learner’s pathway is one that requires the fostering habits of mind and opportunities for meaningful practice that permit one to develop a form of life within a stream of living. It requires engaged time with quality teachers using quality resources in safe, supportive environments that have established deep partnerships amongst family, community and industry to take great strides to foster the skills, habits and knowledge as well as material structures so learners can develop practices and find supported identities therein. It is not enough to establish effective teaching programs; one must also negotiate with industries, institutions and the great hurly burly of life so that learners can find and develop outlets for practices which are sustainable and supportive. | https://www.theliteracybug.com/journal/2014/3/7/developing-literacy-presents-certain-challenges-in-remote-contexts |
The British Residency, which stands witness to the Revolt of 1857, must feature in any list of the most important places to visit in Lucknow. The Residency complex consists of several individual structures and is not only rich in history but also an everlasting example of British architectural skill. If you are a history enthusiast and want to relive the Sepoy Mutiny, then a visit to the Residency building in Lucknow is a must. Although most of the structures are in ruins now, you can still visit the cemetery, the church and the graves of people who died in the siege. Address: Mahatma Gandhi Marg, Deep Manak Nagar, Qaiserbagh. Timing: The Residency remains open on all days except Mondays and holidays, from 10:00 am to 5:00 pm. Entry fee: Rs. 15 for Indian citizens and Rs. 200 for foreigners.
When you look for important places to visit in Lucknow, you will come across a place called ‘Chota Imambara’. This too is a historical spot, built by Muhammad Ali Shah, the Nawab of Awadh, in 1838. The Chota Imambara is an architectural marvel, and its unique features are the chandeliers that adorn the structure - hence its name, the Palace of Lights. Bought by the Nawab from Belgium, these chandeliers are lit up during special festivals like Muharram. It is yet another one of the best places to see in Lucknow. Timing: The Imambara remains open on all days from 6:00 am to 5:00 pm; the best time to visit is 9:00-10:00 am. Address: West of Bara Imambara, Lucknow, 226001. Entry fee: Rs. 25 for Indian visitors and Rs. 300 for foreigners.
You definitely cannot give this unique architectural marvel a miss while selecting the most important places to visit in Lucknow. The Rumi Darwaza is a gateway between the Bara Imambara and the Chota Imambara. It is one of the few remaining examples of Awadhi architecture, featuring a huge gateway that is sixty feet tall. The gate has been named after the famous thirteenth-century Muslim Sufi mystic, Jalal ad-Din Muhammad Rumi. The Rumi Darwaza is one of the most famous tourist attractions in Lucknow and is accessible to the public all day. Address: 17/11, Hussainabad Road, Lajpat Nagar Colony. Best time to visit: when the weather is mild, not when it is extremely hot or cold; visit with time on hand, because it takes nearly an hour to see and appreciate the beauty of the structure. | https://www.thrillophilia.com/cities/lucknow/tags/sightseeing |
Zion Besieg'd and Attack'd, 1787.
In 1776, Pennsylvania adopted the most democratic state constitution in the new United States. A one-house legislature, elected by all male taxpayers, would rule, and the governor would have little real power. When he heard about Pennsylvania's radical constitution, John Adams exclaimed, "Good God! The people of Pennsylvania in seven years will be glad to petition the Crown of Britain for reconciliation in order to be delivered from the tyranny of their new Constitution."
As the American Revolution dragged on, the new Pennsylvania constitution became increasingly unpopular; Benjamin Rush and the emerging opponents found it "big with tyranny." So intense was the political strife that two parties formed - the Constitutionalists, who supported the radical government, and the Republicans, who opposed it - which ran slates of candidates, campaigned vigorously, and voted en bloc in the state legislature.
At first, Joseph Reed, George Bryan, David Rittenhouse, William Findley, Robert Whitehill, and other Constitutionalist leaders jealously guarded their newly-won authority, with mixed results. They assumed control of the public lands owned by the Penn family and passed an act for the gradual abolition of slavery.
Joseph Reed, Governor of Pennsylvania, 1778-1781.
At the same time, however, they also imposed a strict wartime loyalty oath. Those who refused to swear it - including Quakers, Moravians, and other Pennsylvanians whose religious beliefs forbade them from taking oaths - could not vote, serve on juries, sue in court, buy or sell real estate, or bear arms. When the war ended, the Constitutionalists could no longer thrive on their emotional appeals to the patriot cause, and their poor administration of the government resulted in their removal from power.
Examples of the divisions within the state abounded. In 1779, Charles Willson Peale and Timothy Matlack had to defend James Wilson against their own followers, who attacked his Philadelphia house because, like many members of the elite, he retained a sympathy and friendship for loyalists. Neither Congress nor the national government could pay the soldiers of the Pennsylvania Line: the first to answer Congress' call to join General Washington outside Boston in 1775 and form the Continental Army, they were also the first to mutiny on a large scale, in 1781 and again in 1783.
Pennsylvanians' refusal to pay taxes, and the government's inability to collect them, accompanied the federal government's failure to raise money. In 1781, state president Joseph Reed had to pardon the troops and promise them payment, which they received later in the decade in the form of western lands. In 1783, rambunctious troops chased Congress out of Philadelphia. Nor could either the federal or state government protect settlers in western Pennsylvania. In 1782, Indians burned Hanna's Town, the intended capital of Westmoreland County.
The federal pillars, Massachusetts Centinel, January 16, 1788.
After 1786, the Republicans assumed control of Pennsylvania government, and changed their name to Federalists. Led by Benjamin Franklin, John Dickinson, James Wilson, George Clymer and Robert Morris, they quickly repealed the Constitutionalists' loyalty oath, restored suffrage to property-owning citizens who had been disenfranchised, and championed the movement to strengthen the national government by revising the Articles of Confederation, which governed the new nation.
On May 25, 1787, fifty-five delegates from twelve states (the exception being Rhode Island) convened at the Pennsylvania State House, called to amend the Articles, but it was an open secret that they, and the Congress and state legislatures that authorized the meeting, intended to create a new frame of government. The new constitution they drafted was based on the principle of "federalism," or the division of powers between the state and national levels. The new federal government would consist of three branches: a legislature, which made the law; an executive, which enforced the law; and a judiciary, which interpreted the law. To prevent an excessive accumulation of power within any single branch, the framers created a system of checks and balances among the three branches.
When some delegates sought to restrict federal office-holding to the native-born, James Wilson reminded the convention that he and two other Pennsylvania delegates, Robert Morris and Thomas Fitzsimmons, were immigrants themselves.
Sympathetic to the needs of both large and small states, John Dickinson, who had served as president of both Pennsylvania and Delaware, helped arrange the compromise that granted equal representation to the states in the Senate, and representation by population in the House of Representatives. Benjamin Franklin, on several occasions, persuaded the overheated delegates - not only did tempers flare, but they were literally sweltering in a brick building with the windows closed to preserve secrecy - to calm down. His final speech, circulated throughout the nation, made a strong case for compromise in the interest of unity.
Dickinson's success appears in the fact that Delaware and Pennsylvania were the first two states to approve the constitution. Pennsylvania's approval, however, was problematic. The state legislatures could not vote on the Constitution; they had to call ratifying conventions open to all adult males, thus simulating the agreement required by philosopher John Locke for people to leave a state of nature and form a government. But the Pennsylvania state legislature required a two-thirds majority to meet legally, and the Federalists (former Republicans) lacked the two representatives they needed to muster such a majority.
The anti-Federalists (former Constitutionalists) decided to simply not show up. So a Federalist crowd sought out two of these assemblymen, physically carried them into the legislative chamber, and kept them there until the vote passed. Pennsylvania's convention then voted 2-1 to approve the Constitution, but anti-federalists Robert Whitehill and William Findley first voiced what would become the main objection to the document: it lacked security for popular rights. Their points, echoed throughout the state conventions, ultimately were distilled into the Bill of Rights, the first ten amendments to the Constitution.
While the delegates to the Constitutional Convention were working on a new frame of national government, people excluded from political life in Pennsylvania organized to obtain equal rights and call the nation's leading statesmen's attention to their plight. In 1787, Philadelphia's Jewish congregation, Mikveh Israel, petitioned the Constitutional Convention asking that the Constitution not bar people from political participation for their religious beliefs: Jews could not take the Christian oath required by the Pennsylvania Constitution of 1776. They were pleased to learn the Convention never even considered such a restriction. The Free African Society and Pennsylvania Abolition Society, both founded in 1787, unsuccessfully fought for an immediate end to the slave trade. African Americans did, however, receive the legal right to vote in Pennsylvania, unlike women, a group of whom would demand full political rights in 1794, and insist on being called "Citess," following the example of their French sisters during that nation's revolution.
In 1790, the Federalists finally replaced the 1776 state constitution with a more conservative frame of government that included a Senate, a powerful governor - he had the veto power and could appoint many state officers - and an independent judiciary. Pennsylvania's 1790 Constitution also ended the oath that excluded Jews from voting in state elections. The federal Constitution of 1787 stipulated that a new federal district would become the national capital, but as the nation's largest and most centrally located city, Philadelphia from 1790 to 1800 served as the capital of both Pennsylvania and the federal government. In 1790, in return for approving Secretary of the Treasury Alexander Hamilton's financial program, southern delegates secured a federal district, the future Washington, D.C., to take effect in 1800.
For most of that decade, a period known as the Federalist era, Pennsylvania and national politics were inextricably intertwined and exceptionally tumultuous. The state endured two "rebellions" - the Whiskey and Fries - and was a closely contested battleground for the two political parties. It was also the center of the publishing industry that fueled the nationwide disputes. Despite the two uprisings and a presidential election in 1800 that might have resulted in violence had John Adams won, Pennsylvania Governor Thomas McKean let it be known that he would muster the state militia to install Jefferson in the White House - both the nation and the state emerged peaceful and prosperous as the nineteenth century began. | http://explorepahistory.com/story.php?storyId=1-9-16&chapter=1 |
This letter of recommendation advocates adding a flag that identifies the probability that a given object’s photometric redshift is not trustworthy. There are several approaches to this in existing codes and the literature. A new approach that we use as an example implementation here is that of Broussard & Gawiser (2021) where, after running a standard photo-z code, a separate neural network classifier (NNC) is trained and used to estimate the relative confidence that each object has an accurate photometric redshift. It is capable of greatly improving the outlier fraction and standard deviation of the resulting photo-z sample with only a small increase in the normalized median absolute deviation when retaining ~1/3 of the original sample, and it outperforms similar cuts made with reported photo-z uncertainties.
1. Scientific Utility
This NNC selection method is designed with a tomographic large scale structure analysis in mind, but the extensive utility of selecting particularly accurate photo-z’s makes it useful for a large number of science applications. The NNC output confidence values could be calibrated to a statistical probability and used to generate a flag indicating that a particular galaxy has a high (e.g., >95%) confidence of an accurate photo-z fit, defined in Broussard & Gawiser (2021) as having (z_phot - z_spec)/(1 + z_spec)<0.10.
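As a minimal sketch of how such a flag might be computed (the function names, the threshold default, and the assumption that NNC outputs have already been calibrated to probabilities are illustrative choices, not taken from the paper or any LSST pipeline):

```python
import numpy as np

def accurate_fit(z_phot, z_spec, tol=0.10):
    """Accuracy criterion quoted above: the photo-z counts as accurate
    when |z_phot - z_spec| / (1 + z_spec) < tol."""
    return np.abs(z_phot - z_spec) / (1.0 + z_spec) < tol

def confidence_flag(nnc_confidence, threshold=0.95):
    """Boolean catalog flag: True where the calibrated NNC confidence of
    an accurate fit exceeds the chosen probability threshold."""
    return np.asarray(nnc_confidence) > threshold
```

Here `accurate_fit` would be used on the spectroscopic training set to build labels, while `confidence_flag` is what would be written into the catalog; the threshold would be tuned to the science case.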
2. Outputs
This method would require the outputs of a separate initial photo-z code as training data, and can flexibly accommodate any number of descriptive statistics (though the inclusion of at least the point redshift estimate and its Gaussian uncertainty is recommended). In turn, it would produce a confidence value between 0 and 1, with 0 representing a strong confidence in an inaccurate fit and 1 indicating a strong confidence of an accurate fit.
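Broussard & Gawiser (2021) implement the NNC with Keras; the sketch below is not their architecture but a minimal, self-contained NumPy stand-in that illustrates the contract described here: train a small classifier on photo-z outputs (here, a point estimate and its Gaussian uncertainty, generated synthetically) and emit a confidence between 0 and 1 for each object. The feature choices, layer size, training setup and thresholds are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training catalog: each object has a photo-z point estimate and a
# reported Gaussian uncertainty; the label marks an "accurate" fit, i.e.
# |z_phot - z_spec| / (1 + z_spec) < 0.10.
n = 2000
z_spec = rng.uniform(0.1, 1.5, n)
sigma_z = rng.uniform(0.01, 0.3, n)
z_phot = z_spec + rng.normal(0.0, sigma_z)          # noisier fits at larger sigma
labels = (np.abs(z_phot - z_spec) / (1 + z_spec) < 0.10).astype(float)

X = np.column_stack([z_phot, sigma_z])
X = (X - X.mean(axis=0)) / X.std(axis=0)            # standardize features

# One hidden layer; sigmoid output in [0, 1] = confidence of an accurate fit
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2))), h

for _ in range(500):                                # plain gradient descent on
    p, h = forward(X)                               # binary cross-entropy
    g = (p - labels) / n                            # dL/d(logit)
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h**2)               # backprop through tanh
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

conf, _ = forward(X)
keep = conf > 0.5      # retain high-confidence objects, analogous to the cuts above
```

In the deployed version, the confidence values would be calibrated against a spectroscopic validation set before being turned into the Boolean flag this letter recommends.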
3. Performance
The NNC selection method does not have any particular photo-z performance requirements. We find that the NNC is capable of producing an improved sub-sample regardless of photo-z fit quality, though it yields better overall results as the quality of the initial photo-z fits improves. A sample of at least ~50,000 spectroscopic redshifts for detected objects is needed for training.
4. Technical Aspects
Scalability - Will Meet
The NNC produces nearly identical results for training samples of more than 50,000 objects. Spectroscopic data sets in Hyper Suprime-Cam fields already meet this criterion, with deep spectroscopic coverage expected in the Deep Drilling Fields also.
Inputs and Outputs - Will Meet
All inputs are catalog-level and the numerical confidence values can be directly added to the catalog or processed into a Boolean flag.
Storage Constraints - Will Meet
This method requires no additional storage beyond the photometric data and outputs of initial photo-z fits.
External Data Sets - Will Meet
This method has already been demonstrated using spec-z’s from various spectroscopic surveys compiled by the Hyper Suprime-Cam team. These or other spectroscopic surveys could be used to train the NNC when deployed for the LSST.
Estimator Training and Iterative Development - Will Meet
Broussard and Gawiser (2021) demonstrate the capability of the NNC in classifying galaxies using a boundary of Δz/(1 + z) < 0.10. While we do not anticipate the need for major revisions to this training boundary, it may be useful to tune it prior to full deployment for the LSST.
Computational Processing Constraints - Will Meet
Due to the relatively small necessary training sample size and ability of neural network algorithms to train in epochs, a large amount of memory is not required to train or apply the NNC.
Implementation Language - Will Meet
The NNC is implemented using the Keras software package. Keras is a machine learning API that implements TensorFlow for the neural network itself. Tensorflow is written in a combination of Python and C++. | https://community.lsst.org/t/lor-neural-network-classifier-for-pz-outlier-rejection/5908 |
Delay in civil suit
Introduction:
One of the most vexed and worrying problems in the administration of civil justice is delay. Jonathan Swift, in his famous work Gulliver’s Travels, sarcastically describes the delay in courts in the following words: “In pleading, the lawyers studiously avoid entering into the merits of the cause; but are loud, violent and tedious in dwelling upon all circumstances which are not to the purpose…. they never desire to know what claim or title my adversary hath to my cow, but whether the said cow were red or black; her horns long or short; whether the field I graze her in be round or square; whether she were milked at home or abroad; what diseases she is subject to; and the like; after which they consult precedents, adjourn the cause from time to time, and in ten, twenty or thirty years come to an issue. It is likewise to be observed that this society hath a peculiar cant and jargon of their own, that no other mortal can understand, and wherein all their laws are written; which they take special care to multiply; whereby they have wholly confounded the very essence of truth and falsehood; of right and wrong; so that it will take thirty years to decide whether the field, left by my ancestors for six generations, belong to me or to a stranger three hundred miles off.” The judiciary of Bangladesh is caught in a vicious circle of delays and backlogs. Backlog of cases causes frustrating delay in the adjudicative process, which is eating away at our judiciary. While delay in the judicial process causes backlog, the increasing backlog puts tremendous pressure on present cases, and vice versa. This process goes on with no apparent remedy in view. The present rate of disposal of cases and backlog is alarming for justice, the rule of law and the economic development of the country. Our judicial and legal system has a rich tradition of common law culture and can boast a long record of good delivery of justice.
Like any other legal system, common law, with its adversarial or accusatorial features, has both merits and demerits. But in recent years, certain objective and subjective factors have led our judiciary to a situation where its demerits rule over its merits, manifesting in crippling backlogs and delays. Delayed justice fails to pay even the winning party of the litigation, for its costs in terms of time, money, energy and human emotions are too high. Delay in our judiciary has reached a point where it has become a factor of injustice, a violator of human rights. Praying for justice, the parties become part of a long, protracted and torturing process, not knowing when it will end. Where it should take one to two years to dispose of a civil suit, a case is dragged out for 10 to 15 years, or even more. By the time judgment is pronounced, in certain cases it is no longer needed. Moreover, in a society of class differentiation, the lengthy process, which is adversarial and confrontational in nature, puts the economically stronger party at an advantage. Even if the judiciary functions substantively and in accordance with the procedural laws, the existing wide scope for delays can transform it into a system that is procedurally hostile towards marginalized sections of our people, defeating the goals of social justice.
Reasons for delay:
The reasons for delays in our civil justice system are both systemic and subjective. They may be identified as follows:
1. The common law oriented adversarial or accusatorial character of the civil process, as against the inquisitorial one practiced in continental Europe, meaning that litigation is party-controlled, which gives wide maneuvering power to the lawyers and presupposes lesser initiative and relative passivity of the judges.
2. The slow process of service of summons, which can be further slowed down by the intentions of the parties concerned,...
1. Memory is always a memory of something. Both the memory and the objects of memory are mutually dependent. Memory depends upon objects that are not considered memory. Therefore, memory cannot have its own separate nature and is empty. To say that memory is empty means that it lacks such inherent existence, that it is unable to be established in and of itself.
2. For memory to exist in and of itself would be pointless. What would it even mean for memory to remember itself? Memory must be related to and dependent upon a present to be relevant. If memory inherently existed, it would be locked away in an irretrievable past that could not be related to the present.
3. Memory does not involve an autonomous faculty or substance, but is an interrelated function without ever becoming an independent entity that stores experiences. Even neural activity is always observed in a present whose function is not contained in itself but related to vast regions of the brain. Memory is not separate from life events as if it stores a past, but is a continuous process of re-creation as an interrelational movement.
4. If memory was an independent and fixed storehouse of past experience, memories could not be presently remembered, new memories could not arise, and memory would therefore be irrelevant to what was occurring. In other words, memory could not function. Because memory does not exist separately but is related to everything else, the process of remembering can conventionally and functionally be said to occur.
5. Memory cannot contact an intrinsically separate past because there isn’t one. The past is gone by definition. If the past and present were two inherently divided movements, or if they were inherently the same, nothing could conventionally change. Put differently, if the past and present were either fundamentally divided or identical, then the sequence in which phenomena dependently relate to each other, which is known as time, could not be recognized.
6. Past and present are interrelated phenomena. The past lives in the present and is not inherently separate from it. “Past” and “present” are relational, mere conventional designations. What is remembered is not removed from present conditions as a separate entity. Therefore, memory cannot ultimately look back upon or revive an original event. The past is always being revised. The revision of the past is called the present.
7. Being a relational movement, memory does not endure in itself. What appears as repetitive memories results from the regularities of conditions. Memories are like flickers of a flame that change every instant, but that continue in dependence upon conditions as a kind of synthesis of disintegration and formation.
8. To ask what happens to memories that are forgotten is an unanswerable question because memory never existed in and of itself to begin with. For if you clear away all of the conditions that memory depends upon, there is no findable memory. Because memories do not create themselves, or continue as themselves, ultimately, they are neither born nor die.
10. Endless, related conditions allow for the function of memory that are not themselves considered memory. These include sensory perception, cognition, emotion, culture, neurons, nutrients, the cosmos, ad infinitum. These conditions are also empty of a separate, independent nature. Memory is also a condition for mental functioning. For “to know” is to remember.
11. Memory not only depends upon the relational functioning of a brain, but changes it in return. A brain is not a fixed entity either. Its function depends upon the distal property of oxygen for instance. What are considered to be intrinsic physical and mental entities are in the end, abstractions in a sea of interdependencies.
14. Memory is an interreflective movement rather than existing as an entity. And as memory is unable to establish its autonomy, so the object of memory cannot be established either. It is not a copy of what exists out there. What we call memory is dependent upon countless conditions and thus nonlocal and indefinable. Therefore, memory is only a conventional, nominal characterization.
Susan, I forget now where I read (or heard) this, but it was something to do with neuroscience: that every time a particular memory is “accessed”, the mind is actually accessing the memory of the memory, which means each time it is accessed it is being modified. So a memory that is continually recalled is not a stable, unchanging mental film of an event. So there is no continuity in memory; it is almost as if we are playing the “telephone game” with ourselves! We never quite remember anything like the “actual event” (more precisely, there never was an “actual event” to begin with, in any objective sense).
It reminds me of making copies of cassette tapes and making copies of those copies, and how with each copy, the sound became more distorted and hiss became more intrusive, masking the recording itself.
Yes Josh, neuroscience has come to this conclusion, kind of. It had been believed that memory is localized, stored like a picture within a certain area of the brain; however, this was found to be untrue. Now neuroscience says that memory involves the entire brain, including vision, emotion, cognition, etc., and is not stored at all. Everything about so-called memory is dynamic. The quantum physics model of the hologram was first based upon this finding. Nagarjuna reasoned that mind (which would include memory) could not be separated from objects of consciousness or objects of memory, which is why memory must be empty of an intrinsic nature.
So memory is but a word to describe an ultimately indescribable process. | https://emptinessteachings.com/2012/12/12/memory-is-not-a-time-traveler/ |
Library Director Emma Dressler said at the annual dinner to honour volunteers, “Volunteers have always been an integral part of Fernie Heritage Library, since the library’s founding as a reading room in 1899 and the formation of the Fernie Public Library Association in 1920. Fernie remains a Public Library Association, one of only 17 remaining in the Province of B.C. I reviewed our files: hundreds of community members have volunteered their time to the Library.” This year the event was held June 6 at the Red Tree Lodge and was considered a success by all who attended. Emma went on to say that there are three main groups of volunteers: trustees, shelvers and Friends of the Library. “Volunteers do many things: shelve books, repair, read shelves, phone patrons, tidy the library, direct patrons to staff, talk with patrons, bring the community into the library and the library into the community, help with community programs, deliver books to those unable to come to the library, assist with special projects, help with the garden. They are simply an amazing group of dedicated people. The library would not be what it is, would not be able to serve the community, without the volunteers.”
Emma thanked them individually saying something personal about each one. She thanked Mary Giuliano for organizing the event for so many years and providing the centrepieces that are donated to volunteers and for her support of the library. She thanked the board for “supporting the library, setting and achieving goals to make the library the welcoming place it is” and gave special thanks to the staff, “Every day I walk into work and say a thank you that I am fortunate to work with such a dedicated, bright group of women. Without you the library would not be the place it is”.
Chair of the Board Camilla Merritt introduced members of the Board, saying, “Trustees are called this because we have been given the trust of the members of the library to make decisions on their behalf. As chair I have the privilege of speaking on behalf of all of our members, the community of Fernie and Areas A and B, so when I say thank you please understand that it is coming from every man, woman and child who used our library. This is our one opportunity to really take the time to thank you and show you our appreciation, but I strongly suspect that not a day goes by when Emma and staff aren’t grateful for all the hard work that you do; without you the library could not provide the exceptional service that all of our members currently enjoy.” Camilla also thanked staff for working so well under pressure while treating “everyone that walks through the door with dignity, respect and with a warm welcome.” She thanked Emma for her ability to orchestrate the many facets that make up the library in order to get the best results from everyone while remaining calm and cheerful, and thanked the trustees for each bringing professionalism and commitment to meetings. “Friends of the Library was set up to assist with fundraising for projects for the library and to oversee the garden. Over 30 families and individuals have signed up, and the group is currently coordinating the Arts and Letters Gala with the Fernie Arts Station,” added Emma.
The following are volunteers: Cathy Barnett, Teagan Forster, Annette Harrison, Betty Johnson, Joan Johnson, Bob Johnson, Mary Martin, Cindy Pace, Meg Prentice, Stephanie Saumur, Elsie Singleton, Catherine Smith, and Lila Tomlinson.
Trustees are Angie Abdou, Sylvia Ayers, Charlotte Ezaki, Todd Fyfe, Stephen Gort, Judy Little, Adam MacDonald, Camilla Merritt, Anna Piney, Phil Iddon (City Rep)
Staff are Emma Dressler, Heather Gordon, Tina Hayes, Melissa LaFortune, Marilyn Razzo, Jeanette Sedgwick, Sandra Summerfield, and Patti Ohm.
The Fernie Library was recently toured by MLA Bill Bennett; Jennifer Osmar, his constituency assistant; and Elise Palmer, a legislative intern from Victoria. All were very impressed with the building and the many services offered. Founding members Louise Uphill and Anne Stelliga, who volunteered for 44 and 41 years respectively, could never have imagined what is offered now when they began decades ago. They have passed on, but their work is still carried on by dedicated volunteers and staff. This work is truly appreciated by everyone in the community because the library is more than just a place for books; it has become a welcoming place that allows people of all ages and walks of life to congregate, socialize and learn.
Throughout history, nature has continuously inspired humans to create better and new solutions to our problems. Among other things, it has inspired hunting strategies, agriculture, modern technology, design solutions, business models, and even structures in social organization and communication. In the knowledge-driven societies of today and considering the big global challenges we face, innovation based on biology is becoming even more important in our transition towards a sustainable bio-based society. Innovation requires that imagination is combined with knowledge and multiple competencies – and thus demands that people from different disciplines engage in open collaboration.
The course consists of two parts.
In Part 1 we look to nature for innovative solutions, engaging knowledge on biological levels of increasing complexity:
a) cells and organisms (shape and function)
b) populations and communities (interaction and functional diversity)
c) ecosystems (network properties and mechanisms).
These natural elements are all part of systems, which are highly optimized via natural selection.
We employ ecological and evolutionary aspects as a framework to understand and generalize from specific phenomena into general principles. New state-of-the-art innovation methods are introduced at each level to facilitate this process, giving students the opportunity to practice innovation on selected biological elements. During the course, inspirational talks from PLEN-based and external researchers will present case studies from real life, demonstrating the transmission from knowledge and ideas into products and patents.
In Part 2 we take a problem-based approach, using innovation inspired by nature to solve an existing problem or challenge in industry. This part of the course brings together teachers and students from KU-SCIENCE and DTU – representing different disciplines, interests, and educational backgrounds such as natural resources, biology, biotechnology, engineering and design.
Based on the input of companies, non-profit, or governmental organizations, students collaborate in multidisciplinary groups to design, build and develop innovative solutions to specific problems.
The course develops students’ ability to foster innovative solutions inspired by nature within a multidisciplinary context. They learn to manage innovation processes based on inspiration gained from the plethora of highly evolved biological functions, systems and processes found in nature. Students gain a basic set of theories and tools for solution- as well as problem-based innovation and design. They learn to create, select and transform ideas into a new prototype, concept, or process within a multidisciplinary context. These learning outcomes are framed in a clear perspective of commercialization and implementation strategies towards private, non-profit, or governmental organizations.
After completing the course, the student is expected to be able to:
Knowledge
- Understand biological and ecological elements as a source for innovation
- Provide an overview of concept and theory of innovation management, innovation process models, exploitation and exploration, and creation
- Describe different innovation models and methods
Skills
- Read and interpret specific articles and textbook chapters
- Generalize and categorize biological solutions according to a specific assignment/topic
- Find and explain the evolved solution of specific issues
- Adapt novel tools for innovative creation
- Distribute tasks and responsibilities in a multidisciplinary environment
- Communicate ideas clearly, concisely and confidently in writing and orally
Competencies
- Discuss, evaluate and decide among creative solutions to a given problem
- Make use of own and other persons competences in multidisciplinary work
- Manage a collaboration process in a multidisciplinary setting, uniting the competences and backgrounds present in the whole group
- Transfer biological knowledge into innovative solutions within a commercial context
Part 1 (course weeks 1-5): Lectures and e-learning modules focusing on biological elements within different levels of organization: cells, organisms, populations, communities and ecosystems; group work and written assignments, applying biomimicry and innovation methods on those biological elements.
Part 2 (course weeks 6-8): Multi-disciplinary group work (uniting students from KU and DTU), taking a problem-based approach to real-world problems; Group work, having members from both KU and DTU, are guided by teachers with different biological-technical-entrepreneurial expertise to facilitate the innovation process and progress; During plenum sessions, students train their critical thinking and communication skills by giving and receiving feedback on their venture idea in the form of intermediary pitch talks and final product presentations.
Course material consists of handouts, selected scientific papers and book chapters. Students are expected to identify additional group-specific literature.
Further information will be available on Absalon.
The course is multidisciplinary, and students will work in a number of functions and draw on a diverse set of experience and knowledge. Particular prerequisites are therefore not defined.
Academic qualifications equivalent to a BSc degree are recommended.
Feedback during Part 1
- In-plenum feedback from teachers and peers to student's oral presentations
- Peer discussion and teacher supervision of group projects
- Written assessment of the individual written assignment
Feedback during Part 2
- Peer discussion and supervision of group projects
- In-plenum feedback from teachers and peers to students' oral group presentations
- ECTS
- 7,5 ECTS
- Type of assessment
- Written assignment and oral examination, 20 min under invigilation. The written assignment is based on Part 1; the oral examination is a presentation of the group work (Part 2).
No preparation time before the oral examination.
The two parts count equally in the final assessment.
- Aid
- All aids allowed
- Marking scale
- 7-point grading scale
- Censorship form
- No external censorship
One internal examiner
Criteria for exam assessment
The assessment will be based on the learning outcomes.
Single subject courses (day)
Workload by category (hours):
- Lectures: 30
- E-Learning: 20
- Practical exercises: 25
- Project work: 50
- Guidance: 6
- Colloquia: 10
- Preparation: 60
- Exam: 5
- Total: 206
Course information
- Language: English
- Course number: LFKK10412U
- ECTS: 7,5 ECTS
- Programme level: Full Degree Master
- Duration: 1 block
- Schedule group: A
- Capacity: No limit
- Study board: Study Board of Natural Resources, Environment and Animal Science
Contracting department
- Department of Plant and Environmental Sciences
Contracting faculty
- Faculty of Science
Course Coordinator
- Lars Pødenphant Kiær
Teacher
Organisation of teaching:
The course will draw on the expertise of teachers affiliated with both contributing universities for lectures, exercises and group facilitation.
Mental illness and the creative mind: The Brian Wilson Story
The Beach Boys leader’s story shows how mental illness and the creative mind meet and how proper treatment and support can help people with mental illness build on their passions and enhance quality of life.
Dr. Brian Levine
Senior Scientist, Rotman Research Institute
Dr. Brian Levine is a senior scientist at the Rotman Research Institute at Baycrest and a professor in the departments of Psychology and Medicine (Neurology) at the University of Toronto. Dr. Levine has his Ph.D. in clinical psychology and completed his postdoctoral fellowships in clinical and research neuropsychology.
Dr. Levine’s key areas of interest are inter-related:
Assessment of executive function. The role of executive function in humans is to coordinate the many brain activities needed to set goals, make plans to attain those goals, organize the steps to carry out those plans and ensure that the desired outcomes are achieved.
Episodic memory, which refers to the memory of events, times, places and associated emotions and other knowledge in relation to an experience.
Recovery and re-organization of brain function following traumatic brain injury.
Monica Matys
Health and Wellness Reporter, CTV News Toronto
Monica has been the health reporter on CTV News Toronto since 2001. Her feature health report, Lifetime, can be seen on CTV Toronto News at Noon and at 6:00 p.m.
Monica’s interest in the medical field began while working for a medical publishing company as the editor of a monthly magazine. Before being hired on at CTV Toronto, she did some freelance health reporting for the National Post as well as Chatelaine Magazine.
One of Monica’s favourite days on the job was the time she got to spend a day in Credit Valley Hospital’s Emergency unit. “It really opened my eyes to the strains on our system,” Monica explains. Monica has also had the privilege of interviewing some of our country’s top medical experts, and has visited many of the top facilities.
Monica grew up in Scarborough, and has an honours degree in Journalism from Carleton University in Ottawa. Monica lives in the Toronto area with her husband and two children. | http://www.baycrest.org/educate/smartaging/speaker-series/mental-illness-and-the-creative-mind-the-brian-wilson-story/ |
One thing is clear: the future is freelance. By 2027, freelancers are expected to become the majority workforce in the U.S. This makes the present the perfect time to learn how to scale your freelance business, and the only way to do this is by hiring the right people.
Depending on your business model, you constantly need to attract new clients and promote yourself. This means taking extra time for emails, calls, negotiations, administrative tasks, etc.
As you grow, the time you put in for freelancing becomes scarce, so bringing new people on board is your best option. The unique challenge for freelance businesses is doing this while keeping the same quality of service and costs to a minimum.
The following article will show you how to start scaling your freelance business with the help of a remote workforce, why your remote collaborators play a crucial part in the process, and why the training process is just as important as the recruitment one.
Can a freelance business be successfully scaled, and under which conditions?
Wondering how to scale a freelance business? Any business can be scaled, provided you seize the right opportunities and make informed decisions. Hiring the best people for the job is only part of the process. This brings us to our most important questions:
- How will you hire?
- Can you provide a contract for people who reside outside your country?
- How about people from your country of residence?
Don’t panic. You don’t need to answer them right now, but there are a few things to clarify before you get started:
1. Determine the best form of collaboration
You can hire full-time, part-time, or on a project basis. Keep in mind that each country has different forms of legal organizations for freelancers, so it is best to do some research and look into the costs and obligations involved when hiring.
2. Set reasonable expectations and start small
Even if your business exploded overnight from that viral TikTok video, it doesn’t mean you need to hire 10+ people immediately. Start with one person, delegate, understand the responsibilities involved, and continue hiring when it makes sense. This brings us to our next point:
3. Take on a reasonable number of employees
A maximum of three extra people will take most of the workload off, so you can focus on better resource management and securing new projects. Consider how this will impact your business from a financial standpoint. Can you afford the extra costs? Is paying an extra hand worth the time you save in the process, and what would you use the extra time for?
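One way to make the last two questions concrete is to compare what the freed-up hours are worth to you against what the extra person costs each month. Every figure below is a placeholder, not a recommendation:

```python
# Does hiring pay off? Compare the value of the hours you free up
# against what the extra person costs you per month.
employee_cost = 1600.0     # monthly cost of a part-time collaborator
hours_freed = 60           # hours/month you no longer spend on delegated tasks
your_rate = 40.0           # what an hour of your time earns when billed

value_of_freed_time = hours_freed * your_rate          # 2400.0
hiring_pays_off = value_of_freed_time > employee_cost  # True in this example
```

The comparison only holds if you actually fill the freed hours with billable or growth work, which is why the question "what would you use the extra time for?" matters as much as the arithmetic.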
4. Prepare for the nature of your work to change
You are not only working for yourself now, so you must also think about your team and their financial security. As you grow, you will probably spend more time marketing and finding clients while your team delivers results. You will also have more legal and fiscal obligations, which means spending less time doing what you used to.
5. Mind the scaling process
Scaling has a long line of benefits, and growth is one of them. Aside from benefits, responsibilities also play a significant part. You need to manage invoices, keep up with current legislation, pay salaries, etc. Everything that seems straightforward will get more complicated because as you scale your freelance business, you also scale the infrastructure that sustains it.
Pros of scaling
- More money
- More freedom
- More flexibility
- Less stress when you want to take time off
Cons of scaling
- More responsibility
- Your role will change
- Challenging
- Increasing competition
Strategies to scale your freelance business
The first and most important condition for scaling up your freelance business is finding people with the right skills, preferably skills complementary to yours. There is no standard "How to scale a freelance business" guidebook, but there are strategies you can use to grow. Consider hiring a remote workforce if you find yourself in one of the following situations:
- You have high-demand skills but not enough time to grow or take on more projects;
- You own a small business selling handmade, natural, or limited edition products that are earning a steady income. Crafters and small businesses have been killing it on TikTok and other social platforms lately, and you could benefit from a pair of extra hands while you focus on your craft.
- You are a successful freelance artist, animator, or designer that wants to do more than sell prints or services (e.g. create courses, sell books, etc.)
- Your e-commerce businesses on eBay, Amazon, or other marketplaces are taking off, or you have a successful dropshipping business.
- You’re a content creator or streamer that commands a large audience and wants to grow on other platforms.
1. Zero in on your niche
If you put in the work and become an expert in a profitable niche, the sky’s the limit for where your freelancing business can go. People looking for specific products often turn to smaller stores or specialists where they can ask for expert advice, which works to your advantage.
Similarly, consumers are willing to pay insane amounts for limited edition or unique products. To offer high-value services, you must tailor them to match client needs. Focus on the services/products clients are most interested in and make sure the price reflects that.
2. Stop working so hard
A common mistake freelancers make is that they work to the point of burnout. Putting in longer hours might result in a larger payout at the end of the month, but what is the point if you have no energy to spend it?
Try to take a step back and objectively assess your business model.
- Do you really need to do everything yourself?
- Is there any way to automate or delegate your work?
- What projects brought you the best financial results with the least amount of headaches? Focus on those!
- Are you calculating your rates and prices correctly? There are several things to unpack here, so we recommend checking out this guide to determine your correct rate.
- If you are selling services: determine your real hourly rate with a timesheet calculator. Factor in your project costs (taxes, hardware, rent, bills, etc.) and add at least a 20-30% profit on top of that. A small price increase instills a sense of exclusivity, which, depending on your niche, will draw your target audience in.
- If you are selling products: keep in mind that pricing is not something you only do once. There are multiple strategies for pricing your product, but the simplest ones are adding up variable costs per product or pricing at a profit margin.
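The rate and price arithmetic described above can be sketched in a few lines. Here is a minimal Python illustration; all figures (costs, hours, margins) are hypothetical, and the 20-30% markup is the range suggested above:

```python
# Hedged sketch of the rate/price arithmetic described above.
# All numbers are illustrative assumptions, not recommendations.

def hourly_rate(monthly_costs: float, billable_hours: float,
                profit_margin: float = 0.25) -> float:
    """Cost-covering hourly rate plus a profit markup (20-30% suggested above)."""
    base = monthly_costs / billable_hours          # break-even rate
    return base * (1 + profit_margin)              # add profit on top

def product_price(variable_cost: float, target_margin: float = 0.40) -> float:
    """Price a product at a target profit margin (margin = profit / price)."""
    return variable_cost / (1 - target_margin)

# Example: $3,000/month in taxes, hardware, rent and bills, 100 billable hours
rate = hourly_rate(3000, 100, profit_margin=0.25)   # 30 break-even * 1.25
price = product_price(12.0, target_margin=0.40)     # 12 cost at a 40% margin
print(rate, price)
```

Note the product case: a 40% margin means dividing cost by 0.6, not multiplying by 1.4; confusing margin with markup is a common pricing mistake.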
3. Set clear sales goals
Once you’ve decided on a niche and optimized your services, you need to set goals for yourself and your future employees. Both financial and sales goals are important when scaling your freelance business, so you need to answer the following questions before you take anyone else on board:
- How many clients can you accommodate at once?
- Is fast-paced growth sustainable for your freelance business?
- How much additional support will you need to handle the extra workload?
- Can you support a full-time employee even during slower months?
4. Delegate simple tasks and expand your skill set
People often find it hard to let go of tasks they could easily delegate. Holding on to them becomes overwhelming for the owner and frustrating for their team members. People who are interested in how to scale a freelance business often lose sight of the smaller steps that can get them there, especially the ones that concern their own workload.
Find the unique combination of skills that sets you apart from the competition and become more profitable by hiring and mentoring new people. As a copywriter, for example, it is fairly easy to hire more staff and stick to checking their work rather than writing the copy yourself.
This will free up more time to develop additional skills and services your target audience is searching for. Be conscious of your decision to improve and follow courses to expand your skill set.
How to recruit your remote workforce
As a freelance business owner, you have the added advantage of being able to coordinate your team remotely. There’s no need to bring people into the office; after the training period, the same job can be performed just as thoroughly from their home office.
1. Create an unbeatable value proposition
An EVP (Employee Value Proposition) consists of the innovative benefits you offer when you hire someone. Include perks your collaborators are actually looking for rather than just the norm (healthcare packages, private pensions, etc.).
The modern worker is more interested in flexible working hours, the chance to travel while still doing their job, seeing their kids grow up, and so on. These are precisely the type of perks that you, as a freelance business owner, can provide.
Include recognition for the added value they bring to your business. Happy and productive employees focus more on the whole package instead of settling for an attractive salary and nothing more.
If you want to draw and keep employees in the long run, here are a few out-of-the-box benefits that might make you more attractive than a big company:
- A healthy remote work environment (online activities, compensation for remote work costs, result recognition);
- Flexible working hours;
- Mental health days;
- Incentives for meeting personal health goals;
- PTO on birthdays;
- Online training programs;
- Free access to expensive software and tools.
2. Create the perfect playground for fast growth
Freelance businesses offer more freedom to potential employees. The fact that you, as a freelancer, get to set your own rules is one of the first things that draw new talent in. Inexperienced candidates are offered a valid chance to showcase their talent and expand their potential while your business benefits from an infusion of innovation and creativity.
Freelance businesses are also considered a hub for career acceleration, as people who choose this type of job are more motivated to advance because they are given more space to innovate. You will learn how to scale your freelance business once you choose the right talent for your business profile.
3. Find experts on freelancing platforms
Exploring freelance platforms is the easiest way to find new talent while keeping costs to a minimum. These platforms feature top talent from around the globe, but they also showcase niche experts who are hard to come by. The advantage of working with this type of platform is reduced risk: you can hire freelancers by the hour on a fixed contract where you set the duration.
You can find candidates for special projects, a rare opportunity that major recruiting platforms don’t offer. A major perk is the screening most platforms do before allowing candidates to be featured; this saves a lot of time from your recruitment process.
4. Post a killer JD
When a candidate decides if a job is the right fit for them, the first thing they look at is your job description. This is how you find people who are not only qualified for the job but also share similar interests and values with your existing staff.
This is what you should include in your detailed job description:
- Clear, simple job title;
- What role the future employee will have in your organization;
- Main duties and responsibilities for the job;
- Required skillset (education, qualifications, courses);
- Remote working conditions outline.
To make things more appealing, it’s best that you go for a quirky, memorable description and set yourself apart from major corporations. Make your JD stand out, and it will appeal to candidates who appreciate out-of-the-box thinking.
5. Reach out to the right candidates directly
Take advantage of how interconnected freelancers are by reaching out directly via social media, freelancing platforms, or even their website. Many specialists create online portfolios to showcase their skills.
This innovative take on direct job applications will make your profile stand out and attract the right kind of talent. You can also target candidates who are actively looking for similar job profiles on platforms such as LinkedIn.
The follow-up process is just as important as the initial contact; you can easily send automated texts to candidates, which you can personalize according to their profile.
6. Hire people with no experience
Young people who are just starting out their careers are easier to train and more eager to put into practice what they learned in college or during the courses they took. The fact that they’re inexperienced doesn’t mean they can’t do their job well or exceed expectations.
Including them in your team would boost creativity and innovation, giving you a chance to connect with a younger audience. It would also offer insight into what is trending and how you could tweak your product to reach a new group of people.
How to effectively train remote employees for your freelance business
Once the employees are on board, you need to ensure they align with your vision and stay motivated. The dynamics of your freelance business are completely different from those of a corporation, so keeping everyone engaged and productive becomes a bit more complicated.
This is how you keep your remote staff focused and provide the right materials they need to get the job done on time:
Micro-dose information
While training your workforce, keep in mind that providing all project requirements at once and expecting them to take it all in won’t work. Start small and offer more digestible information.
This strategy is known as microlearning. Small-sized bites of information are more helpful for employees, especially when they have to manage multiple tasks simultaneously, and the training period is packed with those.
Become a mentor
A mentor is more than a trainer. They are the people employees turn to for guidance when it comes to career advancement and skill perfection. In order to become one, aside from knowledge in a specific field, you will need to perfect your counseling skills, offer constructive criticism, and practice empathy on a daily basis.
Create workflows and checklists
Streamline time-consuming tasks by keeping track of employee progress with dedicated workflows that are easy to implement. Checklists ensure you stay on top of everyone’s progress and that employees don’t skip important steps in their training schedule.
Provide time & frameworks
Give your colleagues a set of clear rules to go by and the necessary time to figure out how they fit into your business. This will create more cohesion in your newly-formed team and inspire people to grow at their own pace.
How to successfully manage & support your remote workforce
Track progress daily
First, you need to set clear expectations. Discuss online hours, set goals, and establish how meetings will be held and whose presence is compulsory. You can calibrate things along the way as you understand your team’s needs. Use tracking programs or productivity apps to monitor the activities of team members.
It is equally important to avoid micromanaging; freelance business owners tend to take on too much responsibility in the hope that this will help their team get things done more effectively. The only thing this accomplishes is confusing your colleagues and making them feel like they need an extra push to finish what they started.
Encourage personal and professional development
Remote mentoring programs have become a rising trend in the post-pandemic landscape. This is how managers or senior employees provide valuable advice and guidance to juniors, engaging them in conversations that have the potential to boost their careers and upgrade their skill sets.
Virtual mentoring relationships don’t exclude periodical face-to-face interactions: to bring team members closer together, you can organize interactive workshops where developmental growth is a top priority.
Focus on results, not screen time
According to a post-pandemic survey, most remote employees spend 13 hours a day staring at their screens, yet they deliver better results than their fellow office workers in less time. They are more productive working from home: they have fewer distractions, take fewer breaks, and can begin work earlier since they don't spend time commuting.
Create a clear remote work culture
Remote work culture is all about feeling connected to your fellow remote co-workers and being a part of a community, even in the absence of face-to-face daily interactions. You can easily create this type of community for your employees and strengthen both team bonds and communication using the following methods:
- Peer-to-peer recognition hubs and platforms (Kazoo, Awardco, Bonusly, Assembly, etc.)
- Virtual team building platforms (like Quizbreaker, etc.)
- Virtual game shows, happy hour, and conference providers like Go Game
- Actual team-building activities (escape room, improv class, karaoke, trivia, volunteering, intramural activities, etc.)
Optimize communication across every channel
Email is not enough when dealing with a team of remote workers. Yes, important information should be sent and archived via email, preferably in a way that grants access to other teammates. When learning how to scale a freelance business, it is important to stay in touch with your staff via email, SMS, video calls, messaging platforms, phone calls, and even social media. Delivering communication on time will:
- Increase productivity;
- Boost feedback and group discussions;
- Increase team interaction;
- Foster employee relationships;
- Ensure data consistency.
Ready to scale your freelance business?
Scaling your freelance business can be easily done when you hire the right talent. Remote workers are often more flexible and easier to train than office-based ones because they find innovative ways to acquire knowledge and don’t rely on micromanaging. Once you’ve taken on the right talent, you can focus on networking and starting new projects that will fuel growth.
The post Scaling your Freelance Business: How to Hire and Train a Remote Workforce appeared first on Millo.co.
The present paper introduces a relaxation procedure based upon muscle stretching exercises. Traditional progressive relaxation training starts from muscle tensing exercises to teach voluntary control of muscle tension, but the literature shows widely varying results. An alternative method of relaxation training starts from muscle stretching exercises. Muscle stretching provides sensation contrasts for learning relaxation in addition to fostering relaxation through the stretching of muscles. The present report documents the rationale for the procedure and presents data from a clinical case study, including six months' follow-up, in support of its efficacy.
Document Type
Article
Publication Date
6-1987
Digital Object Identifier (DOI)
http://dx.doi.org/10.1016/0005-7916(87)90025-5
Repository Citation
Carlson, Charles R.; Ventrella, Mark A.; and Sturgis, Ellie T., "Relaxation Training Through Muscle Stretching Procedures: A Pilot Case" (1987). CRVAW Faculty Journal Articles. 195.
Many estuaries and coastal regions of the United States contain a legacy of contamination from past industrial, agricultural and other activities. Chemicals including metals, organic compounds, and pesticides were discharged into some waterways for decades, resulting in a buildup of contamination in the sediments of numerous harbors and bays lining the country's coast. The ecological effects of this contamination have been devastating in some cases, resulting in a loss of aquatic life and biodiversity in areas of high contamination. As efforts continue to clean up and restore these polluted aquatic ecosystems, there is a need for tools and methods to monitor their ecological health.
Among the most useful monitors of ecological health are the biological changes produced by environmental contaminants, whether these changes occur at a biochemical, cellular, community or population level. For example, some of the proteins and enzymes that are induced in organisms upon exposure to contaminants are sensitive indicators of chemical stress. Structural alterations in the DNA of organisms are serving as biomonitors for genotoxic contaminants. While these and many other approaches are in use for ecological health assessment, there is a continuing need for alternative approaches. Moreover, in assessing the effects of aquatic pollution at the ecosystem level, it is important to have a combination of biomarkers available.
Researchers at Harvard University are exploring the use of microbial diversity as a biomarker of ecological health. The biomarker method they are developing is based on characterizing the genetic diversity between microbial communities living in sediments with varying levels of pollution. This novel approach uses molecular techniques to evaluate the relation of groups of bacteria to each other and their environment.
Bacteria tend to live in complex communities, often consisting of many different species with an overall community structure suited to survival in the local environment. Part of what makes microbial communities promising as biomarkers is their rapid adaptation to environmental changes, including chemical stress. This adaptability results in selection of organisms capable of withstanding the pressures of the environment. Changes in microbial community structure can be monitored by looking at the genetic profile or diversity of the microbes in a particular environment.
The Harvard researchers recently used a molecular technique known as 16S rRNA restriction fragment length polymorphism analysis (RFLP) to measure the genetic diversity of microbial communities living in and around New Bedford Harbor, a highly contaminated coastal marine environment in southeastern Massachusetts. Designated a Superfund site in the 1980s, the New Bedford Harbor area offers a unique opportunity to study the changing structure of microbial communities in response to pollution. There are clear gradients of PCBs and metals in the sediments, from high concentrations in the Acushnet River Estuary that feeds into the harbor, to background levels in Buzzards Bay.
Part of the cell's protein-making machinery, 16S rRNA is a molecule that allows ready determination of the relatedness of microbes. In particular, rRNA genes are well suited for diagnostic purposes because they have conserved, variable, and highly variable regions that make identification of all members of a microbial community possible, including non-culturable organisms. The sequence and number of bases in the variable regions between conserved domains of rRNA genes result in different sized fragments during RFLP analysis that provide a picture of the genetic diversity in a microbial community.
The development of this biomarker method involved extracting DNA from sediment samples collected along the gradient of decreasing contamination in New Bedford Harbor. After purifying the microbial DNA, the 16S rRNA genes were amplified by a polymerase chain reaction (PCR) method and subsequently analyzed by a 16S rRNA RFLP technique. The data set generated from the RFLP analysis was fed into a computer program that determined the bacterial diversity of each site where sediment had been collected.
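The brief does not name the program or the metric used to quantify diversity; a common choice for community profiles of this kind is the Shannon index, H' = -Σ p_i ln p_i, computed over the relative abundances of fragment classes. A small Python sketch with hypothetical RFLP fragment-class counts:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over fragment classes.

    `counts` holds the abundance of each RFLP fragment class at one site;
    zero-count classes contribute nothing to the sum.
    """
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical fragment-class abundances for two sampling sites
harbor = [12, 9, 11, 8, 10, 9, 7, 10]   # many classes, evenly spread
bay    = [40, 3, 2, 1]                   # community dominated by one class

print(shannon_index(harbor) > shannon_index(bay))  # True: harbor more diverse
```

A perfectly even community of k classes reaches the maximum H' = ln k, so the index rewards both richness and evenness, which matches the qualitative comparison reported below.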
Results showed that bacterial genetic diversity was consistently greater in the highly contaminated New Bedford Harbor than in Buzzards Bay where contamination was only slightly above background levels. In addition, bacterial diversity was greater in the winter than in the summer in both the harbor and the bay.
The Harvard researchers are now beginning to examine microbial isolates from New Bedford Harbor for expression of genes responsible for resistance to specific contaminants.
Few studies have addressed the issue of changing patterns of microbial communities in polluted aquatic environments. This study showed that changes in specific contaminant concentrations were correlated with changes in bacterial diversity. In addition to providing a better understanding of microbial community responses to environmental stress, this research is significant for demonstrating the potential of using genetic markers in bacteria as a tool for monitoring the ecological health of polluted environments.
For More Information Contact:
Timothy Ford
Harvard School of Public Health
665 Huntington Avenue
Environmental Science and Engineering, Room 1-G17
Boston, Massachusetts 02115
Phone: 617-432-3434
Email: [email protected]
To learn more about this research, please refer to the following sources:
- Sorci J, Paulauskis JD, Ford T. 1999. 16S rRNA Restriction fragment length polymorphism analysis of bacterial diversity as a biomarker of ecological health in polluted sediments from New Bedford Harbor, Massachusetts, USA. Mar Pollut Bull 38(8):663-675. doi:10.1016/S0025-326X(98)90199-0
- Ford T, Sorci J, Ika R, Shine JP. 1998. Interactions between metals and microbial communities in New Bedford Harbor, Massachusetts. Environ Health Perspect 106:1033-1039. PMID:9703489
DEFINITION:
Under Detention Sergeant’s supervision, plans, organizes, supervises and participates in the preparation and service of food in a correctional facility; and performs other work as assigned.
DISTINGUISHING CHARACTERISTICS:
The Corrections Cook is responsible for all phases of food preparation for both the adult and juvenile detainees located in adjacent facilities. The incumbent is accountable for maintaining strict security and oversight of inmate workers assigned to work in the food service section of the facility.
EXAMPLES OF DUTIES:
The duties listed below are examples of the work typically performed by employees in this class. An employee may not be assigned all duties listed and may be assigned duties which are not listed below. Marginal duties (shown in italics) are those which are least likely to be essential functions for any single position in this class.
- Prepares menus, estimates quantity of food required and requisitions food and related supplies; maintains and checks food and supply inventory; contacts vendors on a regular basis to place orders for contracted food and supplies; receives and verifies supplies with purchase order.
- Estimates food consumption and requirements to determine type and quantity of food to be prepared; determines the food preparation time to assure meals are ready on schedule.
- Oversees and participates in large scale cooking of regular and special diet foods according to prescribed menus and recipes; operates standard kitchen equipment including ovens, steamers, slicers and mixers; sets up and serves food, prepares take-out food and stores leftover food; inspects food handling and storage procedures to ensure compliance with sanitation standards.
- Trains and oversees inmate workers and juvenile detainees assigned to work in food service; inspects assigned work crew to assure personal hygiene and proper attire; counsels assigned work crew in the procedures of the work area; assigns work and instructs on the correct way to prepare the food or perform the task; assures the safe and proper usage of kitchen equipment; maintains security, order and discipline in the work area.
- Assists in the clean up of the kitchen and dining area; inventories knives, cleavers, and other utensils used in the kitchen; checks kitchen and dining area for contraband and weapons.
- Schedules, assigns, supervises and evaluates the work of the assigned work crew.
- Supervises the maintenance of proper sanitary and safety conditions in both the kitchen and dining areas; schedules equipment maintenance; keeps records and prepares reports, as required.
QUALIFICATIONS FOR EMPLOYMENT:
Knowledge and Ability:
Knowledge of Federal, State and local regulations pertaining to institutional food service; the use and care of materials and equipment used in large group food preparation; sanitary techniques and regulations; modern food preparation practices, procedures and equipment; large scale menu planning; basic mathematics; nutritional standards applicable to institutional food preparation; kitchen safety and hygiene; food service supervision; functional operation of a correctional facility including security and custody procedures and practices.
Ability to read, comprehend and follow rules, regulations and operating procedures; interpret recipes; prepare meals with minimum waste; coordinate the work of others engaged in food preparation; maintain surveillance over the activities of inmates assigned to food service work; operate food preparation equipment; keep records; plan and prepare large scale menus; communicate effectively both orally and in writing; write reports.
Special Requirements:
Possession of a Nevada driver’s license. Submit to fingerprinting and possess ability to pass a criminal history background investigation and drug screen prior to appointment.
Experience and Training:
Any combination of training, education and experience that would provide the required knowledge and abilities. A typical way to gain the required knowledge and ability is:
Two years of experience in large-scale institutional food preparation and volume cooking including at least six months in a lead or supervisory capacity is preferred.
PHYSICAL DEMANDS:
Strength and stamina to stand for long periods of time. Manual dexterity to coordinate movement of both hands to operate a variety of equipment; operate controls on automatic and manual mechanical devices and handle small objects. Ability to perform repetitive motion including bending, stooping, stretching, kneeling and crouching or crawling. Visual acuity sufficient to see details and read printed materials in a variety of lighting conditions including bright light and low light. Strength to lift and carry tools and equipment weighing up to 50 pounds.
WORKING CONDITIONS:
Generally clean working conditions with limited exposure to conditions such as dust, fumes, foul odors or excessive or extreme noise. Situations may include stress of working with inmate and detainees, emotional individuals and resistive and combative personalities; possible exposure to individuals with communicable diseases. Working in a locked down secure environment.
FLSA STATUS:
Non-Exempt.
Are you a big Caprese salad fan? These appetizers may become your next snacking addiction. Served on a skewer, this salad is easy to consume and there are some nice sweet and tangy additions to this Caprese appetizer with the prosciutto and peach. Go for a balsamic glaze instead of balsamic vinegar as this definitely enriches the salad.
Preparation
- Wash the cherry tomatoes and peaches, cut the peaches into slices.
- Using skewer sticks, skewer the ingredients alternating them as desired.
- Serve with a balsamic glaze and basil leaves, adding salt and pepper to taste.
The 39-year-old Indian football manager and former player, now coach of Aizwal FC, thinks that his side will face tough competition against Chennai City FC in their next match.
The Mizoram based side will face the Chennai City FC in the fifth round of I LEAGUE and so far in the competition they have collected 10 points in their four matches.
Right now they are sitting in the third spot on the table and their opponents are sitting in the ninth place on the table.
But Khalid Jamil is not taking them lightly, given their courageous performance against Mohun Bagan.
He said: "Talking about this game, we are playing a solid team under Robin (Charles Raja) sir. I saw them play last game against Mohun Bagan. They played well. We’re not taking this game lightly at all. We will play our normal game tomorrow,"
"I’ve seen all Chennai City matches that were telecast. They played good football. They are strong defensively with some experienced players. They have good foreign players and they have mature senior Indian players. It will not be an easy game,"
The Aizwal FC coach believes that the debutant team has quality players, including experienced Indians, and that for him it does not matter that coach Robin Charles Raja is new to this level.
He Elucidated: "They have Debabrata (Roy), (Dhanpal) Ganesh, Denson (Devadas) etc – A lot of experience. It doesn’t matter that the coach is in his first season in the I-League. We will have a tough game.
For that match, Chennai City FC coach Robin Charles Raja may be without his captain, who is in doubt for the game.
But the Chennai City FC coach is hopeful that his boys can do well against the Aizwal FC.
The Chennai City FC coach added: "The boys are gelling well after playing together. We played a good game last time. We are worried about the results. Hopefully, we get the right result on Saturday,"
"We will definitely do well. But goals are a problem and we need them. But we are getting better with each game,"
The invention relates to a method for determining the dynamic behaviour of a motion control system, a method for determining characteristic numbers of a motion control system, and a test arrangement for automatic operation of a real model of a motion control system.
Motion control systems typically comprise a larger number of controlled motors, drives or generators. Examples can be found in the robotic area, in industrial chemical plants or in a roller or steel mill or such. Robots, for example, might have 5, 6 or 7 degrees of freedom in movement, wherein each degree of freedom typically requires its own drive. During the design and planning phase for such industrial installations it is desirable to predict the dynamic control behaviour of the motors or drives under certain circumstances, which can vary within a wide band in such installations. Prediction of the dynamic control behaviour enables an optimal design of the whole system in advance.
Complex motion control systems can exhibit strong nonlinear behavior which cannot be modeled and analyzed easily with a mathematical model. Examples are nonlinear friction effects, motor/drive saturation and quantization effects caused by sensors. These effects can deteriorate but also improve motion behavior. For example, presence of friction can enable higher feedback gain which results in faster step responses. On the other hand, large quantization effects may lead to limit cycles. Also the presence of a controller might cause strong non-linear effects.
A disadvantage of the state of the art is that analysis of such systems requires detailed modeling of the physical effects, which can be very laborious and time consuming. Furthermore, it is not always clear which effects can be neglected and which have significant influence. Solving the equations which try to describe such a system can also be difficult, since solver algorithms might run into problems at discontinuities.
Based on this state of the art, it is the objective of the invention to provide a method and means which enable a simplified but accurate determination of the dynamic behaviour of a motion control system.
This problem is solved by a method for determining the dynamic behaviour of a motion control system with at least a motion device and a belonging controller, characterized by the following steps:
● build up a real model of the motion control system with at least a real motion device and a real controller,
● determining characteristic numbers of the model of the motion control system by use of a formal method of similitude or dimensional analysis,
● selecting a motion control system with belonging controller whose dynamic behaviour has to be determined and whose characteristic numbers are the same,
● applying a respective value set of operational parameters to the model of the motion control system and operating it therewith,
● measuring at least one respective physical system parameter describing at least in part the dynamic behaviour of the model of the motion control system,
● applying a conversion to the at least one value of the at least one measured physical system parameter by use of the characteristic numbers, so that the at least one converted value indicates the corresponding dynamic behaviour of the motion control system.
The basic idea of the invention consists in building up a downscaled real (not mathematical) model of a motion control system which is similar to the original system and easy to handle. Measurements within this real motion control system incorporate the full real-world complexity, including non-linear effects, since it is based on real hardware. A basically similar proceeding is used successfully, for example, for the analysis of fluid dynamics.
The model of the motion control system has to have the same characteristic numbers as the actual motion control system whose behaviour shall be determined. Characteristic numbers are comparable with the eigenvalues of a matrix, for example, and enable the transfer of physical parameters between two differently scaled motion control systems with the same characteristic numbers.
Examples of such characteristic numbers are the Reynolds, Prandtl or Nusselt number, and they allow transferring results between different systems. The proceeding of how to determine such characteristic numbers is known from formal methods of similitude theory or dimensional analysis, for example, and is not explained in detail here. As long as the characteristic numbers are the same, the different systems will behave similarly even though they might be of different scale. In this case, measurements or simulations from one system can easily be transferred to other systems in order to allow investigations of systems that can't be modeled and simulated accurately enough.
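As a toy illustration of such a transfer (not part of the patent text — the choice of the Reynolds number and all values are merely exemplary), keeping a dimensionless number equal on both scales fixes how a measured quantity converts between a downscaled model and the full-scale system:

```python
# Illustrative sketch: transferring a measured value between a downscaled
# model and a full-scale system which share the same dimensionless
# characteristic number. All numbers are made up.

def reynolds(velocity, length, kinematic_viscosity):
    """Reynolds number Re = v * L / nu."""
    return velocity * length / kinematic_viscosity

nu = 1.0e-6                  # kinematic viscosity in m^2/s (water, roughly)
v_model, L_model = 2.0, 0.1  # velocity measured on the downscaled model
L_full = 1.0                 # characteristic length of the full-scale system

# Equal Reynolds number on both scales fixes the full-scale velocity:
v_full = reynolds(v_model, L_model, nu) * nu / L_full

print(round(v_full, 6))      # 0.2 (m/s)
```

The same pattern applies to any characteristic number: measure on the model, then invert the defining relation for the unknown full-scale quantity.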
Using such a method for determining the behaviour of a motion control system provides, in an advantageous way, high accuracy on the one hand while, on the other hand, avoiding the building of complex simulation models for all non-linear effects.
The problem of the invention is also solved by a method for determining characteristic numbers of a motion control system with at least a motion device and a belonging controller. This is characterized by the following steps:
● building up a real model of a motion control system with at least a real motion device and a real controller,
● determining physical operational parameters for the model of the motion control system,
● determining a number of value sets for the operational parameters,
● determining a desired control behaviour of the controller,
● successively applying the respective value sets of operational parameters to the model of the motion control system and operating it therewith,
● determining the control behaviour of the real controller during operation and comparing it with the desired control behaviour,
● in case the difference exceeds a given limit, varying the controller parameters and applying the same value set of operational parameters again until the real controller behaviour corresponds to the desired control behaviour within the given limit,
● measuring and storing values of respective physical system parameters together with the belonging set of values of the currently applied operational parameters into a measurement database,
● proceeding with the next value set of operational parameters,
● determining the characteristic numbers of the model of the motion control system based on the data in the measurement database by use of a formal method of similitude theory or dimensional analysis.
The objective of this aspect of the invention consists in providing an improved method of finding characteristic numbers for a motion control system. A controller is a component which might cause strong non-linear effects. Furthermore, the dynamic behaviour of a controller depends on several parameters which basically have to be considered when finding the characteristic numbers of a whole motion control system. Thus, especially the presence of a controller makes it difficult to find characteristic numbers.
As known from the methods of similitude theory or of dimensional analysis, independent operational parameters have to be determined if characteristic numbers of a system are to be found. The motion control system starts operation when values of those operational parameters are applied to it. In a simple case this could be a three-phase voltage of a certain value and frequency which is applied to a motor. Anyhow, in case of a system with a controller, the controller parameters are also potential influence parameters.
To reduce the number of potential influence parameters, it is foreseen to define a certain desired dynamic controller behaviour which is assumed to be in an "optimal" range. This could be, for example, an overshoot during the control process of at most 5% or 10%, respectively a damping D of roughly 1/sqrt(2). Hence the effective parameters of the controller, which might be a PID controller, in principle do not have to be taken into consideration as potential influence parameters. Moreover it is assumed that effective parameters are determinable for each comparable controller which make it work according to the desired control behaviour.
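For a standard second-order closed loop, the connection between the damping ratio and the peak overshoot is the textbook relation OS = exp(-πD/√(1-D²)). The following sketch (an illustration based on that standard formula, not part of the patent text) confirms that the D ≈ 1/√2 mentioned above indeed lands below a 5% overshoot band:

```python
import math

def overshoot(damping):
    """Peak overshoot (as a fraction) of an underdamped second-order loop:
    OS = exp(-pi * D / sqrt(1 - D**2)) for 0 < D < 1."""
    return math.exp(-math.pi * damping / math.sqrt(1.0 - damping ** 2))

D = 1.0 / math.sqrt(2.0)
print(round(100.0 * overshoot(D), 1))   # 4.3 -> within a 5 % overshoot band
```

Larger damping values give smaller overshoot but a slower response, which is exactly the trade-off discussed for the curves of Fig. 3 below.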
Thus a measurement database has to be provided where, on the one hand, the independent influence parameters are varied in a systematic way but where, on the other hand, only those measurements affiliated with the desired controller behaviour are included. By determining a number of value sets for the systematic variation of the independent operational parameters, the range of relevant possible combinations of the operational parameters is covered.
Those value sets are applied in a sequence to the model of a motion control system so that it is operated step-wise with the respective parameter values. The control behaviour of the real controller during operation is determined and compared with the desired control behaviour. In case of a difference between the determined controller behaviour and the desired controller behaviour, the effective controller parameters are varied and the model of a motion control system is operated again with the same value set of independent operational parameters, until the determined controller behaviour corresponds to the desired controller behaviour. Of course a certain tolerance band has to be foreseen for determining identity of controller behaviour, for example an acceptable deviation of +/- 5% of a reference value.
If effective controller parameters have been found which lead to the desired controller behaviour, the respective measured values of physical system parameters are stored together with the belonging set of values of the currently applied operational parameters into a measurement database.
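The tune-then-store loop described above could be sketched as follows. The real model of the motion control system is mocked by a toy function so that the sketch is self-contained and deterministic; all names (`operate_and_measure`, `tune_to_desired`) are hypothetical:

```python
# Skeleton of the measurement loop: for each value set, vary the effective
# controller parameter until the observed behaviour matches the desired one,
# then store the result in the measurement database.

def operate_and_measure(gain, value_set):
    """Mock plant run: returns the observed overshoot for a controller gain."""
    return 0.01 * gain                     # toy monotone stand-in

def tune_to_desired(value_set, desired=0.05, tol=0.005):
    """Vary the effective controller parameter until within tolerance."""
    gain = 1.0
    for _ in range(100):                   # safety bound on retries
        observed = operate_and_measure(gain, value_set)
        if abs(observed - desired) <= tol: # within the tolerance band
            return gain, observed
        gain += 0.5 if observed < desired else -0.5
    raise RuntimeError("no acceptable controller parameters found")

measurement_database = []
for value_set in ({"voltage": 230, "frequency": 50},
                  {"voltage": 230, "frequency": 60}):
    gain, observed = tune_to_desired(value_set)
    measurement_database.append(
        {"inputs": value_set, "gain": gain, "overshoot": observed})

print(len(measurement_database))           # 2
```

Every entry in the resulting database is, by construction, affiliated with the same desired controller behaviour.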
Finally a measurement database, respectively a catalogue of systematically varied measurements, is provided. All measurements within this catalogue are affiliated with the same desired controller behaviour. This catalogue enables determining the characteristic numbers of the model of the motion control system by use of a formal method of similitude theory or dimensional analysis, for example. Thus an improved way of finding characteristic numbers of a motion control system is provided, which advantageously reduces the number of independent operational parameters.
Of course it is also useful to determine more than one desired controller behaviour, for example a first one with an overshoot of 5% and a second one with an overshoot of 10%. In this case a belonging measurement catalogue would be generated for each of the desired controller behaviours. In an advantageous way this gives a further possibility of finding characteristic numbers of the model of a motion control system in case there is no success for one of the desired control behaviours of the controller.
In a preferred form of the methods according to the invention, the at least one motion device is an electrical engine or generator. Such electrical devices are very common, and their electrical operational parameters are describable in a good way. Furthermore their behaviour transfers rather well between two similar devices of different size. Hence a motion system with an electrical drive is very suitable for being characterized with characteristic numbers.
In a further variant of the methods according to the invention, the characteristic numbers of the model of the motion control system are determined by use of the Pi-theorem method (Buckingham Pi theorem). It has been found that this method is suitable in a particular way for finding characteristic numbers of a motion control system.
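As a minimal illustration of the Pi-theorem machinery (the pendulum is a classic textbook example, not taken from the patent): a product of system variables is dimensionless exactly when its vector of exponents lies in the nullspace of the dimensional matrix:

```python
# Variables [T, L, g] (period, length, gravity) with base dimensions
# length and time. A product T**e0 * L**e1 * g**e2 is dimensionless
# exactly when M @ e = 0.
M = [
    [0, 1,  1],   # length exponents of T, L, g
    [1, 0, -2],   # time   exponents of T, L, g
]

def dimension(exponents):
    """Resulting [length, time] exponents of the product of variables."""
    return [sum(m * e for m, e in zip(row, exponents)) for row in M]

candidate = [1, -0.5, 0.5]    # Pi = T * sqrt(g / L)
print(dimension(candidate))   # [0.0, 0.0] -> dimensionless group found
```

In practice the nullspace is computed systematically (e.g. by Gaussian elimination over the dimensional matrix of all measured parameters), yielding one characteristic number per basis vector.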
In a further variant of the invention, the characteristic numbers comprise the parameters eigenfrequency and damping. Those parameters are very suitable to describe major characteristics of a system with rotating parts such as a motor, engine or drive.
The problem of the invention is also solved by a test arrangement for automatic operation of a real model of a motion control system, comprising
● a real model of a motion control system with at least a real motion device and a real controller, which is foreseen to be operated by applying values of a set of physical operational parameters thereon,
● a computing unit which is foreseen to initialize and supervise operation of the real model of a motion control system,
● wherein means are foreseen to determine the control behaviour of the real controller during operation and compare it with a desired control behaviour,
● and wherein the test arrangement is foreseen - in case a deviation of the behaviour exceeds a given limit - for automatic variation of the parameters of the real controller and to trigger operation of the real model of a motion control system again by applying values of the same set of physical operational parameters thereon, until the real controller behaviour corresponds to the desired control behaviour within the given limit.
As described above the determination of characteristic numbers of a model of a motion control system is simplified in an advantageous way by providing a catalogue with measurement data affiliated to certain controller behaviour.
Thus this aspect of the invention is directed to a test arrangement which is foreseen to automatically perform several measurement trials with different value sets of operational parameters, preferably in a loop, so that such a catalogue is generated with a minimum of manual effort. The core components of the test arrangement are, on the one hand, a real model of a motion control system and, on the other hand, a computing unit which preferably initiates the different sequences of a loop test. The adaptation of the effective controller parameters is preferably also done by use of the computing unit, which comprises a belonging interface in this case. The computing unit as such might be an industrial PC, for example, which is foreseen to control the automatic overall process of a loop test, to determine a possible deviation of the control behaviour of the controller compared to the desired controller behaviour, and to adapt the parameters of the controller accordingly.
To get values of different operational parameters applied to the motion control system, preferably an electrical frequency inverter / drive is foreseen. In case of an electrical motor or engine, this can be a power-electronic based converter which is, for example, suitable for generating a three-phase voltage or current with a - preferably variable - frequency.
Thus, in a variant of the invention, the test arrangement is foreseen for the automatic variation of the effective controller parameters. This can be done by trial and error as well as by a systematic approximation towards those controller parameters which cause the desired controller behaviour. In a further variant of the invention the test arrangement is also foreseen for a loop test. Preferably the computing unit comprises storage media for storing the measured results into a database.
As for the methods described above, the at least one motion device is preferably an electrical engine or generator. The electrical operational parameters of those devices are describable in a good way and their behaviour transfers rather well between two similar devices of different size. Hence a motion system with an electrical drive is very suitable for being characterized with characteristic numbers.
Further advantageous embodiments of the invention are mentioned in the dependent claims.
The invention will now be further explained by means of an exemplary embodiment and with reference to the accompanying drawings, in which:
Figure 1 shows an exemplary test arrangement for automatic operation of a real model of a motion control system,
Figure 2 shows an exemplary table of analogy between a model of a motion control system and a motion control system, and
Figure 3 shows examples for dynamic controller behaviour.
Fig. 1
shows an exemplary test arrangement 10 for automatic operation of a real model of a motion control system. A motion device 12, for example an electrical drive for a not shown paper mill, is characterized by frame data, for example such as a rated reference power P_REF, a reference torque Θ_REF, its damping or its friction. The motion device 12 is electrically connected to and driven by a booster 26, in this case a power-electronic based three-phase converter. The booster 26 is connected with a controller 24 by use of bi-directional communication lines 28. This enables the controller 24 to transmit desired reference values to the booster 26, for example a desired voltage and a desired frequency. On the other hand the booster 26 is also foreseen to supply internal control values back to the controller to provide feedback data.
The controller also gets feedback data from an exemplary measurement device 16 for the actual angle of revolution respectively the angular frequency ω(t), and/or a measurement device 18 for the actual torque M(t). Furthermore the controller 24 gets feedback from an exemplary measurement device 20 for the electrical power p(t), whereas this parameter is basically derived from the voltages U_1,2,3(t) and currents I_1,2,3(t) of the three electrical phases. All those feedback values are provided over data lines 22 from the measurement devices 16, 18, 20 to the controller 24 and are subject to influence the control behaviour of the controller 24. Of course also other suitable system parameters might be determined by respective measurement devices and provided to the controller 24.
The controller 24 is in addition connected to a computing unit 32 by use of bidirectional communication lines 30. This enables the computing unit 32 on the one hand to start operation of the motion control system - comprising the motion device 12, the controller 24 and the booster 26 - by applying respective values of operational parameters to the controller 24. Thus it is possible to initiate different measurement trials, which are coordinated by the computing unit 32. A belonging executable control software program is allocated within the storage of the computing unit 32.
The computing unit 32 is foreseen to receive feedback data from the controller 24. These data are on the one hand values which are required to operate the motion control system in a loop, for example a "system is ready", "system is running" or "system has fault" status or such. On the other hand the computing unit is also foreseen for receiving measured feedback data, either at least some of the directly determined feedback values or secondary values derived therefrom. Also values describing the dynamic behaviour of the controller 24 during operation are provided to the computing unit 32, directly or indirectly, for example the height of an overshoot or such. Thus it is possible to compare the dynamic behaviour of the controller 24 with a desired behaviour, which is preferably provided within the storage media of the computing unit 32. A user interface 36 is foreseen which enables a user, for example, to enter a desired dynamic behaviour of the controller 24 into the computing system 32.
Furthermore the computing unit 32 is also foreseen to modify the effective control parameters of the controller 24 and make them effective, so that they can be varied in case the determined controller behaviour does not correspond to the desired controller behaviour. If the respective controller behaviour finally corresponds to the desired one, the belonging system and operational parameters are stored into a measurement database 34, which is preferably allocated within storing means of the computing unit 32.
Fig. 2
shows an exemplary table of analogy 40 between a model 60 of a motion control system and a motion control system 62. Both systems 60, 62 are similar and have comparable characteristics but differ, for example, in their size, for instance by a factor of 20 ... 1000 concerning the respective rated power. Thus the model 60 of a motion control system comprises in principle the same components, namely a motion device 48, a booster 46 and a controller 44, as the motion control system 62 with its basic components motion device 58, booster 56 and controller 54.
The motion devices 48, 58 differ in their size, as do the boosters 46, 56, whereas the controllers 44, 54 are in principle the same or even identical. Both motion systems 60, 62 have comparable physical system parameters 42 respectively 52. These are for example a current i(t), a voltage u(t), a power p(t), an angular frequency ω(t) or such. Those physical system parameters also differ in their values, even though they are in principle comparable. By using characteristic numbers 50, which are the same for both systems 60, 62 and which are comparable to eigenvalues of the system, a conversion between both sets of physical system parameters 42, 52 can be applied in related cases. Thus it is possible to perform a measurement of the dynamic behaviour in the model of a motion control system 60 and to transfer the respective results to the "real" motion control system 62.
Fig. 3
shows examples 70 for dynamic controller behaviour. A controller in a closed-loop control is characterized by a command variable, which corresponds to its input, and by an actuating variable, which corresponds to its output. If the input variable changes, the output variable also changes towards a new target output 72 in a dynamic process. This dynamic process of changing the output variable of the controller describes the dynamic controller behaviour.
Dependent on the characteristics of the effective controller parameters, the dynamic process of changing the output variable might be slower, as shown in the curve with reference number 76. Here a higher damping factor has been chosen so that the controller behaviour is on the one hand rather slow, but on the other hand there is no overshoot. The curve with the reference number 80 shows another variant of controller behaviour which is faster on the one hand but has a rather high overshoot on the other.
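The qualitative difference between such curves can be reproduced with a small simulation of a second-order closed loop (an illustrative stand-in model, not taken from the patent): strong damping removes the overshoot at the cost of speed, while weak damping produces a large overshoot:

```python
def step_overshoot(damping, omega=1.0, dt=1e-3, t_end=30.0):
    """Simulate x'' = omega**2 * (1 - x) - 2*damping*omega*x' from rest
    (unit step on the command variable) and return the peak above 1.0."""
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):       # semi-implicit Euler steps
        a = omega ** 2 * (1.0 - x) - 2.0 * damping * omega * v
        v += a * dt
        x += v * dt
        peak = max(peak, x)
    return max(0.0, peak - 1.0)

print(round(step_overshoot(0.2), 2))   # near the analytic value of about 0.53
print(round(step_overshoot(1.0), 2))   # 0.0 -> critically damped, no overshoot
```

The intermediate damping values correspond to curves that are both reasonably fast and only mildly overshooting.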
The curve with the reference number 78 shows a behaviour in between the behaviour of the curves 76 and 80. It is faster on the one hand but has only a rather small overshoot on the other. Such a controller behaviour can be considered to be the best choice for most applications. An easy way of describing the dynamic behaviour of a controller consists in determining the overshoot. For example, all controller behaviour having an overshoot within a given band 82 might be considered to be acceptable.

<u>List of reference signs</u>
10 exemplary test arrangement for automatic operation of a real model of a motion control system
12 exemplary motion device
14 driven shaft
16 measuring device for angle of revolution
18 measuring device for torque
20 measuring device for power
22 data lines
24 exemplary controller
26 exemplary booster
28 communication lines
30 communication lines
32 computing unit
34 catalogue database
36 user interface
40 table of analogy between model of motion control system and motion control system
42 exemplary physical parameters of model of motion control system
44 controller of model of motion control system
46 booster of model of motion control system
48 motion device of model of motion control system
50 exemplary set of characteristic numbers
52 exemplary physical parameters of motion control system
54 controller of motion control system
56 booster of motion control system
58 motion device of motion control system
60 exemplary model of a motion control system
62 exemplary motion control system
70 examples for dynamic controller behaviour
72 exemplary target output of controller
74 maximum acceptable output of controller
76 first example for controller behaviour
78 second example for controller behaviour
80 third example for controller behaviour
82 given band for overshoot
We are looking for an experienced Production Manager (only males) to organize and oversee the manufacturing of goods. The candidate will ultimately be responsible for the smooth running of all production lines and the quality of output.
We expect the employee to have deep know-how in production procedures. The ability to direct personnel towards maximum performance will set him apart as a leader. Decision-making and problem-solving will take up a great part of the role. If you are up to it, we'd like to talk to you.
Responsibilities
- Liaise with other managers to formulate objectives and understand requirements
- Estimate costs and prepare budgets
- Organize workflow to meet specifications and deadlines
- Monitor production to resolve issues
- Supervise and evaluate performance of production personnel (quality inspectors, workers etc.)
- Determine amount of necessary resources (workforce, raw materials etc.)
- Approve maintenance work, purchasing of equipment etc.
- Ensure output meets quality standards
- Enforce health and safety precautions
- Report to upper management
Salary: 35K to 40K
Experience: (8-10) years
Location: Kasba, Kolkata
Interested people may contact or apply.
Regards,
Payel (HR)
9875476754
Mail id: [email protected]
- Company Name: Square 1 Consulting Services Hiring For Export Company
- Company Description:
Fashion Design is concerned with the design of clothing. Fashion Designers consider the shape, cut, silhouette and construction of clothing and tend to think more three dimensionally when designing. Fashion Design at the GSA aims to create highly specialized subject experts in an ‘expert amongst experts’ environment which values the interactive, synergetic and ever evolving nature of the specialism.
Our Fashion graduates have clear and individual creative identities. They are able to position themselves and their ideas with knowledgeable authority in the fields of not only textiles and fashion but also a range of other industries such as interior design, the automotive industry and retail.
After Degree Show in Glasgow, Fashion Design graduates will be taking their collections to Graduate Fashion Week at Old Truman Brewery in London 3 - 6 June - more details here.
Hand sewn & embroidered Otomi stocking.
The Otomi culture is one of the older complex Mesoamerican cultures, and lived peacefully near the Olmecs until the Nahua arrived around 1000 BCE. Otomi textiles, also known as tenangos, take weeks or even years to embroider, and are said to be based on cliff paintings in the Tepehua-Otomi mountains in the area. Some say that they may also be inspired by cave paintings in the Mexican Plateau area.
Due to the individual nature of each hand sewn piece, patterns and colors vary.
Dimensions: ~18" x 6"
The invention regards a method and vehicle for assisting an operator of an ego-vehicle in controlling the ego-vehicle by determining a future behavior and an associated trajectory for the ego-vehicle to be executed. At first, a situation currently encountered by the ego-vehicle is determined, the current situation comprising the ego-vehicle and at least one other vehicle. Then, probabilities of future behaviors of the at least one other vehicle are computed based on the current situation for predicting future behaviors of the at least one other vehicle. Additionally, potential future behaviors of the ego-vehicle are determined, and probabilities of a plurality of future situations possibly evolving from the current situation are computed based on combinations of the predicted future behaviors of the at least one other vehicle and the potential future behaviors of the ego-vehicle. Then, trajectories for associated behaviors are optimized for the ego-vehicle for at least some of these possible future situations, and a trajectory is selected based at least on future situation probability. Since each trajectory is associated with one potential future behavior of the ego-vehicle, this selection of a trajectory also means a selection of a particular behavior. Finally, a control signal is generated to output information to the driver about the selected trajectory and/or to control actuators of the ego-vehicle so that the ego-vehicle follows the selected trajectory.
Former HKMA chief Joseph Yam discusses the implications of US-China relations on Hong Kong as an international finance centre, the removal of the territory’s special status and describes the targeting of foreign individuals and entities as weaponisation of finance.
- The US dollar’s dominance as the world’s reserve currency is under threat
- Global financial intermediation will continue despite worsening US-China relations
- The internationalisation of RMB and financing the Greater Bay Area (GBA) are vital to remain a global financial centre
United States (US) President Donald Trump floating the possibility of imposing sanctions on Hong Kong may have dire consequences for the US, warned Joseph Yam, the former chief executive of the Hong Kong Monetary Authority (HKMA).
Yam spoke and gave a media interview at an online forum organised by the Our Hong Kong Foundation, a think tank established in 2014 by former chief executive Tung Chee Hwa.
Yam believes any penalties may hurt Hong Kong but would likely impact America more if it were to use the US dollar as a weapon.
“If you weaponise, you restrict the use of the currency, restrict investors from investing in the US, or restricting American investors from investing in China, forbid Chinese firms from conducting fundraising in Hong Kong,” Yam said.
The US being the largest economy in the world and the dominance of the US dollar in the international monetary system means it has a powerful arsenal at its disposal.
Restricting the trading of the dollar is an option the US could use against Hong Kong, a move Yam refers to as the ‘nuclear option,’ considering the grave damage it could unleash on the city’s financial standing.
While this alternative may lead to another global financial crisis, Yam also explained why it is highly unlikely to happen.
US dollar under threat
Yam believes such a move would put pressure on the dollar and the US' ability to repay its liabilities and potentially result in a default on its debts.
To drive his point, Yam presented a chart on the net international investment position (NIIP), overseas assets and liabilities, of a number of countries.
The NIIP is an important barometer of a nation’s financial condition and creditworthiness. A nation with a positive NIIP is a creditor nation, while a nation with a negative NIIP is a debtor nation. “The US is way at the bottom as a debtor country. It has borrowed money on a net basis amounting to $12 trillion. By far the US is the largest debtor in the world,” Yam pointed out.
He also showed a list of unstable monetary policies that the US has implemented over the years and referred to it as ‘cardinal sins’ in running an economy.
Yam then echoed American economist Stephen Roach, who said, “The era of the US dollar’s exorbitant privilege as the world’s primary reserve currency is coming to an end.”
Any action to undermine the Linked Exchange Rate System (LERS) of Hong Kong will backfire according to Yam. “That’s a two-edged sword, and I hope clever politicians like Trump will opt out of that alternative,” he added.
Global financial intermediation to continue
Yam expressed concern over US policy to end any preferential economic treatment for Hong Kong. Trump signed into law the Hong Kong Autonomy Act early this month and announced the end to special privileges in response to a new security law imposed by China in the semi-autonomous city.
Under the act, Yam pointed out that some people or companies could be targeted and banned for trading and doing business in the US.
While this could definitely undermine specific individuals and businesses, he said it should not make a dent in the city’s capital markets in general. Yam believes Hong Kong’s financial system is strong enough to withstand the impact of a partisan trade and investment prohibition. “There has been capital inflows into Hong Kong amounting to HKD120 billion ($15.48 billion). That actually is very clear. Our financial system and monetary system remain very robust,” Yam said.
In May, China’s legislature passed a controversial proposal that enforced a sweeping national security law in Hong Kong, a move many critics, including the US, believe could curtail political freedom and civil rights, and undermine autonomy. Yam thinks otherwise and stressed the law is necessary to restore order following months of political unrest, and is key to maintaining Hong Kong’s viability as an international financial centre. “The demand for financial intermediation will continue to grow because China is the second largest economy in the world. If its financial needs in terms of investment and fundraising will grow, the rest of the world will have to deal with China,” Yam added.
RMB internationalisation and financing the Greater Bay Area
To foster the city’s unique advantages, Yam highly recommends accelerating the internationalisation of the renminbi (RMB) and further development of the Guangdong-Hong Kong-Macao Greater Bay Area (GBA). Promoting the wider use of the RMB in the capital markets has its advantages according to Yam - this will provide international investors and fundraisers an additional channel for managing currency risk and reduce the amount of capital flows thereby enhancing currency stability. Improving the GBA meantime will lead to greater capital mobility, currency convertibility, and connectivity of financial infrastructures, according to Yam. Another plus point for Hong Kong is its special relations with China’s huge economy that may well serve as the territory’s silver lining to maintain its position as an international financial centre. "Hong Kong is still the preferred place because after all we are still the biggest and most efficient offshore RMB market and centre for fundraising of Chinese (mainland) enterprises," Yam concluded.
The evolution of flow microscale reaction technology has led to a wide range of process intensification developments in unit operations used for chemical processing in specialty chemicals, pharmaceuticals and renewables. The key next step is the integration of these unit operations into end-to-end optimized continuous processes. The focus of this year’s meeting is on advanced process modeling technologies that will facilitate the efficient integration of unit operations for long-term reliable production.
The meeting will begin with several perspectives on the current state of process modeling to build understanding of how it is effectively being developed and applied to ensure long-term optimized continuous production. The second part of the meeting will focus on flow technology advances in all phases of processing, including advances in the use of homogeneous catalysis, separations and purifications, and new sections on the rapidly developing areas of continuous fermentation and the control of the physical properties of solids. Advances in the use of process analytics to ensure optimum performance in these developments will be discussed and should offer the way to effectively integrate these advances and tie them into the new process models.
For renewable materials the goal is to enable more efficient production processes that facilitate the conversion to a ‘green economy’. For traditional materials the goal is to make them as efficiently as possible to minimize waste and improve quality. Thus the meeting format is designed to facilitate discussions among the multifunctional experts presenting in the various areas while offering the potential of those new to a field to be exposed to new developments. This has catalyzed international collaborations in these important areas.
CPAC has an established track record in fostering academic and industrial interactions - to bridge the gap between basic research and full-scale process / product development. The official language for the workshop will be English. The registration fee will be $650 USD (550 euro). For more information please see the CPAC web site http://cpac.apl.washington.edu/event/CPAC+Rome+Workshop+2017 or http://mkcontrol.com/rome-workshop-2017.html or contact Mel Koch ([email protected]) or Nan Holmes ([email protected]).
Los Angeles (AFP)
The adventures of Chinese warrior “Mulan”, Disney’s much-anticipated blockbuster, will ultimately not be shown in theaters, as tradition dictates, but will instead debut on streaming in September, the group said on Tuesday.
The release of the live-action version of the famous cartoon had already been postponed three times, with US cinemas closed due to the spread of the new coronavirus in the country.
Disney CEO Bob Chapek described the move as “unique.”
Before the pandemic, film studios traditionally waited 90 days to release their films on online platforms after their theatrical release.
The film will still be released in theaters in some countries where the streaming service is not yet offered, such as China.
The production cost about $200 million.
“Social priming” has recently been one of the most controversial topics in psychological science. With failures to replicate proliferating, the field has been called a train-wreck. But what exactly is it?
The term “social priming” refers to the idea that subtle cues can exert large, unconscious influences on human behaviour. The classic example of a social priming effect was the “professor priming” study, in which volunteers who completed a task requiring them to describe a typical professor subsequently performed better on a general knowledge task. In other words, as the authors put it, “priming a stereotype or trait leads to complex overt behavior in line with this activated stereotype or trait.”
Now, in a new preprint, Andrew Rivers and Jeff Sherman criticize me, and others, for using the term ‘social priming’ so as to exclude less controversial priming effects from the definition. Rivers and Sherman make several other points in their piece, but in this post I’m going to focus on the issue of definition.
Discussing my 2016 post, in which I argued that a just-published study by Payne et al. did not provide evidence of social priming, because the method was (I thought) very different from a typical social priming experiment, Rivers and Sherman say the following:
Indeed, there are many differences between the gambling paradigm developed by Payne and colleagues (2016) and more frequently discussed paradigms such as Bargh et al. (1996). However we are unable to determine why the label ‘social priming’ applies to one type or another… To this point, no one has provided clear guidance as to what kinds of effects should ‘count’ as ‘social priming’.
The term appears to originate with Smith (1984), describing an earlier study: Wyer and Srull primed participants (using a sentence construction task) with concepts such as ‘hostility’ or ‘kindness’ before asking participants to read a vignette about a man (‘Donald’) and then to describe his actions. ‘Hostility’ primes led people to view ‘Donald’ as being more hostile and generally less sympathetic, while kindness priming had the reverse effect.
Smith called Wyer and Srull’s paradigm “social priming” because it concerned social judgment (or perception) – evaluating another person, ‘Donald’. “Social priming” was a good description in this context, but ironically, Wyer and Srull (1981) would not be considered a paradigmatic example of “social priming” today, because it didn’t show that priming affected the participants’ own behaviour.
Following Smith (1984), the first classic “social priming” study in the modern sense appeared in 1996: Bargh, Chen and Burrows, the infamous ‘elderly priming’ study. Bargh et al. used methods similar to Wyer & Srull’s, but with a key difference: they reported that priming a social concept caused participants to enact it, e.g. priming rudeness made people act rudely to an experimenter. Yet Bargh et al. didn’t mention “social priming”, instead referring to its findings as showing “automatic behavior priming” or “stereotype assimilation”.
As far as I can see, it was not until 2005 that the term “social priming” was applied to the kind of studies that have lately become punchbags. John Bargh used “social priming” in this way in his 2005 book, and also in 2005, other authors used it in the same way. By 2012, a review article on social priming effects made no mention of Wyer and Srull at all.
The final stage in the development of “social priming” was when the term expanded to include not only stereotype-based effects, such as in Bargh’s work, but to include priming by mere images or reminders of concepts such as money and sex/romance. I suspect this first occurred outside the scholarly literature, but it had happened by 2015 when a Nature news piece defined “social priming” as the idea that “certain behaviours are claimed to be modified unconsciously by previous exposure to stimuli, such as an American flag, or thinking about money”.
*
So what can we take from this? I think it is clear that “social priming”, as the term is currently used, is a misnomer. There is nothing especially social about priming with money and becoming greedy, and very little that is social about priming with an elderly stereotype and then walking slower.
But what other term can we use? Bargh et al.’s “automatic behavior priming” is quite nice, but taken literally, it could apply to all kinds of priming, even classical semantic priming effects (pushing a button is a behaviour, and the prime is automatic).
In my view, the essence of what we call “social priming” is that the prime does not prime people to give a particular response (as in semantic priming) but rather, the prime is said to change the participants’ behaviour in a more global sense. Money priming, for instance, is claimed not just to make people want money, but (amongst other things) to make them “assert more strongly that victims deserve their fate”. It’s an abstract, global change in mental or behavioural state which could manifest in many ways.
Therefore, perhaps the most appropriate term for this kind of work would be global state priming?
About Neuroskeptic
Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.
The present invention relates to rigid telescopically arranged multi-cavity dispensing containers for flowable material, such as tooth paste and the like, from which it is desired to dispense simultaneously two or more reactive substances which require separate storage until time of use. More particularly it relates to a disposable refill cartridge for use in a dispensing container of the above type which can be used in conjunction with a reusable base.
There exists a desire to provide sodium bicarbonate and peroxide gel as components of toothpaste. Sodium bicarbonate is a well known and commonly used abrasive and cleaner. Peroxide gel is regarded as a beneficial ingredient to help promote healthy gums. These components are reactive when mixed, and therefore must be maintained separately until time of use.
U.S. Pat. Nos. 5,020,694 to Pettengill and 5,038,963 to Pettengill and Gentile, which are hereby incorporated by reference, disclose rigid piston type multi-cavity dispensing containers for the simultaneous coextrusion in predetermined proportions of two or more materials which may have different rheologies. The lower body members have a base and two or more pistons attached thereto. The upper body members have parallel cylinders for containing the flowable materials, and outlet means for dispensing the materials. The lower ends of the cylinders receive the pistons whereby the relative compression of upper and lower body members forces the flowable materials out through the outlet means. This produces a single, banded unmixed stream of material that can be neatly and easily applied onto the narrow width of a toothbrush. The upper and lower members cannot be sold as separate independent units, because the piston heads which are attached to the lower member are needed to seal the flowable material within the upper member. In addition, ridges formed on both the upper and lower body members prevent them from being pulled apart. Thus both upper and lower body members must be discarded after the contents of the dispenser are used up. This produces unnecessary waste and is not the most economical of arrangements.
Thus it is an object of this invention, for ecological and economical reasons, to provide a multi-cavity dispensing refill cartridge which can be used in conjunction with a reusable base for the simultaneous coextrusion, in predetermined proportions, of two or more flowable materials, which may have different rheologies, upon the relative compression of the refill cartridge and the reusable base, to produce a single, banded unmixed stream of material that can neatly and easily be applied onto the narrow width of a toothbrush. It is a further object to provide such a refill cartridge which, in conjunction with a reusable base, dispenses a single stream of unmixed material and which provides segregation of the component materials within the dispenser both prior to and after dispensing.
Thus, according to a first aspect of the invention, there is provided a multi-cavity dispensing refill cartridge, for use with a reusable base unit, for the coextrusion of at least two flowable materials, comprising a dispensing cartridge comprising at least two hollow and separate parallel cylinders, each cylinder for containing one of the flowable materials, the cylinders having a first generally closed end and a second end telescopically and slidingly accommodating at least two parallel piston heads which conform to ride sealingly along the interior walls of the cylinders so as to force the flowable materials to flow toward the first end of the cylinder upon relative compression of the cylinders and piston heads, the piston heads being compressably engageable with piston rods of a reusable base unit, the cylinders having outlet channels at the closed end, the refill cartridge further comprising means for selectively engaging a reusable base unit and an outlet means in fluid communication with the outlet channels, the outlet means including adjacent outlet openings unconnected to each other and having means for causing the flowable materials to flow toward each other at the outlet openings to form a single banded, unmixed stream of the materials outside of the outlet means.
A first catching ridge formed on the refill cartridge may engage another catching ridge on the reusable base to prevent the unintended separation of the refill cartridge from the reusable base. The first catching ridge may be located on a flexible portion of the refill cartridge so that the two catching ridges may be pressed apart to allow separation of the refill cartridge from the reusable base.
The cylinders may be incorporated in a shroud designed to conform with a shroud of the reusable base. In addition, the conforming shrouds may include corresponding longitudinal axial grooves and ridges for linearly guiding axial movement between the refill cartridge and the reusable base. In a preferred arrangement, the shroud of the refill cartridge fits within the shroud of the reusable base and the catching ridge of the refill cartridge is located on a longitudinal axial ridge of the refill cartridge, and the catching ridge of the reusable base is located on a longitudinal groove of the reusable base. In addition, in this preferred arrangement, the catching ridge of the refill cartridge has adjacent slots so that the catching ridge can be pushed inward to allow the refill cartridge to be pulled apart from the reusable base.
Fig. 1 is an exploded view of a multi-cavity dispensing refill cartridge and a reusable base to be used in conjunction therewith.
Fig. 2 is a sectional view of the Fig. 1 upper shroud taken through the outlet channels.
Fig. 3 is a frontal cutaway view of the Fig. 1 refill cartridge.
Fig. 4 is a side view of a refill cartridge and reusable base having front and back guide means.
Fig. 5 is a frontal cutaway view of a refill cartridge mounted on a reusable base.
Figs. 6a and 6b are respectively a sectional side view and a bottom view of a piston head.
Fig. 7 is a perspective view of an outlet assembly.
Fig. 8 is a sectional view of an outlet assembly.
Fig. 9 is a frontal view of a nozzle with an opened, hinged cap.
Fig. 10 is a sectional view from the side of the Fig. 9 nozzle.
Fig. 11 is a sectional view from the side of a reusable lower body.
Fig. 12 is a perspective view of a refill cartridge with a window.
Fig. 13 is an alternate two piece outlet assembly.
Fig. 14 is a cross-sectional view of the Fig. 13 outlet assembly.
Fig. 15 is an enlarged interior end view of the nozzle member of the outlet of Fig. 13.
The invention will now be further described by way of example only, with reference to the accompanying drawings, in which:
An exploded view of a multi-cavity dispensing refill cartridge 1 and a reusable multifunction base 2, to be used in conjunction therewith, is shown in Fig. 1. The refill dispensing cartridge 1 has an upper shroud 3 which incorporates two hollow, separate, parallel cylinders which each contain one of two reactive flowable materials. Two cylindrical outlet channels 12 provide fluid communication between the cylinders and outlet assembly 5. Hinged cap 34 seals the outlet assembly 5. A sectional view of the Fig. 1 shroud 3, taken through the centre of the outlet channels 12, showing the two parallel cylinders 6, and their communication with the outlet channels 12, is shown in Fig. 2.
The multifunction base 2 to which the parallel cylinders 6 are attached allows the device to stand upright, and also provides leverage for the device when the cartridge 1 is pressed by a user, facilitating single handed usage and dispensing from the device. Multifunction base 2 additionally rigidly retains parallel piston rods 8 so as to provide for the smooth, equal and simultaneous movement of piston heads 4 into the cylinders 6 during operation.
Referring back to Fig. 1, the top end 41 of each cylinder is generally closed except for the outlet channels 12. The bottom end 42 of each cylinder is sealed by a piston head. Thus the flowable materials are completely sealed within the refill cartridge allowing it to be handled and sold as a unit separate and independent from the reusable base 2.
Referring to Fig. 3, which is a frontal cutaway view of a refill cartridge, it can be seen that each cylinder 6 telescopically and slidingly accommodates a piston head 4 which conforms to ride sealingly within the inner walls 7 of the cylinders 6. Each piston head 4 has a central portion 11 designed to be received by openings in piston rods of the reusable base.
Referring back to Fig. 1, it can be seen that the reusable base 2 has a pair of parallel piston rods 8, with openings 10 designed to surround the lower portion 11 of the piston heads 4. The openings 10 are dimensioned such that central portion 11 can easily slide into and out of them. This makes the piston heads 4, which are initially located in refill cartridge 1, compressably engageable with piston rods 8. In other words, when the refill cartridge 1 and the reusable base 2 are compressed together, edge 47 of the piston rods 8 abuts and presses upon the bottom portion 48 of piston heads 4 as shown in Fig. 5.
Referring to Figs. 6a and 6b, which are respectively a sectional view from the side and bottom view of a piston head 4, it can be seen that bottom portion 48 of the piston head 4 is the bottom edges of ribs 74 which extend between the cap portion 76 of piston head 4 and central portion 11. In addition to providing a surface for edges 47 of the piston rods 8 to press against, ribs 74 also contribute to the structural stiffness of the piston heads 4.
When the refill cartridge and the reusable base are pulled apart, central portions 11 of the piston heads 4 slide easily out of openings 10. Thus the piston heads 4 only engage the piston rods 8 when refill cartridge 1 and reusable base 2 are compressed.
Other compressably engageable arrangements are also possible. For example, the piston heads 4 could be provided with a flat bottom which could engage a flat top of the piston rods 8. The provision of lower portions 11 and openings 10, however, help to keep the piston heads properly oriented within the cylinders 6.
As shown in Fig. 1, reusable base 2 is dimensioned to telescopically receive refill cartridge 1. Specifically, upper shroud 3 is arranged to closely conform in sliding relation with lower shroud 9. The upper and lower shrouds 3 and 9 include means for guiding linear motion between the refill cartridge 1 and reusable base 2, shown as conforming longitudinal projecting ridges 51 and 52. These ridges are longitudinal, outward, rectangular extensions of the shrouds 3 and 9 having parallel side walls and flat facing surfaces. The projecting ridge 52 of the lower shroud 9, which forms an inner groove 55, is dimensioned to receive longitudinal ridge 51 of the upper shroud 3. When the two shrouds are assembled and compressed the longitudinal ridges 51 and 52 serve to guide the relative motion of the refill cartridge and the reusable base, preventing their relative rocking and providing smooth, equal, linear motion of the piston heads even where the materials in the two cylinders have different rheologies.
The shrouds 3 and 9 may be provided with longitudinal projecting ridges on both their front and back sides. These are shown as 51a, 51b, 52a, 52b in Fig. 4. In addition, front ridges 51a and 52a may have different widths than back ridges 51b and 52b. These then serve to orient the reusable base with respect to the refill cartridge. This is especially useful when the refill cartridge is to be used with a reusable base which has an extension such as extension 57 designed to prevent the forward tipping of the reusable base 2 and refill cartridge 1 when downward pressure is applied to the refill cartridge 1.
It is understood that the means for guiding linear motion between the refill cartridge and the reusable base may be of any acceptable shape and comprise a plurality of extensions, both inward and outward. In addition to providing guided relative motion of the shrouds, the extensions improve the mechanical rigidity of the shrouds.
Referring to Fig. 5, when refill cartridge 1 and the reusable base 2 are compressed, piston rods 8 simultaneously and equally push the piston heads 4 upwards, thereby forcing the flowable reactive materials upwards into the outlet channels 12 and through outlet assembly 5. Flowable material from each of the outlet channels 12 is received by outlet assembly 5, shown in Figs. 7 and 8, which provides a forward-facing dispensing nozzle. Outlet assembly 5 is fitted about outlet channels 12 and converges so as to end in an outlet passage 14. Outlet passage 14 has two passageways 15, each of which connects through one of the outlet channels 12 to one of the two cylinders 6. The outlet passage 14 of outlet assembly 5 is arranged to receive a separate nozzle 16, which together comprise the outlet means 17 as shown in Fig. 1.
The outlet channels 12 receive sleeves 18 of the outlet assembly 5 shown in Figs. 7 and 8. As the tube sleeves 18 converge, they form a common rigid barrier 19. The outlet passage 14 is bisected by a flat rigid septum 20 extending from the barrier 19, sitting fixedly within the inner walls of the outlet passage 14 and projecting therefrom. The septum 20 is tapered cross-sectionally and ends in a straight edge 22. The cross-section of the septum edge 22 is a sharp angle approximated by a very small radius. The sides of the septum are preferably textured, for example by vapour honing, to a dull finish to promote adherence of the products thereto, which together with the taper causes the product streams to converge into a single stream as they emerge from adjacent outlet openings 23 shown in Fig. 9.
The septum 20 acts to keep the two reactive materials separate as they emerge from the cylinders 6 and also prevents reaction and obstruction of the outlet means 17 by reaction products. The flowable materials converge as they flow through the outlet means 17 but the two streams do not meet until they have fully left the outlet means opening 23. The taper design of the septum 20 causes the two streams to gradually converge until they meet at the septum edge 22 beyond the end of the outlet means opening 23. At this point they smoothly touch and continue to flow onto the intended surface, e.g. toothbrush, as a single, substantially cylindrical, banded stream. This stream is convenient and easy to direct with accuracy upon a limited surface area.
The diameter of the emerging stream may be regulated according to the packaging specifications. For example, nozzle 16, shown in Figs. 1, 9 and 10, which snaps on around the outlet passage 14 by engaging ridge 38 may have an interior taper which reduces the effective outlet passage diameter as shown in Fig. 10. In such an embodiment, the length of the septum edge 22 may be reduced and the side edges of the septum conform to the converging inner shape 35 of the nozzle 16.
With reference to Fig. 9, nozzle 16 is provided with longitudinal grooves 37 along its converging inner wall for retaining the inward sloping sides of the septum 20 residing therein. Such an arrangement maintains the septum 20 within a rigid position within the outlet means 17 and prevents intermixing of the streams at contact points of the assembled septum 20 and outlet means 17. The septum 20 extends to a location preferably 0.005 to 0.010 inches beyond the outlet means opening 23.
The nozzle 16 preferably has a cap 34 connected thereto by a hinge 33. Cap 34 includes complementary engaging means comprising recesses 31a and 22a for receiving respectively nozzle rim 31 and septum edge 22 during closure, so that intermixing of the two substances is prevented once the cap is closed.
In an important aspect of the invention, the outlet means 17 is provided with one or more means for causing the outlet streams to flow toward each other and avoid the otherwise uncontrolled outlet flow which can result in the streams of the two or more materials flowing away from each other as they emerge from the outlet opening. The means may include a tapered septum 20 which divides the outlet, tapered peripheral walls on the outlet means as exemplified by nozzle 16, a differential surface resistance on the interior walls of the outlet means, such that greater surface resistance is provided on the interior surfaces which are adjacent to other outlet openings than on the peripheral interior surfaces of the outlet means, or any combination of these features. Thus the surfaces of the septum 20 may be provided with a dull finish, such as by vapour honing, while the interior peripheral surfaces of nozzle 16 remain smooth. As the materials flow over the surfaces there will be greater resistance to the flow over the septum, tending to cause the flow of materials to "curl" in the direction of the septum as they emerge from the outlet, whereby the two or more streams of material curl towards each other and converge into a single stream.
Alternatively, the interior peripheral surfaces of the outlet means can be treated, e.g. with a lubricant, such as polytetrafluoroethylene or silicone materials to reduce the surface friction of the interior peripheral surfaces as compared to the surface friction of the septum 20.
Referring once again to Fig. 1, it can be seen that means for selectively engaging reusable base 2, shown as horizontally extending catching ridge 61, protrudes outward from the longitudinal extending ridge 51 on the front side of upper shroud 3. This ridge is used to keep the upper shroud 3 from disengaging from the lower shroud 9 when the dispenser is lifted by the upper shroud 3. As shown in Fig. 11, a corresponding catching rib 62 projects inward from the inside groove 55 of longitudinal extending ridge 52a of lower shroud 9. The two catching ridges engage when the upper and lower shrouds are pulled apart so as to prevent their separation. Referring back to Fig. 1, it can be seen that two vertically oriented slots 65 may be formed on either side of catching ridge 61. This makes the area around catching ridge 61 relatively flexible such that when the area 64 above it is pressed upon by a thumb or finger, catching ridge 61 moves inward sufficiently so that it will not engage with catching ridge 62 when refill cartridge 1 is pulled apart from reusable base 2. This allows the refill cartridge to be removed from the reusable base when it has been emptied so that it may be replaced with a full cartridge.
As an alternative to the vertical slots 65, the area around the catching ridge 61 may be made sufficiently flexible by making it thinner than the rest of the upper shroud 3.
As an alternative to the catching ridge 61, upper shroud 3 may be provided with a window 68 as shown in Fig. 12. In this embodiment the lower edge 69 of the window serves as a means for selectively engaging the reusable base 2 via catching ridge 62. Once again, vertical slots 65 on either side of the window 68 make the area relatively flexible such that when area 64 is pressed upon by a thumb or finger, the lower edge 69 moves inward sufficiently so that it will not engage the catching ridge 62 when refill cartridge 1 is pulled apart from reusable base 2. The upper edge 70 of the window is bevelled so that it does not prevent the relative compression of the refill cartridge 1 and the reusable base 2.
A preferred embodiment of outlet assembly 5 is shown in Figs. 13, 14 and 15. Outlet assembly 5, as shown in Figs. 7 and 8, includes a projecting thin septum 20, which may pose difficulties in fabrication. In the alternate embodiment of Figs. 13 through 15 the outlet assembly 5 is fabricated of connecting part 242 and nozzle member 252, and the use of a thin projecting septum is eliminated.
Connecting part 242 includes a housing arrangement similar to that of part 5 which engages projecting outlets 12 of the refill cartridge and includes sleeves 241 which have internal passages 254 and 256. A cylindrical extension 244 of connecting part 242 includes an interior septum 262 extending to the forward end thereof and separating internal outlet passages 258 and 260 which are respectively connected to inlet passages 254 and 256.
A separate nozzle member 252 is arranged to snap fit over cylindrical extension 244 of connecting part 242. To facilitate the snap fit in an appropriate rotational orientation, cylindrical extension 244 is provided with an engaging rib 246 and triangular shaped locating protrusions 250. Nozzle member 252 has a recess portion 264 with an interior rib 266, shown in Fig. 14, which is engaged by rib 246. As shown in Fig. 15, interior rib 266 only extends partially around the periphery of nozzle member 252, whereby gaps are formed to receive triangular locating ridge 250 to assure appropriate angular orientation of nozzle member 252 when it is fitted over extension 244.
Nozzle member 252 includes a nozzle portion 269, which is circular in cross-section and includes septum 270 which bifurcates nozzle 269 into channels 272 and 274. Septum 270 is preferably tapered and textured as described above and extends to the outlet opening of nozzle member 252. The interior ends of channels 272 and 274 within recess 264 include projecting ribs 276 which form grooves 280 and 278 for receiving respectively the edges of the peripheral walls of extension 244 and septum 262. Tapered ridges 268 on the interior wall of recess 264 are arranged to press the peripheral edges of extension 244 of member 242 into close fit with ridges 276. In an exemplary embodiment four such tapered ridges are provided at equal spacing around recess 264.
Nozzle member 252 is provided with a snap fit cap pivotally mounted thereto having a configuration similar to cap 34.
When assembled, the peripheral edges of channels 258 and 260 are guided into the proper orientation of grooves 280 and 278 by triangular ridges 250 acting in conjunction with interior ridges 266. When fully inserted, rib 266 snaps behind rib 246 and the forward edges of extension 244 are pressed close to projections 276 by ribs 268, forming a close fit between outlet passages 258 and 260 of connecting part 242 and passages 272 and 274 of nozzle member 252.
Abbeywood Community School is committed to raising the standards of numeracy of all its students, so that they develop the ability to use numeracy skills in all areas of the curriculum and the skills necessary to cope confidently with the demands of further education, employment and adult life.
Numeracy is a proficiency which is developed mainly in mathematics but also in other subjects. It is more than an ability to do basic arithmetic. It involves developing confidence and competence with numbers and measures. It requires understanding of the number system, a repertoire of mathematical techniques, and an inclination and ability to solve quantitative or spatial problems in a range of contexts. Numeracy also demands an understanding of the ways in which data are gathered by counting and measuring, and presented in graphs, diagrams, charts and tables.
A key part of numeracy at Abbeywood Community School is making provision for those students who have not reached the expected level for primary students, implementing strategies to enable them to catch up. Not only do we aim to help those students catch up, we also aim to extend the high-ability students with problem-solving questions, with one eye on their future involvement in the UKMT, which is highly regarded here and nationally.
Day & Night Kindergarten Activities
Introducing kindergarten students to day and night typically involves providing students with opportunities to explore the difference between daytime and nighttime and discussing, in simple terms, the reasons it is sunny during the day and dark at night. As part of this teaching unit, schedule a trip to a local planetarium to provide students with more information about day and night.
1 Day and Night Sky
Ask your students to share their thoughts on how the sky looks different during the daytime and nighttime. Discuss how the Sun is seen during the day, as well as clouds and a typically blue sky and how, at night, the Moon and stars are often seen and the sky looks black. Give students the chance to create their own night and day sky by giving them two sheets of construction paper, blue for the day sky and black for the night sky, and having them illustrate each paper to reflect the daytime and nighttime sky.
2 Day and Night Activities
Explain to your class that there are different activities you do during the day and at night to help them differentiate between the two. Discuss how you go to school during the day and go to bed at night and allow students to share their own ideas. Talk about how students know it is daytime or nighttime by looking at the sky. Provide students with the opportunity to write or draw about an activity they only do during the day and one they only do at night. Once the class has completed their work, invite students to share their activities.
3 Spinning Earth
Introduce students to the concept of the Earth's rotation through experimentation, giving them the chance to act as the Earth. Turn off your classroom lights and turn on a flashlight to use as the Sun, pointing it toward your class. Explain to students that their face is where they live and that it is daytime where they live because they can see the Sun. Ask students how they can make it nighttime without turning off the flashlight. After students have discovered they must turn around, discuss how the Earth rotates just as they did, making it day or night.
4 Sun and Moon Journal
Have students keep a Sun and Moon journal to track how the Sun changes position throughout the day and how the Moon looks different throughout the month. Keep your Sun journal at school and have students track the Sun's moving position through shadow length and direct observation. Allow students to take their Moon journal home and encourage them to draw what the Moon looks like each night before bed. After a full Moon cycle, ask students to bring their journals back to school and discuss the difference between the Sun and Moon journals, and your students' discoveries. | https://classroom.synonym.com/day-night-kindergarten-activities-7836804.html |
Over 1,500 years ago, the Gupta emperors ruled large parts of India. They helped consolidate the nation, but they also popularized India’s caste system, making it socially unacceptable for people to marry outside their castes. This ancient history, hinted at in various linguistic, archaeological, and genetic studies, has now been confirmed by a recently published path-breaking genetic study.
India’s present diverse population arose from five types of ancient populations that freely mixed and interbred for thousands of years.
Based on similar earlier studies, it was believed that Indian ancestors came from only two populations: Indo-European (ANI) and Dravidian (ASI). This study, however, provides evidence that four ancestral stocks contributed to the genetic diversity of present-day Indians: Indo-European (ANI), Dravidian (ASI), Tibeto-Burman (north-east India), and Austro-Asiatic (fragmented across east and central India, and spoken exclusively by tribal communities). The scientists also identified a fifth ancestral lineage that is dominant among the Negrito tribes (Jarawa and Onge) of the Andaman and Nicobar Islands.
The study also unearthed the deep imprint of a significant socio-cultural process in Indian society. It found that interbreeding between communities ‘abruptly’ ended around 70 generations ago, which translates to about 1,575 years ago, sometime in the 6th century. This coincided with the period when the Gupta Empire ruled India, a period that saw the consolidation and supremacy of the caste system, entrenched through the sanction of scriptures as well as the enforcing mechanisms of the rulers.
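The generations-to-years conversion in the passage above can be checked with simple arithmetic. Here is a minimal sketch in Python, assuming roughly 22.5 years per generation (the value implied by the article's own figures; the constant and function name are ours, not the study's):

```python
# Convert a generation count into approximate calendar years,
# assuming ~22.5 years per generation (implied by 70 generations
# mapping to about 1,575 years in the study).
YEARS_PER_GENERATION = 22.5

def generations_to_years(generations: float) -> float:
    """Approximate elapsed time in years for a number of generations."""
    return generations * YEARS_PER_GENERATION

print(generations_to_years(70))  # 1575.0 -- about 1,575 years ago
```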
The reign of the ardent Gupta rulers, known as the age of Vedic Brahminism, was marked by strictures laid down in Dharmasastra—the ancient compendium of moral laws and principles for religious duty and righteous conduct to be followed by a Hindu—and enforced through the powerful state machinery of a developing political economy.
Genetic analysis also revealed that in some regions, such as Bengal and Maharashtra, interbreeding across caste lines continued for some time. The establishment of endogamy among tribal populations was less uniform. In the case of West Bengal Brahmins, marriages with the northeastern communities continued until the arrival of the 8th-century Pala dynasty, which cut off these regions.
By identifying five ancestral populations among contemporary Indians, the researchers have revealed that Indians today are more genetically diverse than we’ve realized. But they have also shown that social shifts can dramatically affect a nation’s genomes. The caste system has consequences that affect people all the way down to their DNA.
The caste system originated in Vedic times, perhaps 1500 BCE or earlier. It must have slowly spread and become entrenched over centuries. Its impact on genetic material becomes evident around 1,600 years ago.
So ironic that the caste system began in “Vedic times”… Hinduism in its pure Vedanta form is clearly contrary. | http://www.mysteryofindia.com/2016/03/caste-system-originated-gupta-dynasty-study.html
Aditi hits her head against the wall if someone says no to her. Rhea gets upset whenever she hears the doorbell; she covers her ears and screams. Ishaan refuses to wear shoes and throws them off the minute you’ve put them on. These are some common situations that parents of children with Autism may find themselves in. Sometimes, traits we see in children with Autism can lead to behaviours that we may find difficult to understand or manage.
Definition of Challenging Behaviours
While challenging behaviours might be defined differently by different people, we can broadly define them as behaviours that are harmful (to the individual or others) or destructive, that cause others to label or isolate the individual for being odd or different, and that prevent an individual from learning and participating in aspects of community life.
These challenging behaviours can occur at any point in an individual’s life. With therapy and intervention, children with autism learn to better manage and adapt their symptoms; however certain changes or situations can result in new challenging behaviours.
Concerns over these Challenging Behaviours
Most children with Autism will display challenging behaviours at some point in time, and without the right knowledge and guidance, parents might find these behaviours unmanageable and experience considerable concern and stress. These behaviours might also cause safety concerns, harm, or damage. Parents, while feeling helpless, might also feel responsible for causing these behaviours, but it is important to know that you are not responsible for them.
If these behaviours aren’t addressed correctly, they can become a larger safety concern as the child grows older and physically stronger, and can lead to crises. As children become physically stronger and able to overpower their caregivers, they might use aggression as a way to avoid certain situations. Certain behaviours might also become more difficult to manage as the child hits puberty.
What causes these Challenging Behaviours
It is important to note that Autism by itself doesn’t cause difficult behaviours. However, the core characteristics of Autism present challenges in communication. Being unable to express one's feelings and needs can lead to frustration, anxiety, confusion, and a lack of control, which result in these difficult behaviours. All behaviour is communication; children with autism often, without knowing it, voice their concerns, stress, and frustration through behaviours instead of words.
A child's past experiences shape their responses and this can cause challenging behaviours to occur repeatedly. If a child has learned that screaming and hitting gets them out of a difficult task, they are more likely to repeat this behaviour the next time a similar situation crops up.
Children with autism learn differently, as they may not have yet developed the skills and abilities that children generally use. Therefore, usual methods of behaviour correction may not work with children with autism. Proper knowledge and guidance can help parents understand and manage these behaviours more effectively and also help children with autism and their caregivers to feel safe and supported at all times while living a life with purpose and dignity.
Common Challenging Behaviours
Although challenging behaviours can vary from individual to individual, below are some of the most common ones seen among children with Autism. Some might occur more frequently, while others are less common. The intensity with which these behaviours occur will also vary according to the environment and may change over time. However, knowing these terms and being able to use them when describing behaviours to a professional can help parents.
Disruption
Occurs when an individual exhibits inappropriate behaviours that interfere with the function and flow of their surroundings. Examples include interrupting a classroom or a parent’s ability to make a meal. Behaviours might include banging, kicking, throwing objects, tearing things, or yelling.
Elopement
Refers to running away and not coming back to where the person started. In autism, it is used to describe a situation where a person leaves a safe space, a caregiver, or a supervised situation by wandering, sneaking, or running away.
Non-compliance
Is when an individual refuses to follow the directions, rules, or wishes of someone else. Although it can sometimes be purposeful, it can also be caused by a lack of understanding or motivation, by fatigue, or by poor motor coordination.
Obsessions, compulsions and rituals
Are described as strong, irresistible urges that can interfere with a person’s ability to cooperate, manage change, be flexible, or adjust.
Physical Aggression
Is using force that may cause harm or injury to another person and might include biting, kicking, pulling hair, scratching or throwing things.
Property Destruction
Involves behaviour where property or belongings are harmed, ruined or destroyed.
Self-injury
Is an attempt or act of causing harm to oneself that is severe enough to cause damage. This can happen through headbanging, hitting, pinching or biting oneself, wound picking or other forms of self-harm.
Tantrums or meltdowns
Describe an emotional outburst that might involve crying, yelling, screaming, or defiant behaviour. The person might have difficulty calming down even after the desired outcome has been achieved.
It is important to understand why the behaviour occurs and what purpose it serves, what we can call the “function” of the behaviour. Generally it can serve one of the following functions.
- Obtaining a desired object or outcome
- Escaping a task or situation
- Getting attention
- Trying to self-calm or feel good
- Responding to pain or discomfort
- Attempting to gain control of a situation
To address behavioural concerns, it is often helpful to understand what happens before and after a certain behaviour, or to change the situation or environment. And since behaviour is, after all, a form of communication, it is important to teach the child more adaptive and appropriate ways of communicating their needs and expressing their feelings. Without timely and proper intervention, challenging behaviours can get worse over time, so it is important to address them at the earliest opportunity. With the right guidance and intervention, children can develop the skills and tools to express themselves effectively.
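To make the "what happens before and after" observation concrete, here is a minimal sketch of how incidents might be logged in an antecedent-behaviour-consequence (ABC) style. All field names and example entries are illustrative assumptions, not part of any specific programme:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ABCRecord:
    """One observed incident: antecedent, behaviour, consequence."""
    antecedent: str          # what happened just before
    behaviour: str           # what the child did
    consequence: str         # what happened right after
    suspected_function: str  # e.g. "escape", "attention", "sensory"

log = [
    ABCRecord("asked to put on shoes", "threw shoes off", "task was dropped", "escape"),
    ABCRecord("doorbell rang", "covered ears and screamed", "moved to a quiet room", "sensory"),
    ABCRecord("asked to put on shoes", "screamed", "task was dropped", "escape"),
]

# Repeated patterns across incidents hint at what the behaviour
# is communicating, i.e. its likely "function".
counts = Counter(record.suspected_function for record in log)
print(counts.most_common())  # [('escape', 2), ('sensory', 1)]
```

A professional would look for the function that recurs most often and then teach an appropriate replacement way to meet that need.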
If you are concerned about your child’s behaviour and need help, do not hesitate to reach out to our Specialists at KinderPass.
Resource: Challenging Behaviours Toolkit By Autism Speaks
Loved this article?
Still got questions? | https://www.mykinderpass.com/parenting-tips/article/556/Challenging-Behaviours-In-Children-With-Autism |
Background: Due to the limited force feedback provided by laparoscopic instruments, surgeons may have difficulty in applying the appropriate force on the tissue. The aim of this study was to determine the influence of force feedback and visual feedback on the exerted pinch force.
Methods: A grasper with a force sensor in the jaws was developed. Subjects with and without laparoscopic experience grasped and pulled pig bowel with a force of 5 N. The applied pinch force was measured during tasks of 1-s and 1-min duration. Visual feedback was provided in half the measurements. Force feedback was adjusted by changing the mechanical efficiency of the forceps from 30% to 90%.
Results: The mean pinch force applied was 6.8 N (±0.5), whereas the force required to prevent slippage was 3.0 N (±0.4). Improving the mechanical efficiency had no effect on the pinch force in the 1-s measurements. The amount of excessive pinch force when holding tissue for 1 min was lower at 30% mechanical efficiency than at 90% (105% vs 131%, p = 0.04). The tissue slipped more often when the subject had no visual feedback (2% vs 8%, p = 0.02).
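For illustration, an "excess pinch force" figure like those in the results can be computed from the mean values reported. This is a sketch under the assumption that excess force is expressed relative to the minimum force needed to prevent slippage; the function name is ours, not the study's:

```python
def excess_pinch_percent(applied_n: float, slip_threshold_n: float) -> float:
    """Force applied beyond the minimum needed to hold the tissue,
    expressed as a percentage of that minimum."""
    return (applied_n - slip_threshold_n) / slip_threshold_n * 100.0

# Mean values reported: 6.8 N applied vs 3.0 N needed to prevent slippage.
print(round(excess_pinch_percent(6.8, 3.0), 1))  # 126.7
```

The study's 105% and 131% figures correspond to the same kind of ratio measured separately at the two mechanical-efficiency settings.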
Conclusion: Force feedback and visual feedback play a more limited role than expected in the task of grasping tissue with laparoscopic forceps. | https://pubmed.ncbi.nlm.nih.gov/15108104/ |
Papercut Artworks by Dmytro & Iuliia
Dmytro and Iuliia from Kiev, Ukraine create beautiful handmade papercut items. Their hand-cut paper silhouettes are each made from a single sheet of paper in which every piece is interconnected. An artwork takes them several hours, and sometimes several days, to complete.
Unique Hand Cut Paper Art by Lisa Rodden
Lisa Rodden, originally from Sydney and now based on the Sunshine Coast, cuts, slices, and folds thick layers of white paper on top of acrylic paintings that are occasionally accompanied by text. She has been creating since she could hold a pencil. Her work is…
Paper-cut Animals & Birds Silhouettes
Paper artist Joe Bagley has inspired us with his creative paper-cut bicycles and paper-cut world maps in the past; in this post I have compiled his paper-cut silhouettes of animals and bugs. The detail and finishing in the work is outstanding. Have a look.
Amazing Paper-cut Bicycles By Joe Bagley
I showcased Joe Bagley's paper-cut world map silhouettes in the past, and today it's time to get inspiration from the paper-cut bicycles he has made.
Lori W. Gordon is a senior project engineer and technology strategist at The Aerospace Corporation specializing in cyber and physical infrastructure protection. In the course of her career she has led efforts to coordinate national strategies and initiatives to enhance the resilience of U.S. critical infrastructure, mitigate national security and civil space enterprise risk, and accelerate technology innovation in complex systems. She has advised U.S. and international standards development organizations and academic curriculum boards on autonomous systems, cybersecurity, and the next generation workforce. Gordon has a master’s in public administration from the University of Massachusetts and a bachelor’s in geography from the University of Maryland. She is a Partner with The Aerospace Corporation’s Center for Space Policy and Strategy and is a Visiting Fellow at the National Security Institute.
Ian Canning
Ian Canning joined OneWeb Technologies as Chief Operating Officer in January 2012.
Mr. Canning has more than 30 years’ experience in the global satellite communications and telecommunications industries. In this position, he is responsible for the day-to-day operations of the Company, along with leading the development of the innovative solutions OneWeb Technologies brings to market.
Prior to joining OneWeb Technologies, Mr. Canning held many senior management positions for Stratos Global Corp., the leading global provider of advanced mobile and fixed-site remote communications solutions. He was responsible for Stratos’ global product and marketing portfolio, generating more than $700 million from advanced remote communications solutions, including the Stratos Advantage range of value-added services.
Positions held during his tenure at Stratos included Vice President, Global Product Marketing (2010-11), Vice President, Marketing and Product Management (2007-10), Stratos’ Managing Director, EMEA and Vice President, Sales (2004-07), and Director, EMEA and Asia (1999-2001).
In addition, Mr. Canning’s experience includes posts with other leading players in the commercial satellite communications industry. From 2001-03, Mr. Canning served as Director, Business Development EMEA and later as Director, Sales EMEA for Iridium Communications Inc. From 1995-99, Mr. Canning served as Manager, Partnership Program for Inmarsat. From 1983-99, he held senior sales positions with a variety of electronics and telecommunications companies, including Nortel Networks, Motorola Codex, and Racal Datacom.
Mr. Canning earned an MBA from London’s Greenwich School of Management.
John Gedmark
John Gedmark is CEO and Co-Founder of Astranis. Astranis builds and operates small, low-cost telecommunications satellites with the mission to bring the world online. The company has raised over $350 million to date, and has a team of over 250 based out of their San Francisco headquarters.
John co-founded and served as Executive Director of the Commercial Spaceflight Federation, the industry association for commercial space companies such as SpaceX, Blue Origin, and Virgin Galactic. As Executive Director, John reported to a CEO-level board of directors and led the commercial space industry’s efforts to privatize flights of NASA’s astronauts to low Earth orbit — in February 2010 President Obama announced the historic decision to use commercial space transportation, a landmark change worth more than $10 billion to the commercial space industry. Prior to that, John served as the Director of Rocket Flight Operations for the X Prize Foundation, responsible for operations of rocket launches in front of a crowd of 20,000 people, including the first-ever public flight of a Vertical Take-off Vertical Landing (VTVL) rocket. John holds a Bachelor of Science degree from Purdue University and a Master of Science degree from Stanford University, both in Aerospace Engineering, with a focus on rocket propulsion.
Richard Hadsall
Richard Hadsall is one of that rare breed of technologists who is also a successful company founder and leader. Crescomm Transmission Services, launched in 1976, was his first venture, which evolved in 1981 into Maritime Telecommunications Network or MTN. Five years later, Richard developed a technology that would forever transform communications at sea: the motion-stabilized VSAT antenna, which could maintain its lock on a spacecraft 22,000 miles away while a ship pitched and rolled underneath it. Under his technology leadership, MTN pioneered a unique business model, in which the company became the communications partner of its government and cruise line customers, and introduced a series of passenger and crew services that generated revenue shared by the cruise line and MTN. Success with cruise lines allowed the company to expand into other maritime markets including ferries, private yachts, oil & gas vessels and commercial ships. This ultimately led to its acquisition, in 2015, by EMC.
Though he is known as the “father of maritime VSAT,” stabilizing an antenna was only one of Richard’s many technology “firsts.” He pioneered the use of C- and Ku-band broadband at sea for delivering voice, Internet and video. His work enabled the first live broadcast from a nuclear submarine for ABC’s “Good Morning America,” and a live uplink from a moving Amtrak train for the program’s week-long “Whistle Stop” coverage of the 2008 Presidential
election. In 2011, he became one of the few satellite engineers to receive an Emmy Award for retrofitting a Ford F350 pickup into the “Bloom-Mobile,” a satellite-based mobile communications platform that allowed the late NBC reporter David Bloom to broadcast live coverage of the War in Iraq while moving across the Iraqi desert at speeds up to 50 mph.
When asked about his long and entrepreneurial career in the industry, Richard said, “Having the opportunity to pioneer the merging of satellite and communications technology more than three decades ago has led to a very satisfying and productive career.”
Walter Moffitt is Chief Architect of Inmarsat Government Inc., a wholly owned subsidiary of Inmarsat Group Holdings Limited, and the world’s leading provider of global mobile satellite communications to the United States Government.
Mr. Moffitt provides the Joint Force user perspective to Inmarsat Government, helping to define the way ahead for satellite communications. He is responsible for ensuring that the future satellite architecture Inmarsat Government is creating is in line with government user needs.
Prior to this role, Mr. Moffitt served in the United States Air Force as a tactical radio and satellite communications maintenance technician and operator for the Joint Communications Unit for 13 years. He finished government service as a career civil servant supporting the Joint Special Operations Command (JSOC) as the senior development chief responsible for RF communications development and innovation for tactical line-of-sight radios, data links, and satellite communications systems and architectures over a period of 22 years.
Mr. Moffitt holds a Bachelor of Science degree from Methodist College.
Melanie Preisser
Melanie Preisser is Vice President of National Systems for York Space Systems in Washington, D.C. Her responsibilities focus on building relationships and positive interactions with agencies and key officials in the U.S. Government and among U.S. and international customers and partners. Ms. Preisser has been a crucial figure in building York’s national defense business unit — scaled from a purely commercial segment capability and now executing on numerous defense and civil missions for the Space Force, the Space Development Agency, and lunar endeavors. Melanie and her team are committed to changing the way industry thinks about space and to making space more accessible, affordable, and faster.
Ms. Preisser brings over 25 years of diverse corporate and government experience in strategy and business development, acquisitions, systems engineering, satellite operations, and test and evaluation of space-based and airborne systems. She began her career as an acquisitions officer and project engineer for the United States Air Force and retired from the Air Force in 2014. In 2021, Ms. Preisser was selected as one of the “Top 30 Space Execs to Watch” by the WashingtonExec Magazine.
Prior to joining York, Ms. Preisser was Vice President of Government Relations for Stratolaunch Systems Corporation. She previously served with the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics at the Pentagon in Washington, D.C., where she was responsible for acquisition oversight and coordination of major strategic, space and intelligence programs.
Ms. Preisser holds a bachelor’s degree in Electrical Engineering and master’s degrees in Systems Engineering and Business Administration. | https://2022.milsatshow.com/sessions/strategic-benefits-and-new-developments-in-mobile-satcom-for-military-end-users/ |
Fred is one of Britain’s most respected oil painters who responds to mood and atmosphere generated by the landscape.
His usual method of working involves making a collection of drawings, sketches, and paintings, which he then uses as inspiration.
Collection of tutorials Visual Storytelling Part 1,2,3,4 - Iain McCaig
| 2.54 Gb
Creature Design with Terryl Whitlatch Vol. 4: Toad-ogre Creature Concept and Story
English | Video: vp6f, yuv420p, 853x480, 2706 kb/s | Audio: mp3, 22050 Hz, s16 | 1.25 GB
Genre: Video training
In this fourth DVD of a series, Terryl Whitlatch demonstrates how to create a monstrous villain, the Toad-Ogre, as well as other assorted creature characters suitable for film, animation and video games. This is a first-hand look at how creatures ar
Creature Design with Terryl Whitlatch Vol. 2: Avian Creature, "White Fright" | http://filesfeed.com/art-drawing-painting/ |